Keynotes

Min Wu

University of Maryland, USA

Visual Processing and Data Science for Physiological Forensics

Abstract

Many nearly invisible “micro-signals” have played important roles in media security and forensics, despite traditionally being regarded as interference or discarded as noise. These micro-signals are ubiquitous and typically an order of magnitude lower in strength or scale than the dominant signals. Physiological forensics has attracted growing attention in recent years; it exploits, for example, visual micro-signals such as the subtle changes in facial skin color that follow the pace of the heartbeat. Video-based analysis of this repeating change provides a contact-free way to capture the photoplethysmogram (PPG), from which we can infer a person’s heart rate, breathing, blood oxygen level, and other physiological conditions. This talk will review the connections in micro-signal analysis between physiological forensics and other media forensic research, and highlight the synergistic roles of signal processing, computer vision, data science, security and privacy, and biomedical insights.
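To make the micro-signal idea concrete, here is a minimal sketch, not the speaker’s specific pipeline, of contact-free heart-rate estimation from the green-channel fluctuation in a face video. The fixed face region of interest, the frame rate, and the simple bandpass-plus-spectral-peak analysis are all illustrative assumptions; a real system would track the face and use more robust rPPG extraction.

```python
# Minimal rPPG sketch (illustrative assumptions, not the speaker's method):
# average the green channel over a fixed face ROI per frame, bandpass to
# plausible pulse frequencies, and read the heart rate off the spectral peak.
import numpy as np
import cv2
from scipy.signal import butter, filtfilt, detrend

def estimate_heart_rate(video_path, roi, fps):
    """roi = (x, y, w, h): assumed face region; returns heart rate in BPM."""
    cap = cv2.VideoCapture(video_path)
    x, y, w, h = roi
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The pulse appears as a tiny periodic fluctuation of this spatial
        # mean of the green channel (index 1 in OpenCV's BGR ordering).
        samples.append(frame[y:y+h, x:x+w, 1].mean())
    cap.release()

    sig = detrend(np.asarray(samples))  # remove slow illumination drift
    # Keep only plausible pulse frequencies (42-240 BPM = 0.7-4.0 Hz).
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    sig = filtfilt(b, a, sig)

    # The dominant spectral peak in the pulse band gives the heart rate.
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```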

Speaker’s Bio

Min Wu is a Professor of Electrical and Computer Engineering and a Distinguished Scholar-Teacher at the University of Maryland, College Park, and Associate Dean of Engineering for Graduate Affairs. She received her undergraduate degrees from Tsinghua University, Beijing, China, in 1996 with the highest honors, and her Ph.D. degree in electrical engineering from Princeton University in 2001. At UMD, she leads the Media, Analytics, and Security Team (MAST), with main research interests in information security and forensics, multimedia signal processing, and applications of data science and machine learning to health and IoT. Dr. Wu is an elected Fellow of the IEEE, the AAAS, and the National Academy of Inventors. She was a founding member of APSIPA and was elected to its Board of Governors. She chaired the IEEE Technical Committee on Information Forensics and Security and served as Editor-in-Chief of the IEEE Signal Processing Magazine. Currently, she is President-Elect (2022-2023) of the IEEE Signal Processing Society.


Michael Elad

Technion – Israel Institute of Technology, Israel

Image Denoising – Not What You Think

Abstract

Image denoising – removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and now defines the state of the art in noise suppression in images. All this progress has led some researchers to believe that “denoising is dead”, in the sense that everything that can be achieved has already been done.

Exciting as this story might be, this talk IS NOT ABOUT IT!

Our story focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other tasks, such as regularizing general inverse problems and acting as the engine for image synthesis. We also unveil the (strange?) idea that denoising (and other inverse problems) might not have a unique solution, as common algorithms would have you believe. Instead, we will describe constructive ways to produce randomized and diverse results of high perceptual quality for inverse problems.
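As a hedged illustration of the first point, the sketch below shows the plug-and-play flavor of using a denoiser to regularize a generic inverse problem; this is one well-known instantiation of the idea, not necessarily the algorithm discussed in the talk. The operators `degrade`, `degrade_adjoint`, and `denoiser` are placeholders supplied by the reader.

```python
# Plug-and-play proximal gradient sketch (an assumption-laden illustration,
# not the speaker's specific algorithm): alternate a data-fidelity gradient
# step with a denoising step that plays the role of the image prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_restore(y, degrade, degrade_adjoint, denoiser, step=1.0, iters=50):
    """Recover x from y = A(x) + noise; `degrade` models the linear
    operator A, `degrade_adjoint` its adjoint A^T."""
    x = degrade_adjoint(y)  # crude initialization: A^T y
    for _ in range(iters):
        # Gradient step on the data term 0.5 * ||A(x) - y||^2 ...
        x = x - step * degrade_adjoint(degrade(x) - y)
        # ... then the denoiser acts as the proximal operator of an
        # implicit image prior.
        x = denoiser(x)
    return x

# Toy usage: deblurring, with the denoiser as the only prior.
blur = lambda img: gaussian_filter(img, sigma=2.0)  # symmetric, so self-adjoint
weak_denoise = lambda img: gaussian_filter(img, sigma=0.5)
# restored = pnp_restore(y, blur, blur, weak_denoise)
```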

Speaker’s Bio

Michael Elad holds a B.Sc. (1986), an M.Sc. (1988), and a D.Sc. (1997) in Electrical Engineering from the Technion in Israel. Since 2003 he has held a faculty position in the Computer Science Department at the Technion. Prof. Elad works in the field of signal and image processing, specializing in inverse problems, sparse representations, and deep learning. He has authored hundreds of publications in leading venues, many of which have had exceptional impact. Prof. Elad has served as an Associate Editor for IEEE-TIP, IEEE-TIT, ACHA, the SIAM Journal on Imaging Sciences (SIIMS), and IEEE-SPL. From 2016 to 2021, Prof. Elad served as the Editor-in-Chief of SIIMS.

Michael has received numerous teaching and research awards and grants, including an ERC Advanced Grant in 2013, the 2008 and 2015 Henri Taub Prizes for academic excellence, the 2010 Hershel Rich Prize for innovation, the 2018 IEEE SPS Technical Achievement Award for contributions to sparsity-based signal processing, the 2018 IEEE SPS Sustained Impact Paper Award for his K-SVD paper, and the 2018 SPS Best Paper Award for his paper on the Analysis K-SVD. Michael has been an IEEE Fellow since 2012 and a SIAM Fellow since 2018.


Matthias Nießner

Technical University of Munich, Germany

The Revolution of Neural Rendering

Abstract

In this talk, I will present our research vision of how to create a photo-realistic digital replica of the real world, and how to make holograms become a reality. Eventually, I would like to see photos and videos evolve into interactive, holographic content indistinguishable from the real world. Imagine taking such 3D photos to share with friends, family, or social media; the ability to fully record historical moments for future generations; or to provide content for upcoming augmented and virtual reality applications. AI-based approaches, such as generative neural networks, are becoming more and more popular in this context since they have the potential to transform existing image synthesis pipelines. I will specifically talk about an avenue towards neural rendering where we retain the full control of a traditional graphics pipeline while at the same time exploiting modern capabilities of deep learning, such as handling the imperfections of content from commodity 3D scans. While the capture and photo-realistic synthesis of imagery open up remarkable possibilities for applications ranging from the entertainment to the communication industries, there are also important ethical considerations that must be kept in mind. Specifically, in the context of fabricated news (e.g., fake news), it is critical to highlight and understand digitally manipulated content. I believe that media forensics plays an important role in this area, both from an academic standpoint, to better understand image and video manipulation, and, even more importantly, from a societal standpoint, to create and raise awareness of what is possible and to highlight potential avenues and solutions for establishing trust in digital content.
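As a hedged illustration of the hybrid “traditional pipeline plus learning” idea, the sketch below assumes a classical rasterizer that outputs screen-space feature maps (so viewpoint and geometry remain under explicit, traditional control) and a small CNN that translates those features into a photo-realistic image. The channel counts and network shape are illustrative, not the speaker’s published architecture.

```python
# Sketch of one instantiation of neural rendering (assumptions throughout):
# a conventional renderer supplies per-pixel feature maps, and a learned
# network handles only the final image formation.
import torch
import torch.nn as nn

class NeuralRenderer(nn.Module):
    def __init__(self, feat_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rasterized_features):
        # rasterized_features: (B, feat_channels, H, W) screen-space maps
        # produced by a classical rasterizer, so camera and geometry stay
        # fully controllable; the CNN only "translates" features to pixels.
        return self.net(rasterized_features)

# Toy usage with random stand-in features:
# renderer = NeuralRenderer()
# image = renderer(torch.randn(1, 16, 256, 256))  # -> (1, 3, 256, 256)
```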

Speaker’s Bio

Dr. Matthias Nießner is a Professor at the Technical University of Munich, where he leads the Visual Computing Lab. Previously, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner’s research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. In total, he has published over 70 academic papers, including 22 in the prestigious ACM Transactions on Graphics journal (SIGGRAPH / SIGGRAPH Asia) and 43 at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI’14, HPG’15, and SPG’18, as well as the SIGGRAPH’16 Emerging Technologies Award for the best live demo.

Prof. Nießner’s work enjoys wide media coverage, with articles featured in mainstream outlets including the New York Times, the Wall Street Journal, Spiegel, MIT Technology Review, and many more, and his work has led to several TV appearances, such as on Jimmy Kimmel Live, where Prof. Nießner demonstrated the popular Face2Face technique. Prof. Nießner’s academic YouTube channel currently has over 5 million views.

For his work, Prof. Nießner has received several awards: he is a TUM-IAS Rudolf Moessbauer Fellow (2017 – ongoing); he won the Google Faculty Award for Machine Perception (2017), the Nvidia Professor Partnership Award (2018), and the prestigious ERC Starting Grant 2018, which comes with 1,500,000 Euro in research funding; and in 2019 he received the Eurographics Young Researcher Award, honoring the best upcoming graphics researcher in Europe.

In addition to his academic impact, Prof. Nießner is a co-founder and director of Synthesia Inc., a startup backed by Mark Cuban, whose aim is to empower storytellers with cutting-edge AI-driven video synthesis.


Aljosa Smolic

Hochschule Luzern (HSLU), Switzerland

Volumetric Video Content Creation for Immersive XR Experiences

Abstract

Volumetric video (VV) is an emerging digital medium that enables novel forms of interaction and immersion within eXtended Reality (XR) applications. VV provides 3D representations of real-world scenes and objects that can be visualized from any viewpoint or viewing direction, an interaction paradigm commonly seen in computer games. This allows, for instance, real people to be brought into XR. Based on this innovative media format, it is possible to design new forms of immersive and interactive experiences that can be visualized via head-mounted displays (HMDs) in virtual reality (VR) or augmented reality (AR). The talk will highlight technology for VV content creation developed by the V-SENSE lab and the startup company Volograms. It will further showcase a variety of creative experiments applying VV to immersive storytelling in XR.

Speaker’s Bio

Dr. Aljosa Smolic is a lecturer in AR/VR in the Immersive Realities Research Lab of Hochschule Luzern (HSLU). Before joining HSLU, Dr. Smolic was the SFI Research Professor of Creative Technologies at Trinity College Dublin (TCD), Senior Research Scientist and Head of the Advanced Video Technology group at Disney Research Zurich, and Scientific Project Manager heading a research group at the Fraunhofer Heinrich-Hertz-Institut (HHI), Berlin. At Disney Research he led over 50 R&D projects in the area of visual computing that resulted in numerous publications and patents, as well as technology transfers to a range of Disney business units. Dr. Smolic served as Associate Editor of the IEEE Transactions on Image Processing and the Signal Processing: Image Communication journal, and was Guest Editor for the Proceedings of the IEEE, IEEE Transactions on CSVT, IEEE Signal Processing Magazine, and other scientific journals. His research group at TCD, V-SENSE, worked on visual computing, combining computer vision, computer graphics, and media technology to extend the dimensions of visual sensation. This includes immersive technologies such as AR, VR, volumetric video, 360/omnidirectional video, light fields, and VFX/animation, with a special focus on deep learning in visual computing. Dr. Smolic is also a co-founder of the start-up company Volograms, which commercializes volumetric video content creation. He received the IEEE ICME Star Innovator Award 2020 for his contributions to volumetric video content creation, as well as TCD’s Campus Company Founders Award 2020.