Times are displayed in (UTC-07:00) Pacific Time (US & Canada)
2/5/2025 | 3:30 PM - 5:30 PM | Regency B
Sparse view synthesis
Author(s)
Ravi Ramamoorthi | University of California, San Diego
Abstract
View synthesis, the creation of novel views of scenes from input images, is a long-standing problem in computer graphics and vision, and is key to immersive experiences, virtual and augmented reality, and 3D photography. We describe recent developments that have revolutionized the realism of image-based rendering and view synthesis, as well as recent advances seeking to dramatically reduce the number of images required, enabling sparse view synthesis. We start by discussing our award-winning works on Local Light Field Fusion and Neural Radiance Fields, which establish the basic theory and representations. We then describe recent efforts that push the boundaries of sparse sampling and input imagery, including real-time radiance fields for portraits from a single input image, Gaussian-splatted radiance fields from a very sparse set of uncalibrated images, and progress toward algorithms for single-image view synthesis on general objects and scenes. We will also discuss coupling these ideas with large language models to enable text-based synthesis of large-scale 3D scenes. Finally, we discuss extensions to specular materials, high-resolution GAN-based synthesis, and lifting 2D operators to 3D.
Sparse view synthesis
Description
Date and Location: 2/5/2025 | 04:10 PM - 04:30 PM | Regency B
Primary Session Chair:
Andre van Rynbach | Air Force Research Laboratory
Session Co-Chair: