Collaborative spatial streaming: real-time auto-calibrating system for multi-device dynamic 3D capture
Author(s)
Tyler Bell | University of Iowa
Abstract
Multi-device capture systems for volumetric 3D reconstruction are becoming increasingly common but typically require precise, pre-computed calibration of all devices in the system. This calibration process is critical but cumbersome, as any movement of a device necessitates a complete recalibration. Additionally, mobile devices are increasingly being used to capture 3D data, making the generation of 3D content more accessible. However, using these mobile devices for dynamic volumetric reconstruction is challenging due to their handheld nature and susceptibility to movement. To address these challenges, we introduce Collaborative Spatial Streaming, a novel platform that enables real-time auto-calibration for 3D data capture and transmission from multiple mobile devices (e.g., iPhones). Our platform’s flexible calibration is continuously estimated such that data from each local device can be wirelessly transmitted and integrated into a composite 3D reconstruction for each frame. The dynamic, composite 3D scene can then be viewed by remotely located immersive audiences (e.g., augmented reality [AR] and virtual reality [VR] headsets) and other devices (e.g., desktop computers, data recording and processing servers). Collaborative Spatial Streaming simplifies the setup of a multi-device 3D reconstruction system and delivers 3D reconstructions suitable for spatial computing devices, enabling advanced applications in telepresence, performance capture, remote robotics, and more.
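The abstract does not specify the platform's internal representation, but the per-frame integration step it describes — mapping each device's locally captured 3D data into a shared world frame using its continuously re-estimated pose — can be sketched as follows. This is a minimal illustration, assuming each device's calibration is expressed as a 4x4 rigid-body extrinsic matrix and each frame arrives as an Nx3 point cloud; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def apply_extrinsic(points, pose):
    """Map an Nx3 point cloud from a device's local frame into the shared
    world frame using a 4x4 rigid-body pose (rotation + translation)."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ pose.T)[:, :3]

def compose_frame(device_clouds, device_poses):
    """Merge one frame of per-device point clouds into a single composite
    cloud. `device_poses` holds the most recently estimated extrinsic for
    each device, so a moved (recalibrated) device is handled transparently."""
    return np.vstack([
        apply_extrinsic(cloud, device_poses[device_id])
        for device_id, cloud in device_clouds.items()
    ])
```

In an auto-calibrating system, the poses passed to `compose_frame` would be refreshed every frame by the calibration estimator rather than fixed at setup time, which is what removes the need for full recalibration when a handheld device moves.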
Description
Date and Location: 2/4/2025 | 10:50 AM - 11:10 AM | Grand Peninsula D
Session Chair: Takashi Kawai | Waseda University
Paper Number: SD&A-335