Scene Reconstruction Using Event Cameras
Aytekin, Tulya (2025)
Master's Programme in Computing Sciences and Electrical Engineering
Faculty of Information Technology and Communication Sciences
Date of acceptance
2025-12-29
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-2025122612133
Abstract
Event cameras represent a paradigm shift in imaging, capturing scene dynamics asynchronously as a stream of brightness changes rather than as conventional static frames. While this architecture offers high temporal resolution and dynamic range, it lacks the absolute intensity information required for traditional visualization. Conversely, light field displays require dense, multi-view 3D representations of a scene, which are typically computationally expensive to capture using standard sensors alone. This thesis investigates the feasibility of reconstructing high-fidelity scenes for light field displays by fusing sparse event data with standard monochromatic intensity frames. A machine learning framework was developed to integrate these two modalities, leveraging the temporal precision of events to enhance angular reconstruction. Experimental results indicate that while the proposed method successfully synthesizes scene structure, the final reconstruction quality is currently constrained by the sparsity of the input data and by limitations of existing quality metrics. Despite these challenges, this study demonstrates that sensor fusion is a viable pathway toward bandwidth-efficient 3D visualization, providing a critical foundation for future applications in light field displays.
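The fusion idea summarized above rests on the standard event-camera generation model: each event signals that the log-brightness at a pixel has changed by a fixed contrast threshold, so events can be accumulated onto the log of an absolute intensity keyframe to estimate brightness at later instants. The following is a minimal illustrative sketch of that principle only, not the thesis's actual learning-based pipeline; the function name `fuse_events` and the threshold value `C` are assumptions chosen for the example.

```python
import numpy as np

# Illustrative contrast threshold in log-intensity units (assumed value;
# real event sensors have per-device, per-polarity thresholds).
C = 0.2

def fuse_events(base_frame, events, threshold=C):
    """Accumulate polarity events onto log(base_frame), then re-exponentiate.

    base_frame : 2D array of absolute intensities from a standard frame.
    events     : iterable of (x, y, polarity) with polarity in {+1, -1};
                 timestamps are omitted here for brevity.
    """
    log_img = np.log(base_frame.astype(np.float64) + 1e-6)  # avoid log(0)
    for x, y, pol in events:
        log_img[y, x] += threshold * pol  # each event = one threshold step
    return np.exp(log_img)

# Tiny usage example: a uniform 4x4 frame with two pixels that changed.
frame = np.full((4, 4), 0.5)
events = [(1, 2, +1), (1, 2, +1), (3, 0, -1)]  # two ON events, one OFF
out = fuse_events(frame, events)
```

In this sketch, pixel (x=1, y=2) receives two positive events and ends up brighter than the keyframe value, pixel (x=3, y=0) darker, and untouched pixels keep their original intensity, which is the behavior any event/frame fusion scheme must reproduce at minimum.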
