A General Framework for Depth Compression and Multi-Sensor Fusion in Asymmetric View-Plus-Depth 3D Representation
Georgiev, Mihail; Belyaev, Evgeny; Gotchev, Atanas (2020-01-01)
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202006266237
Description
Peer reviewed
Abstract
We present a general framework that handles different processing stages of the three-dimensional (3D) scene representation referred to as 'view-plus-depth' (V+Z). The main component of the framework is the relation between the depth map and the super-pixel segmentation of the color image. We propose a hierarchical super-pixel segmentation which preserves segment boundaries across hierarchy layers. Such segmentation allows for a corresponding depth segmentation, decimation and reconstruction at varying quality levels, and is instrumental in tasks such as depth compression and 3D data fusion. For the latter we utilize a cross-modality reconstruction filter which adapts to the size of the refining super-pixel segments. We propose a novel depth encoding scheme, which includes a dedicated arithmetic encoder and handles misalignment outliers. We demonstrate that our scheme is especially applicable to low bit-rate depth encoding and to fusing color and depth data, where the latter is noisy and of lower spatial resolution.
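The segment-wise decimation and reconstruction idea from the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' method: a regular-grid labeling stands in for the paper's hierarchical super-pixel segmentation (halving the cell size yields a finer layer whose boundaries contain the coarser layer's), and all function names are illustrative.

```python
import numpy as np

def grid_superpixels(h, w, cell):
    # Toy stand-in for a hierarchical segmentation: a regular grid of
    # cells. Using cell // 2 gives a finer layer whose boundaries
    # include all boundaries of the coarser layer.
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    ncols = (w + cell - 1) // cell
    return rows[:, None] * ncols + cols[None, :]

def decimate_depth(depth, labels):
    # Decimation: keep one representative depth value per segment.
    # The median is robust to the misalignment outliers the paper
    # mentions handling in its encoding scheme.
    return {int(l): float(np.median(depth[labels == l]))
            for l in np.unique(labels)}

def reconstruct_depth(labels, seg_depth):
    # Reconstruction: fill each segment with its stored depth value.
    out = np.zeros(labels.shape, dtype=float)
    for l, d in seg_depth.items():
        out[labels == l] = d
    return out

depth = np.tile(np.arange(8, dtype=float), (8, 1))  # toy depth ramp
labels = grid_superpixels(8, 8, 4)                  # coarse layer: 4 segments
rec = reconstruct_depth(labels, decimate_depth(depth, labels))
```

In the actual framework, the segments follow color-image edges, so the reconstructed depth stays sharp at object boundaries; a finer hierarchy layer simply stores more representative values per region, trading bit-rate for quality.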
Collections
- TUNICRIS-julkaisut [15239]