On the asymmetric view+depth 3D scene representation
Georgiev, Mihail; Gotchev, Atanas (2016-02-16)
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tty-201708111671
Abstract
In this work we promote the asymmetric view + depth representation as an efficient representation of 3D visual scenes. It was recently proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering: a super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing spatially adaptive filter is used to reconstruct it back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. In this work we modify the approach to represent 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors with different spatial resolutions. We specifically quantify its performance for the case of a low-resolution range sensor operating in a low-sensing mode that generates images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a high-quality depth map from a very low-resolution input range image.
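The two-stage pipeline described in the abstract, sparsifying the depth map with guidance from the color image and then reconstructing it at full resolution with a spatially adaptive filter, can be illustrated with a minimal NumPy sketch. This sketch is not the authors' implementation: it substitutes a regular grid of depth samples for the super-pixel segmentation and a color-guided joint bilateral filter for the regularizing reconstruction filter, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def sparsify_depth(depth, block=8):
    """Keep one depth sample per block (a stand-in for one sample per super-pixel)."""
    return depth[block // 2::block, block // 2::block]

def joint_bilateral_upsample(sparse, color, block=8, sigma_s=4.0, sigma_c=0.1, radius=2):
    """Reconstruct full-resolution depth from sparse samples, weighting each
    sample by its spatial distance and by color similarity in the guide image."""
    h, w = color.shape[:2]
    sh, sw = sparse.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            cy, cx = y // block, x // block  # nearest sample on the coarse grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy, sx = cy + dy, cx + dx
                    if 0 <= sy < sh and 0 <= sx < sw:
                        # Full-resolution position of this sparse sample.
                        py, px = sy * block + block // 2, sx * block + block // 2
                        ds = ((y - py) ** 2 + (x - px) ** 2) / (2 * sigma_s ** 2)
                        dc = np.sum((color[y, x] - color[py, px]) ** 2) / (2 * sigma_c ** 2)
                        wgt = np.exp(-ds - dc)
                        num += wgt * sparse[sy, sx]
                        den += wgt
            out[y, x] = num / den
    return out

# Synthetic scene: a depth discontinuity aligned with a color edge.
depth = np.ones((32, 32)); depth[:, 16:] = 2.0
color = np.zeros((32, 32, 3)); color[:, 16:] = 1.0
reconstructed = joint_bilateral_upsample(sparsify_depth(depth), color)
```

Because the color term suppresses samples from across the edge, the reconstruction preserves the depth discontinuity instead of blurring it, which is the key property a color-guided reconstruction filter provides over plain interpolation of the sparse samples.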
Collections
- TUNICRIS-julkaisut [18911]