RGBD-Net: Predicting Color and Depth Images for Novel Views Synthesis
Nguyen, Phong; Karnewar, Animesh; Huynh, Lam; Rahtu, Esa; Matas, Jiri; Heikkilä, Janne (2021)
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202211078232
Description
Peer reviewed
Abstract
We propose a new cascaded architecture for novel view synthesis, called RGBD-Net, which consists of two core components: a hierarchical depth regression network and a depth-aware generator network. The former predicts depth maps of the target views using adaptive depth scaling, while the latter leverages the predicted depths to render spatially and temporally consistent target images. In the experimental evaluation on standard datasets, RGBD-Net not only outperforms the state of the art by a clear margin, but also generalizes well to new scenes without per-scene optimization. Moreover, we show that RGBD-Net can optionally be trained without depth supervision while still retaining high-quality rendering. Thanks to the depth regression network, RGBD-Net can also be used to create dense 3D point clouds that are more accurate than those produced by some state-of-the-art multi-view stereo methods.
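The cascade described in the abstract can be pictured as two stages: first regress a target-view depth map coarse-to-fine (narrowing the depth search range at each level, in the spirit of the paper's adaptive depth scaling), then render the target color image conditioned on that depth. The sketch below is purely illustrative, assuming simple placeholder functions in place of the actual networks; all names (`predict_depth`, `render_target`) and the toy refinement logic are hypothetical and not taken from the authors' code.

```python
import numpy as np

def predict_depth(source_views, depth_min=1.0, depth_max=10.0, levels=4):
    """Stand-in for the hierarchical depth regression network.

    Coarse-to-fine: each level halves the depth search interval around the
    current estimate, a toy analogue of adaptive depth scaling. A real
    implementation would refine depth from a learned cost volume per level.
    """
    h, w = source_views[0].shape[:2]
    lo, hi = depth_min, depth_max
    depth = np.full((h, w), (lo + hi) / 2.0)
    for _ in range(levels):
        # Shrink the search range around the current mean estimate.
        span = (hi - lo) / 2.0
        mid = depth.mean()
        lo, hi = mid - span / 2.0, mid + span / 2.0
        depth = np.full((h, w), (lo + hi) / 2.0)
    return depth

def render_target(source_views, depth):
    """Stand-in for the depth-aware generator.

    The real generator warps source features using the predicted depth and
    synthesizes the target image; here we just blend the sources uniformly.
    """
    return np.mean(np.stack(source_views), axis=0)

# Toy usage: two 4x4 RGB "source views" of a scene.
views = [np.ones((4, 4, 3)) * 0.2, np.ones((4, 4, 3)) * 0.6]
depth = predict_depth(views)
image = render_target(views, depth)
print(depth.shape, image.shape)  # (4, 4) (4, 4, 3)
```

The point of the sketch is only the data flow: the depth stage runs first, and its output is an explicit input to the rendering stage, which is also what allows the depth branch to be reused on its own for point-cloud generation.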
Collections
- TUNICRIS-julkaisut [20173]