Monocular Depth Estimation Primed by Salient Point Detection and Normalized Hessian Loss
Huynh, Lam; Pedone, Matteo; Nguyen, P.; Matas, Jiri; Rahtu, Esa; Heikkilä, Janne (2021)
IEEE
2021
This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202211078234
Description
Peer reviewed
Abstract
Deep neural networks have recently thrived on single image depth estimation. That being said, current developments on this topic highlight an apparent compromise between accuracy and network size. This work proposes an accurate and lightweight framework for monocular depth estimation based on a self-attention mechanism stemming from salient point detection. Specifically, we utilize a sparse set of keypoints to train a FuSaNet model that consists of two major components: Fusion-Net and Saliency-Net. In addition, we introduce a normalized Hessian loss term invariant to scaling and shear along the depth direction, which is shown to substantially improve the accuracy. The proposed method achieves state-of-the-art results on NYU-Depth-v2 and KITTI while using a model 3.1-38.4 times smaller, in terms of the number of parameters, than baseline approaches. Experiments on the SUN-RGBD dataset further demonstrate the generalizability of the proposed method.
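The invariance claimed for the normalized Hessian loss can be illustrated with a short sketch: second-order derivatives of a depth map cancel any linear (shear) term added along the depth direction, and normalizing the Hessian by its per-pixel magnitude removes global depth scaling. This is an illustrative reconstruction based only on the properties stated in the abstract, not the paper's exact formulation; the function names, finite-difference stencils, and per-pixel normalization are assumptions.

```python
import numpy as np

def hessian_maps(d):
    """Second-order finite differences of a depth map d of shape (H, W)."""
    dxx = d[:, 2:] - 2 * d[:, 1:-1] + d[:, :-2]
    dyy = d[2:, :] - 2 * d[1:-1, :] + d[:-2, :]
    dxy = (d[2:, 2:] - d[2:, :-2] - d[:-2, 2:] + d[:-2, :-2]) / 4.0
    # crop all three maps to a common (H-2, W-2) interior so shapes match
    return dxx[1:-1, :], dyy[:, 1:-1], dxy

def normalized_hessian_loss(pred, gt, eps=1e-8):
    """Hypothetical scale/shear-invariant loss between two depth maps."""
    hp = np.stack(hessian_maps(pred))  # shape (3, H-2, W-2)
    hg = np.stack(hessian_maps(gt))
    # Per-pixel normalization removes a global depth scaling a*d; the second
    # derivatives themselves already annihilate shear terms b*x + c*y + e.
    hp = hp / (np.linalg.norm(hp, axis=0, keepdims=True) + eps)
    hg = hg / (np.linalg.norm(hg, axis=0, keepdims=True) + eps)
    return np.abs(hp - hg).mean()
```

Under this sketch, replacing the ground-truth depth `gt` with `a*gt + b*x + c*y + e` leaves the loss (up to the `eps` stabilizer) unchanged, which is the invariance property the abstract attributes to the loss term.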
Collections
- TUNICRIS-julkaisut [15314]