SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation
Michaël Ramamonjisoa
Vincent Lepetit
LIGM (UMR 8049), École des Ponts, UPE
ICCV 2019 Workshops
[Paper]
[GitHub]

Abstract

We introduce SharpNet, a method that predicts an accurate depth map given a single input color image, with particular attention to the reconstruction of occluding contours. Occluding contours are an important cue for object recognition and for the realistic integration of virtual objects in Augmented Reality, but they are also notoriously difficult to reconstruct accurately. For example, they are a challenge for stereo-based reconstruction methods, as points around an occluding contour are visible in only one of the two views. Inspired by recent methods that introduce normal estimation to improve depth prediction, we introduce novel terms that jointly constrain the predictions of depth, normals, and occluding contours. Since ground truth depth is difficult to obtain with pixel-perfect accuracy along occluding contours, we train on synthetic images and then fine-tune on real data. We demonstrate our approach on the challenging NYUv2-Depth dataset and show that our method outperforms the state of the art along occluding contours, while performing on par with the best recent methods on the rest of the images. Along occluding contours, its accuracy is in fact better than the "ground truth" acquired by a depth camera based on structured light. We show this by introducing a new benchmark based on NYUv2-Depth for evaluating occluding contours in monocular reconstruction, which is our second contribution.
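To make the consistency idea concrete, below is a minimal PyTorch sketch of one plausible consensus term: normals derived from the predicted depth map should agree with the separately predicted normal map. This is an illustration under simplifying assumptions (depth treated as a height field, camera intrinsics ignored), not SharpNet's exact formulation; the function names are hypothetical.

# Illustrative sketch of a depth-normal consensus term (not the authors'
# exact loss): normals computed from the predicted depth should agree with
# the network's predicted normal map.
import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    """Approximate surface normals from a depth map of shape (B, 1, H, W).

    Treats depth as a height field: n ~ (-dz/dx, -dz/dy, 1), then
    normalizes. A full formulation would account for camera intrinsics.
    """
    # Forward finite differences along x and y, padded back to (H, W).
    dzdx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1, 0, 0))
    dzdy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    ones = torch.ones_like(depth)
    n = torch.cat([-dzdx, -dzdy, ones], dim=1)  # (B, 3, H, W)
    return F.normalize(n, dim=1)

def depth_normal_consensus(depth_pred, normal_pred):
    """Mean (1 - cosine similarity) between depth-derived and predicted normals."""
    n_from_depth = normals_from_depth(depth_pred)
    cos = (n_from_depth * F.normalize(normal_pred, dim=1)).sum(dim=1)
    return (1.0 - cos).mean()

# Usage: depth_pred (B, 1, H, W) and normal_pred (B, 3, H, W) from the network.
depth_pred = torch.rand(2, 1, 64, 64, requires_grad=True)
normal_pred = torch.rand(2, 3, 64, 64)
loss = depth_normal_consensus(depth_pred, normal_pred)
loss.backward()

In the same spirit, here is a hedged sketch of contour-focused evaluation: restrict a standard depth metric to a thin band of pixels around annotated occluding contours. The band width, metric choice, and function names are illustrative assumptions; the actual benchmark protocol is defined in the paper.

# Illustrative sketch of evaluating depth accuracy near occluding contours.
import numpy as np
from scipy.ndimage import binary_dilation

def rmse_near_contours(depth_pred, depth_gt, contour_mask, band_px=5):
    """RMSE of predicted depth within band_px pixels of annotated contours.

    depth_pred, depth_gt: (H, W) float arrays in meters.
    contour_mask: (H, W) boolean array, True on annotated occluding contours.
    """
    band = binary_dilation(contour_mask, iterations=band_px)
    valid = band & (depth_gt > 0)  # ignore pixels with missing ground truth
    err = depth_pred[valid] - depth_gt[valid]
    return float(np.sqrt(np.mean(err ** 2)))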


Code

[GitHub]


Paper and Supplementary Material

M. Ramamonjisoa, V. Lepetit.
SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation.
In International Conference on Computer Vision (ICCV) Workshop on 3D Reconstruction in the Wild, 2019.
(hosted on arXiv)



Poster


Results Video

[Bibtex]


Acknowledgements

This webpage template was borrowed from some colorful folks.