Assembling convolution neural networks for automatic viewing transformation
Journal contribution, posted on 27.09.2019, 12:27 by Haibin Cai, Lei Jiang, Bangli Liu, Yiqi Deng, Qinggang Meng
Images taken under different camera poses appear rotated or distorted, which degrades the viewing experience. This paper proposes a new framework that automatically transforms images to a conformable view by assembling different convolutional neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image, and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
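To give a sense of the projection mapping step the abstract refers to, the sketch below applies a 3x3 projective homography to an image point. This is a generic illustration of planar projective mapping, not the paper's actual algorithm: in the proposed framework the mapping would be derived from the estimated 3D ground plane, whereas the matrix and coordinates here are hypothetical.

```python
# Minimal sketch of planar projective mapping (homography), the core
# operation behind viewing transformation. The matrix values below are
# hypothetical; the paper derives its mapping from a referential 3D
# ground plane estimated by the assembled CNNs.

def apply_homography(H, x, y):
    """Map pixel (x, y) through a row-major 3x3 homography H."""
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    # Homogeneous scale factor; dividing by it returns to image space.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w

# The identity homography leaves points unchanged.
I3 = [[1.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 1.0]]
print(apply_homography(I3, 10.0, 20.0))  # (10.0, 20.0)
```

Warping every pixel of an image through such a mapping (with interpolation) produces the rectified, "conformable" view described above.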
YOBAN project (Newton Fund/Innovate UK, 102871)
SukeIntel Co., Ltd
- Computer Science