
Assembling convolution neural networks for automatic viewing transformation

journal contribution
posted on 27.09.2019 by Haibin Cai, Lei Jiang, Bangli Liu, Yiqi Deng, Qinggang Meng
Images taken under different camera poses are rotated or distorted, which leads to a poor perception experience. This paper proposes a new framework that automatically transforms images to a conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image, and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
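The abstract's viewing-transformation step can be illustrated with a standard construction (not the paper's implementation): once a ground-plane normal has been estimated in camera coordinates, a rotation R that aligns that normal with a reference axis induces a rectifying homography H = K R K⁻¹, which maps image points to the corrected view. The intrinsics K, the normal, and the reference axis below are all illustrative assumptions.

```python
import numpy as np

def rotation_aligning(n, target):
    """Rotation matrix taking unit vector n onto unit vector target
    (Rodrigues formula). Antiparallel inputs are not handled in this sketch."""
    n = n / np.linalg.norm(n)
    t = target / np.linalg.norm(target)
    v = np.cross(n, t)                       # rotation axis (unnormalised)
    c = float(np.dot(n, t))                  # cosine of the rotation angle
    if np.isclose(c, 1.0):
        return np.eye(3)                     # already aligned
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])      # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def rectifying_homography(K, normal, up=np.array([0.0, 1.0, 0.0])):
    """Homography H = K R K^-1 that rotates the view so the estimated
    ground-plane normal coincides with the chosen 'up' axis."""
    R = rotation_aligning(normal, up)
    return K @ R @ np.linalg.inv(K)

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    q = (H @ ph.T).T
    return q[:, :2] / q[:, 2:3]                      # back to inhomogeneous

# Example with assumed intrinsics and a slightly tilted ground-plane normal.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
tilt = np.deg2rad(10.0)
normal = np.array([0.0, np.cos(tilt), np.sin(tilt)])
H = rectifying_homography(K, normal)
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
print(warp_points(H, corners))
```

In practice the warp would be applied to the whole image (e.g. with an inverse-mapping resampler); here only the corner trajectories are shown to keep the sketch dependency-free.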

Funding

YOBAN project (Newton Fund/Innovate UK, 102871)

EPSRC CDT-EI

SukeIntel Co., Ltd

History

School

  • Science

Department

  • Computer Science

Published in

IEEE Transactions on Industrial Informatics

Volume

16

Issue

1

Pages

587 - 594

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Version

AM (Accepted Manuscript)

Rights holder

© IEEE

Publisher statement

Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Acceptance date

22/08/2019

Publication date

2019-09-09

Copyright date

2019

ISSN

1551-3203

eISSN

1941-0050

Language

en

Depositor

Prof Qinggang Meng
