Assembling convolution neural networks for automatic viewing transformation
journal contribution
posted on 2019-09-27, 12:27, authored by Haibin Cai, Lei Jiang, Bangli Liu, Yiqi Deng, Qinggang Meng

Images taken under different camera poses are rotated or distorted, which leads to a poor viewing experience. This paper proposes a new framework that automatically transforms images to a conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image, and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
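The abstract does not specify the paper's network architecture or its projection mapping algorithm, so those are not reproduced here. As a rough illustration of the kind of warping step the pipeline ends with, the following minimal Python sketch builds a pure-rotation homography H = K R K⁻¹ that aligns an estimated ground-plane normal with the camera's optical axis and warps the image with OpenCV. The intrinsics K, the normal n, and the file names are all placeholder assumptions (in the paper, the normal would come from the assembled CNNs), not the authors' method.

```python
import numpy as np
import cv2

def canonical_view_homography(K, n):
    """Homography re-rendering the image as if the camera's optical axis
    were aligned with the estimated ground-plane normal.

    K : 3x3 camera intrinsics (assumed known here)
    n : unit ground-plane normal in camera coordinates (placeholder for
        a CNN prediction in this sketch)
    """
    target = np.array([0.0, 0.0, -1.0])
    # Rodrigues' formula for the rotation taking n onto target
    # (the antiparallel case n = -target is not handled in this sketch).
    v = np.cross(n, target)
    s, c = np.linalg.norm(v), float(np.dot(n, target))
    if s < 1e-8:
        R = np.eye(3)
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
    # Pure-rotation homography between the two views.
    return K @ R @ np.linalg.inv(K)

# Usage: warp an input image to the canonical (fronto-parallel) view.
img = cv2.imread("input.jpg")
h, w = img.shape[:2]
K = np.array([[800.0, 0.0, w / 2],      # assumed focal length and
              [0.0, 800.0, h / 2],      # principal point
              [0.0, 0.0, 1.0]])
n = np.array([0.0, -0.96, -0.28])       # placeholder "CNN output"
n /= np.linalg.norm(n)
H = canonical_view_homography(K, n)
warped = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite("canonical_view.jpg", warped)
```

A pure-rotation homography is used because, for a viewpoint change that only re-orients the camera, image points map exactly through K R K⁻¹ without needing scene depth; recovering the plane itself is the part the paper delegates to the assembled networks.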
Funding
YOBAN project (Newton Fund/Innovate UK, 102871)
EPSRC CDT-EI
SukeIntel Co., Ltd
History
School
- Science
Department
- Computer Science
Published in
IEEE Transactions on Industrial Informatics
Volume
16
Issue
1
Pages
587 - 594
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Version
- AM (Accepted Manuscript)
Rights holder
© IEEE
Publisher statement
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Acceptance date
2019-08-22
Publication date
2019-09-09
Copyright date
2019
ISSN
1551-3203
eISSN
1941-0050
Publisher version
Language
- en