Cross view capture for stereo image super-resolution
Stereo image super-resolution exploits additional features from cross view image pairs for high resolution (HR) image reconstruction. Recently, several methods have been proposed that investigate cross view features along epipolar lines to enhance the visual quality of recovered HR images. Despite the impressive performance of these methods, global contextual features from cross view images remain unexplored. In this paper, we propose a cross view capture network (CVCnet) for stereo image super-resolution that uses both global contextual and local features extracted from both views. Specifically, we design a cross view block to capture diverse feature embeddings from the two views in stereo vision. In addition, a cascaded spatial perception module is proposed to reweight each location in the feature maps according to its importance, making feature extraction more effective. Extensive experiments demonstrate that our proposed CVCnet outperforms state-of-the-art methods on stereo image super-resolution tasks. The source code is available at https://github.com/xyzhu1/CVCnet.
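The abstract describes a module that reweights each spatial location of a feature map by the importance it carries. The paper's actual cascaded spatial perception module is not specified here, so the following is only a minimal numpy sketch of the general idea (spatial attention): derive a per-location weight from channel statistics and rescale the feature map with it. The function name `spatial_reweight` and the sigmoid-of-channel-mean weighting are illustrative assumptions, not the authors' design.

```python
import numpy as np

def sigmoid(x):
    # Numerically standard logistic squashing to the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def spatial_reweight(feat):
    """Illustrative spatial attention: scale every (h, w) location of a
    (C, H, W) feature map by a weight derived from its channel statistics.

    NOTE: a hypothetical sketch of location-wise reweighting in general,
    not the paper's cascaded spatial perception module.
    """
    # Per-location descriptor: mean over the channel axis -> shape (H, W)
    desc = feat.mean(axis=0)
    # Squash to (0, 1); locations with stronger responses get larger weights
    weights = sigmoid(desc)
    # Broadcast multiply: every channel at a location is scaled by that
    # location's weight
    return feat * weights[None, :, :]

feat = np.random.randn(8, 4, 4)   # toy feature map: 8 channels, 4x4 spatial
out = spatial_reweight(feat)
print(out.shape)                  # (8, 4, 4): reweighting preserves shape
```

The weight map here plays the role the abstract assigns to the module: locations judged more informative contribute more to subsequent feature extraction, while weak locations are attenuated rather than discarded.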
Funding
National Natural Science Foundation of China under Grant 62076255
Hunan Provincial Science and Technology Plan Project 2020SK2059
Natural Science Foundation of Hunan Province, China, under Grant 2019JJ20025 and Grant 2019JJ40406
National Social Science Fund of China (No. 20&ZD120)
History
School
- Science
Department
- Computer Science
Published in
IEEE Transactions on Multimedia
Volume
24
Pages
3074 - 3086
Publisher
Institute of Electrical and Electronics Engineers
Version
- AM (Accepted Manuscript)
Rights holder
© IEEE
Publisher statement
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Acceptance date
2021-06-23
Publication date
2021-06-25
Copyright date
2021
ISSN
1520-9210
eISSN
1941-0077
Publisher version
Language
- en