A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain
Journal contribution posted on 2017-06-16, 09:55, authored by Jiachen Yang, Huanling Wang, Wen Lu, Baihua Li, Atta Badii, Qinggang Meng
Most existing 3D video quality assessment (3D-VQA/SVQA) methods consider only spatial information, by directly applying an image quality evaluation method, and only a few take the motion information of adjacent frames into consideration. In practice, a single data view is unlikely to be sufficient for effectively learning video quality; integrating multi-view information is therefore both valuable and necessary. In this paper, we propose an effective multi-view feature learning metric for blind stereoscopic video quality assessment (BSVQA) that jointly considers spatial information, temporal information and inter-frame spatio-temporal information. In our study, a set of local binary pattern (LBP) statistical features extracted from a computed frame curvelet representation serves as the spatial and spatio-temporal descriptors, and local flow statistical features based on optical flow estimation describe the temporal distortion. Subsequently, support vector regression (SVR) is used to map the feature vector of each single view to subjective quality scores. Finally, the scores of the multiple views are pooled into the final score according to their contribution rates. Experimental results demonstrate that the proposed metric significantly outperforms existing metrics and achieves higher consistency with subjective quality assessment.
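To make the feature-extraction stage of the pipeline concrete, the following is a minimal, illustrative sketch of computing an LBP statistical feature vector from a single frame. It is not the authors' implementation: the paper applies LBP to curvelet subbands of each frame and additionally pools optical-flow statistics and SVR scores, all of which are omitted here; the `lbp_histogram` function and the 8-neighbour, 256-bin setup are assumptions for illustration only.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour LBP histogram over a 2D array.

    Illustrative only: the paper computes LBP statistics on curvelet
    subbands of each frame, not on the raw frame used here.
    """
    c = img[1:-1, 1:-1]  # central pixels (border excluded)
    # offsets of the 8 neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image aligned with the central pixels
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        # set bit where the neighbour is >= the centre
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalised feature vector

# toy usage on a random "frame"
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
feat = lbp_histogram(frame)
print(feat.shape)  # (256,)
```

In the full method, one such feature vector per view would be fed to an SVR regressor (e.g. trained against subjective scores), and the per-view predictions pooled by contribution rate.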
This research is partially supported by the Natural Science Foundation of China (No. 61471260) and the Natural Science Foundation of Tianjin (No. 16JCYBJC16000).
- Computer Science
Published in: Information Sciences
Citation: YANG, J. ... et al, 2017. A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain. Information Sciences, 414, pp. 133–146.
- AM (Accepted Manuscript)
Publisher statement: This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/
Notes: This paper was accepted for publication in the journal Information Sciences and the definitive published version is available at http://dx.doi.org/10.1016/j.ins.2017.05.051