Posted on 2017-02-24, 15:31. Authored by Gokce Nur, Hemantha Kodikara Arachchi, Safak Dogan, Ahmet Kondoz
To enjoy 3D video to its full extent, it is imperative that its access and consumption are user centric, which in turn ensures improved 3D video perception. Several important factors, including video characteristics, users' preferences, and the contexts prevailing in various usage environments, influence 3D video perception. Thus, to assist the efficient provision of user-centric media, user perception of 3D video should be modeled with these influencing factors taken into account. Using the ambient illumination context to model 3D video perception is an interesting research topic that has not been specifically investigated in the literature. This paper takes that context into account while modeling the video quality and depth perception of 3D video. The video quality perception model uses the motion and structural feature characteristics of the color texture sequences, while the depth perception model uses the luminance contrast of the color texture and the depth intensity of the depth map sequences of the 3D video as the primary content-related factors. Results derived using the video quality and depth perception models demonstrate that these models can efficiently predict user perception of 3D video considering the ambient illumination context in user-centric media access and consumption environments.
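As a purely illustrative sketch of the kind of content-related factors the abstract names for the depth perception model (luminance contrast of the color texture and depth intensity of the depth map, combined with an ambient illumination term), the following Python snippet computes simple stand-in versions of these features. The feature definitions, the normalization, and the linear weights are assumptions made here for illustration only; they are not the model or the fitted parameters from the paper.

```python
# Hypothetical feature sketch; not the paper's actual perception model.
import numpy as np

def luminance(frame_rgb: np.ndarray) -> np.ndarray:
    """Rec. 601 luma from an RGB frame (H x W x 3, values in [0, 255])."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_contrast(frame_rgb: np.ndarray) -> float:
    """RMS contrast of the luma channel (one common contrast definition)."""
    y = luminance(frame_rgb.astype(np.float64))
    return float(y.std() / (y.mean() + 1e-9))

def depth_intensity(depth_map: np.ndarray) -> float:
    """Mean intensity of an 8-bit depth map, normalized to [0, 1]."""
    return float(depth_map.astype(np.float64).mean() / 255.0)

def predicted_depth_perception(frame_rgb, depth_map, ambient_lux,
                               w=(0.5, 0.4, 0.1)):
    """Toy linear predictor combining the two content features with an
    ambient-illumination term; the weights are placeholders, not fitted values."""
    illum = min(ambient_lux / 500.0, 1.0)  # crude normalization assumption
    return (w[0] * luminance_contrast(frame_rgb)
            + w[1] * depth_intensity(depth_map)
            + w[2] * illum)

# Example with synthetic data:
rgb = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
depth = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
print(predicted_depth_perception(rgb, depth, ambient_lux=300))
```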
History
School
Loughborough University London
Published in
Multimedia Tools and Applications
Volume
70
Issue
1
Pages
333 - 359
Citation
NUR, G. ... et al., 2014. Modeling user perception of 3D video based on ambient illumination context for enhanced user centric media access and consumption. Multimedia Tools and Applications, 70 (1), pp.333-359
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/
Acceptance date
2014-05-01
Publication date
2014
Notes
The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-011-0824-z