Content-aware delivery of visual attention-based scalable multi-view video over P2P
conference contribution
posted on 2016-10-11, 10:53 authored by Erhan Ekmekcioglu, Hemantha Kodikara Arachchi, Ahmet Kondoz, Cihat G. Gurler, Saadet S. Savas
3D media applications have become widespread thanks to intense research on 3D enabling technologies, the release of commercial products and the emergence of service providers. The IP network also has huge potential to become a major means of delivering 3D video services, being highly flexible and allowing customised, scalable applications to take off. Peer-to-Peer (P2P) video streaming, which offers high scalability in the presence of many media-consuming peers, is suitable for multi-view video applications with their significantly higher bandwidth requirements. To exploit flexible streaming and to serve a range of displays, P2P networking should be combined with scalable coded multi-view video that offers a useful bit-rate and maximum viewpoint adaptation range. However, since maintaining users' Quality of Experience (QoE) is a primary target, scalable coded video should ensure that perceptually significant visual data is delivered to users with high fidelity at all times, even under severe network conditions. Hence, content-based visual attention models provide a suitable means of ensuring that salient video data is always delivered to users. With an appropriate adaptation decision-making process in the user's equipment, the P2P protocol can adaptively stream the scalable 3D media. The work presented in this paper describes such a delivery framework over P2P networks.
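To make the adaptation step concrete, the following is a minimal, hypothetical sketch (in Python) of how a peer's adaptation decision might select scalable layers under a bandwidth budget, assuming per-layer saliency weights supplied by a visual attention model. This is an illustrative assumption, not the framework or algorithm described in the paper; all names and the greedy selection rule are invented for the example.

```python
# Hypothetical sketch, not the authors' algorithm: given a bandwidth
# estimate, pick which scalable layers of each view to request so that
# perceptually salient data survives even when the budget shrinks.

from dataclasses import dataclass

@dataclass
class Layer:
    view_id: int          # which camera view the layer belongs to
    layer_id: int         # 0 = base layer, higher = enhancement
    bitrate_kbps: float   # cost of streaming this layer
    saliency: float       # assumed visual-attention weight in [0, 1]

def select_layers(layers, budget_kbps):
    """Greedy selection: always keep base layers (needed for any
    decodable output), then add enhancement layers in order of
    saliency per bit until the budget is exhausted."""
    base = [l for l in layers if l.layer_id == 0]
    chosen = list(base)
    budget = budget_kbps - sum(l.bitrate_kbps for l in base)

    enhancements = sorted(
        (l for l in layers if l.layer_id > 0),
        key=lambda l: l.saliency / l.bitrate_kbps,
        reverse=True,
    )
    for l in enhancements:
        if l.bitrate_kbps <= budget:
            chosen.append(l)
            budget -= l.bitrate_kbps
    return chosen

# Example: two views, each with a base and one enhancement layer.
layers = [
    Layer(0, 0, 400, 0.0), Layer(0, 1, 600, 0.8),
    Layer(1, 0, 400, 0.0), Layer(1, 1, 600, 0.3),
]
print([(l.view_id, l.layer_id) for l in select_layers(layers, 1500)])
# -> [(0, 0), (1, 0), (0, 1)]: both base layers plus the enhancement
#    of the more salient view; the less salient enhancement is dropped.
```

In this toy setting, dropping the low-saliency enhancement first mirrors the paper's premise that perceptually salient data should be protected as available bandwidth degrades.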
History
School
Loughborough University London
Published in
2012 19th International Packet Video Workshop (PV 2012)
Pages
71 - 76
Citation
EKMEKCIOGLU, E. ... et al., 2012. Content-aware delivery of visual attention-based scalable multi-view video over P2P. IN: Proceedings of the 2012 19th IEEE International Packet Video Workshop (PV 2012), Munich, Germany, 10-11 May 2012, pp.71-76.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/