Posted on 2016-05-26, 15:01. Authored by Hyun Lim, Cagri Ozcinar, Andrew P. Hill, Ahmet Kondoz
This paper proposes an audiovisual 3D multimedia system that combines multi-view video with object-based spatial audio rendered by wave field synthesis. Wave field synthesis is particularly useful for applications in which multiple users experience truly immersive sound while remaining free to move without losing spatial sound effects. These features make the approach well suited to high-quality virtual experience applications, in particular for creating precise audio objects synchronised with a multi-view video stream and delivered to users over network platforms. The paper introduces a novel 3D multimedia rendering and streaming architecture over the Internet and reports experimental results. A demonstration of the system confirms that, across the covered viewing angle, the developed approach can create a variety of virtual audio objects at target positions with very high accuracy.
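The abstract does not give implementation details, but the core idea of wave field synthesis it describes — driving a loudspeaker array so that a virtual point source appears at a target position — can be sketched with a simplified delay-and-gain model. The function name, array geometry, and the 1/√r amplitude weighting below are illustrative assumptions for a basic 2.5D point-source case, not the authors' actual method:

```python
import math

C = 343.0  # speed of sound in air, m/s (at ~20 °C)

def wfs_delays_and_gains(source_xy, speaker_positions):
    """For each loudspeaker, compute the per-channel delay (seconds)
    and amplitude weight used to synthesise a virtual point source
    behind the array (simplified 2.5D delay-and-gain model)."""
    out = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay = r / C              # propagation delay from virtual source
        gain = 1.0 / math.sqrt(r)  # ~1/sqrt(r) amplitude decay (2.5D)
        out.append((delay, gain))
    return out

# Hypothetical 8-speaker linear array along the x-axis, 0.2 m spacing;
# virtual source placed 1 m behind the array, near its centre.
speakers = [(0.2 * i, 0.0) for i in range(8)]
params = wfs_delays_and_gains((0.7, -1.0), speakers)
```

Feeding each channel the source signal delayed and scaled by these values makes the wavefronts from all speakers sum into one curved wavefront that listeners localise to the virtual source position, independent of where they stand — the property the abstract highlights for freely moving users.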
Funding
EC FP7
History
School
Loughborough University London
Published in
12th Western Pacific Acoustics Conference (WESPAC) 2015
Issue
paper number: O11000175
Pages
467 - 472
Citation
LIM, H. ... et al., 2015. Sound localisation for 3D multimedia streaming. IN: Lim, K.M. (ed.) Proceedings of the 12th Western Pacific Acoustics Conference (WESPAC) 2015, Singapore, 6-9 December (paper number: O11000175), pp. 467 - 472.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/