This paper proposes an immersive audio rendering scheme
for networked 3D multimedia systems. The spatial audio
rendering method based on wave field synthesis is
particularly useful for applications where multiple listeners
experience a true spatial soundscape while being free to
move without losing spatial sound properties. The proposed
approach can be considered a general solution to the
static listening restriction imposed by conventional methods,
which rely on accurate sound reproduction only within a
sweet spot.
spot only. The paper reports on the results of numerical
analysis and experimental validation using various sound
sources. It is demonstrated that, while covering the
majority of the listening area, the developed approach can
create a variety of virtual audio objects at target positions
with very high accuracy. Subjective evaluation results show
that an accurate spatial impression can be achieved, and that
multiple simultaneous audible depth cues improve localization
accuracy over single-object rendering.
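The paper itself details the rendering method; as background, the core of wave field synthesis can be illustrated with a generic delay-and-weight scheme, in which each loudspeaker of an array plays a delayed, attenuated copy of the virtual source signal so that the emitted wavefronts superpose into the wavefront of a point source behind the array. The sketch below is a minimal illustration of that principle only, not the authors' implementation; the function name, the integer-sample delay approximation, and the simple 1/sqrt(r) amplitude weighting are assumptions for the example.

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s), standard approximation

def wfs_driving_signals(source_signal, fs, virtual_src, speakers):
    """Generic delay-and-weight WFS sketch (not the paper's method).

    Each loudspeaker receives a copy of the virtual source signal,
    delayed by its propagation time from the virtual source position
    and weighted by an approximate 1/sqrt(r) distance attenuation.

    source_signal : 1-D array of samples
    fs            : sampling rate in Hz
    virtual_src   : (x, y) position of the virtual point source (metres)
    speakers      : (N, 2) array of loudspeaker positions (metres)
    Returns an (N, L) array of per-loudspeaker driving signals.
    """
    speakers = np.asarray(speakers, dtype=float)
    dists = np.linalg.norm(speakers - np.asarray(virtual_src), axis=1)
    delays = dists / C                                   # propagation delay per speaker
    gains = 1.0 / np.sqrt(np.maximum(dists, 1e-3))       # crude 2.5D-style amplitude decay

    max_shift = int(np.ceil(delays.max() * fs))
    out = np.zeros((len(speakers), len(source_signal) + max_shift))
    for i, (d, g) in enumerate(zip(delays, gains)):
        n = int(round(d * fs))                           # integer-sample delay approximation
        out[i, n:n + len(source_signal)] = g * source_signal
    return out
```

For example, driving a five-element linear array from a virtual source placed behind it yields the earliest, strongest signal at the nearest loudspeaker and progressively later, weaker signals outward, which is the superposition pattern WFS relies on. A practical system would use fractional-delay filtering and proper 2.5D driving functions rather than these simplifications.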
Funding
This work was supported by the ROMEO project (grant
number: 287896), which was funded by the EC FP7 ICT
collaborative research programme.
School
Loughborough University London
Published in
IEEE International Conference on Image Processing (ICIP)
Pages
76-80
Citation
LIM, H. ... et al., 2014. An approach to immersive audio rendering with wave field synthesis for 3D multimedia content. IN: Proceedings of 2014 IEEE International Conference on Image Processing (ICIP 2014), Paris, France, 27-30 October 2014, pp.76-80.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/