Spatial synchronization of audiovisual objects by 3D audio object coding
Conference contribution posted on 11.10.2016 by Banu Gunel, Erhan Ekmekcioglu, Ahmet Kondoz
Free viewpoint video enables the visualisation of a scene from arbitrary viewpoints and directions. However, this flexibility in video rendering poses a challenge for 3D media: achieving spatial synchronicity between the audio and video objects. When the viewpoint changes, its effect on the perceived audio scene must be considered to avoid mismatches in the perceived positions of audiovisual objects. Spatial audio coding with such flexibility requires first decomposing the sound scene into audio objects, and then synthesizing the new scene according to the geometric relations between the A/V capturing setup, the selected viewpoint, and the rendering system. This paper proposes a free viewpoint audio coding framework for 3D media systems utilising multiview cameras and a microphone array. A real-time source separation technique is used for object decomposition, followed by spatial audio coding. Binaural, multichannel, and wave field synthesis rendering systems are addressed. Subjective test results show that the method consistently achieves spatial synchronicity across viewpoints, which is not possible with conventional recording techniques.
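As an illustrative sketch only (not the paper's implementation), the core geometric step in re-synthesizing the scene can be reduced to recomputing each separated audio object's azimuth relative to the selected viewpoint; all names and conventions below are assumptions for this sketch:

```python
import math

def object_azimuth(obj_xy, view_xy, view_yaw_deg):
    """Azimuth (degrees) of an audio object relative to a virtual viewpoint.

    Hypothetical convention: 2D world coordinates, yaw of 0 degrees faces
    the +x axis, positive angles are counterclockwise (to the listener's left).

    obj_xy       -- (x, y) world position of the separated audio object
    view_xy      -- (x, y) world position of the selected viewpoint
    view_yaw_deg -- viewing direction of the viewpoint, in degrees
    """
    dx = obj_xy[0] - view_xy[0]
    dy = obj_xy[1] - view_xy[1]
    world_deg = math.degrees(math.atan2(dy, dx))
    # Wrap the relative angle into (-180, 180] so left/right is unambiguous
    # for the downstream binaural or multichannel renderer.
    return (world_deg - view_yaw_deg + 180.0) % 360.0 - 180.0

# An object straight ahead of the capture point (along +x) appears at
# -90 degrees (to the listener's right) after the viewpoint rotates 90
# degrees to the left.
print(object_azimuth((1.0, 0.0), (0.0, 0.0), 90.0))
```

In a full pipeline, this per-object azimuth (plus distance for level and delay cues) would feed the chosen rendering backend, whether binaural filtering, multichannel panning, or wave field synthesis.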
This work has been supported by the MUSCADE Integrating Project (www.muscade.eu), funded under the European Commission ICT 7th Framework Programme.
- Loughborough University London