We present a novel multi-view depth map enhancement method deployed
as a post-processing step for initially estimated depth maps, which
are incoherent in the temporal and inter-view dimensions. The proposed
method is based on edge and motion-adaptive median filtering
and improves the quality of virtual view synthesis. To
enforce spatial, temporal and inter-view coherence in the multi-view
depth maps, the median filtering is applied to 4-dimensional
windows that consist of spatially neighboring depth map values
taken at different viewpoints and time instants. These windows take
locally adaptive shapes in the presence of edges or motion to preserve
sharpness and ensure realistic rendering. We show that our enhancement
method leads both to a reduction in the coding bit-rate required to represent
the depth maps and to a gain in the quality of views synthesized
at arbitrary virtual viewpoints. At the same time,
the method incurs only a low additional computational complexity.
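The paper's exact window-adaptation rules are given in the full text; as a rough, non-authoritative illustration of the idea, the Python sketch below applies a median filter over a 4-dimensional window (space, time and view) of a multi-view depth sequence, shrinking the window to same-side-of-edge samples near depth edges and dropping temporal neighbors where motion is detected. The array layout, the thresholds edge_thresh and motion_thresh, and the adaptation logic are all assumptions made for this sketch, not the authors' published algorithm.

```python
import numpy as np


def adaptive_median_filter(depth, edge_thresh=8.0, motion_thresh=4.0, radius=1):
    """Illustrative 4-D edge/motion-adaptive median filter (not the paper's exact method).

    depth: array of shape (V, T, H, W) -- views x time x height x width.
    For each interior pixel, a window of neighboring depth samples is
    gathered across space, view and time; its shape adapts locally when a
    depth edge or motion is detected, and the pixel is replaced by the
    median of the surviving samples.
    """
    depth = np.asarray(depth, dtype=np.float64)
    V, T, H, W = depth.shape
    out = depth.copy()
    for v in range(V):
        for t in range(T):
            for y in range(radius, H - radius):
                for x in range(radius, W - radius):
                    centre = depth[v, t, y, x]
                    # Spatial samples from the current view and frame.
                    window = depth[v, t,
                                   y - radius:y + radius + 1,
                                   x - radius:x + radius + 1].ravel().tolist()
                    if np.ptp(window) > edge_thresh:
                        # Depth edge: keep only samples close to the centre
                        # value so the filter does not blur across the edge.
                        window = [d for d in window
                                  if abs(d - centre) <= edge_thresh]
                    else:
                        # Smooth region: extend the window along the view
                        # axis to enforce inter-view coherence ...
                        for dv in (v - 1, v + 1):
                            if 0 <= dv < V:
                                window.append(depth[dv, t, y, x])
                        # ... and along time, unless motion is detected
                        # (large temporal depth change at this pixel).
                        for dt in (t - 1, t + 1):
                            if (0 <= dt < T
                                    and abs(depth[v, dt, y, x] - centre)
                                    <= motion_thresh):
                                window.append(depth[v, dt, y, x])
                    out[v, t, y, x] = np.median(window)
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical noisy 3-view, 5-frame, 32x32 depth sequence.
    depth = rng.normal(100.0, 2.0, size=(3, 5, 32, 32))
    enhanced = adaptive_median_filter(depth)
    print(enhanced.shape)  # (3, 5, 32, 32)
```

The nested-loop form is chosen for readability only; a practical implementation would vectorize the window gathering, but the adaptive branching per pixel is what conveys how the window shape responds to edges and motion.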
Funding
This work was developed in part within VISNET II, a European Network
of Excellence (http://www.visnetnoe.org), funded under the European
Commission IST FP6 programme.
School
Loughborough University London
Published in
Picture Coding Symposium, 2009. PCS 2009
Pages
1-4
Citation
EKMEKCIOGLU, E., VELISAVLJEVIC, V. and WORRALL, S., 2009. Edge and motion-adaptive median filtering for multi-view depth map enhancement. IN: Proceedings of 2009 IEEE Picture Coding Symposium (PCS 2009), Chicago, United States, 6-8 May 2009, doi: 10.1109/PCS.2009.5167415.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/