Posted on 2017-08-15, 13:58. Authored by Harshana Dantanarayana, Jonathan Huntley
We present an algorithm based on maximum likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike ‘interest point’ based algorithms, which normally discard such data. Compared with the 6D Hough transform, the method has negligible memory requirements, and it is computationally efficient compared with iterative closest point (ICP) algorithms. The same method is applicable to both the initial recognition/pose estimation problem and subsequent pose refinement, through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degree of freedom (DOF) example is given, followed by a full 6 DOF analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an rms alignment error as low as 0.3 mm.
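To make the coarse-to-fine idea concrete, the sketch below is a minimal 2 DOF (x, y translation) illustration of maximum likelihood pose estimation, not the authors' implementation: it assumes a Gaussian likelihood on the distance from each scene point to its nearest model point (a stand-in for the paper's segmented-surface features), and a hypothetical schedule of decreasing dispersion sigma so that the same estimator performs both initial recognition and subsequent refinement. All function names and parameter values are illustrative assumptions.

```python
# Hedged sketch: 2-DOF maximum-likelihood pose estimation on synthetic data.
# The Gaussian nearest-point likelihood and the sigma schedule are assumptions
# for illustration, not the method as published.
import numpy as np

def log_likelihood(offset, scene_pts, model_pts, sigma):
    """Sum of log Gaussian densities of each scene point about its
    nearest model point, after undoing the candidate translation."""
    shifted = scene_pts - offset
    # squared distances from every shifted scene point to every model point
    d2 = ((shifted[:, None, :] - model_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)  # nearest-model-point proxy for point-to-surface
    return -0.5 * nearest.sum() / sigma**2 - len(scene_pts) * np.log(sigma)

rng = np.random.default_rng(0)
model = rng.uniform(0, 10, size=(200, 2))          # synthetic 2D model points
true_offset = np.array([1.5, -0.7])
scene = model + true_offset + rng.normal(0, 0.05, size=model.shape)

# Broad sigma for initial recognition, narrow sigma for pose refinement:
# the same likelihood serves both stages, only the dispersion changes.
estimate = np.zeros(2)
for sigma in (2.0, 0.5, 0.05):
    grid = np.linspace(-2, 2, 41) * sigma          # brute-force local search
    candidates = [estimate + np.array([dx, dy]) for dx in grid for dy in grid]
    estimate = max(candidates,
                   key=lambda t: log_likelihood(t, scene, model, sigma))

print("estimated offset:", estimate, "true offset:", true_offset)
```

Run as-is, the grid search converges on the true translation to within the final grid spacing; in the paper's full 6 DOF setting, the same decreasing-dispersion principle applies, with surfaces rather than raw points as features.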
Funding
The research was funded by the Engineering and Physical Sciences Research Council under the
Light Controlled Factory project EP/K018124/1.
History
School
Mechanical, Electrical and Manufacturing Engineering
Published in
Royal Society Open Science
Volume
4
Issue
8
Citation
DANTANARAYANA, H.G. and HUNTLEY, J.M., 2017. Object recognition and localisation from 3D point clouds by maximum likelihood estimation. Royal Society Open Science, 4: 160693.
Publisher
The Royal Society
Version
NA (Not Applicable or Unknown)
Publisher statement
This work is made available according to the conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) licence. Full details of this licence are available at: http://creativecommons.org/licenses/by/4.0/
Acceptance date
2017-07-17
Publication date
2017-08-16
Notes
This is an Open Access Article. It is published by the Royal Society under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0). Full details of this licence are available at: http://creativecommons.org/licenses/by/4.0/