Tracking and identifying articulated objects has received growing attention in computer vision over the past decade. In marker-based optical motion capture (MoCap) systems, the articulated movement of near-rigid segments is represented as a sequence of moving dots with known 3D coordinates, corresponding to the captured marker positions. We propose a segment-based articulated model-fitting algorithm for self-initializing identification and pose estimation from a single frame of data in such point-feature tracking systems; this capability is crucial for recovering the complete motion sequence. Experimental results, based on synthetic poses and real-world human motion capture data, demonstrate the performance of the algorithm.
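The paper's own fitting procedure is not reproduced on this page. As an illustration of the kind of per-segment computation such systems rely on, the sketch below recovers the rigid pose (rotation and translation) of one near-rigid segment from a set of labelled 3D marker correspondences using the standard Kabsch/Procrustes method; the function name and array layout are assumptions for this example, not the authors' implementation.

```python
import numpy as np

def fit_segment_pose(P, Q):
    """Least-squares rigid pose of a segment from marker correspondences.

    P: (3, N) model-frame marker coordinates of one near-rigid segment.
    Q: (3, N) observed marker coordinates in the same column order.
    Returns (R, t) minimizing ||R @ P + t - Q|| via the Kabsch method.
    """
    # Centre both point sets on their centroids.
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of the centred sets.
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

With four or more non-coplanar markers per segment, one frame suffices to fix that segment's pose; identification then amounts to choosing the labelling of observed dots that makes such fits consistent across the articulated model.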
School
Science
Department
Computer Science
Published in
Proceedings of the 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002
Pages
298-303
Citation
LI, B., MENG, Q. and HOLSTEIN, H., 2002. Articulated point pattern matching in optical motion capture systems. Proceedings of the 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002, 2nd-5th December, vol. 1, pp. 298-303
Publisher
IEEE
Version
NA (Not Applicable or Unknown)
Publisher statement
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/