Articulated point pattern matching in optical motion capture systems

2016-02-09T15:50:43Z (GMT) by Baihua Li, Horst Holstein, Qinggang Meng
Tracking and identifying articulated objects has received growing attention in computer vision over the past decade. In marker-based optical motion capture (MoCap) systems, the articulated movement of near-rigid segments is represented by a sequence of moving points with known 3D coordinates, corresponding to the captured marker positions. We propose a segment-based articulated model-fitting algorithm to address the problem of self-initializing identification and pose estimation from a single frame of data in such point-feature tracking systems. This self-initialization capability is crucial for recovering the complete motion sequence. Experimental results, based on synthetic poses and real-world human motion capture data, demonstrate the performance of the algorithm.
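The abstract does not detail the fitting procedure itself. As a minimal sketch of the kind of per-segment rigid fit such a marker-based system might rely on (not the authors' published method), the Python function below aligns a segment's known marker template to a candidate set of observed 3D marker positions using a least-squares rigid fit (Kabsch algorithm); the resulting residual could then be used to score a candidate marker-to-segment assignment. The function name, array shapes, and scoring idea are illustrative assumptions.

```python
import numpy as np

def fit_rigid_segment(model_pts, observed_pts):
    """Hypothetical helper: least-squares rigid alignment (Kabsch) of a
    segment's marker template (N x 3) to observed marker positions (N x 3).
    Returns rotation R, translation t, and the RMS fitting residual."""
    # Centroids of the two point sets
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    # Centred coordinates and cross-covariance matrix
    A = model_pts - cm
    B = observed_pts - co
    H = A.T @ B
    # SVD-based optimal rotation, with reflection correction
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    # RMS residual of the fit, usable as an assignment score
    aligned = model_pts @ R.T + t
    residual = np.sqrt(np.mean(np.sum((aligned - observed_pts) ** 2, axis=1)))
    return R, t, residual
```

In a segment-based identification scheme of this general kind, each candidate assignment of observed markers to a body segment would be fitted in this way, and the assignment with the smallest residual (subject to articulation constraints between adjacent segments) would determine both marker identities and segment pose from one frame.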