Loughborough University


Articulated pose identification with sparse point features

journal contribution
posted on 2016-02-09, 12:42 authored by Baihua Li, Qinggang Meng, Horst Holstein
We propose a general algorithm for identifying an arbitrary pose of an articulated subject with sparse point features. The algorithm aims to identify a one-to-one correspondence between a model point-set and an observed point-set taken from freeform motion of the articulated subject. We avoid common assumptions such as pose similarity or small motions with respect to the model, and assume no prior knowledge from which to infer an initial or partial correspondence between the two point-sets. The algorithm integrates local segment-based correspondences under a set of affine transformations with a global hierarchical search strategy. Experimental results, based on synthetic pose and real-world human motion data, demonstrate the ability of the algorithm to perform the identification task. Reliability is increasingly compromised with increasing data noise and segmental distortion, but the algorithm can tolerate moderate levels. This work contributes to establishing a crucial self-initializing identification step in model-based point-feature tracking for articulated motion.
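
As an illustrative sketch only, and not the authors' published algorithm, the snippet below shows the general idea behind segment-level correspondence under an affine transformation: a candidate assignment of observed points to a model segment's points is scored by fitting a least-squares affine transform and measuring the residual. All function names, the brute-force permutation search, and the residual threshold implied here are hypothetical illustrations.

```python
import numpy as np
from itertools import permutations

def fit_affine(model_pts, obs_pts):
    """Least-squares affine transform (A, t) mapping model_pts onto obs_pts.

    model_pts, obs_pts: (n, d) arrays of corresponding points, n >= d + 1.
    Returns (A, t, rms_residual).
    """
    n, d = model_pts.shape
    # Homogeneous design matrix [x | 1], so obs ≈ X @ [A^T; t^T].
    X = np.hstack([model_pts, np.ones((n, 1))])
    sol, *_ = np.linalg.lstsq(X, obs_pts, rcond=None)
    A, t = sol[:d].T, sol[d]
    residuals = obs_pts - (model_pts @ A.T + t)
    return A, t, float(np.sqrt(np.mean(residuals ** 2)))

def best_segment_correspondence(model_seg, obs_candidates):
    """Find the ordering of observed markers best explained by one affine
    transform of the model segment (brute force over point permutations).

    model_seg: (n, d) model marker positions for one articulated segment.
    obs_candidates: (n, d) observed markers assumed to belong to that segment.
    Returns (best_permutation, rms_residual).
    """
    best_perm, best_err = None, np.inf
    for perm in permutations(range(len(obs_candidates))):
        _, _, err = fit_affine(model_seg, obs_candidates[list(perm)])
        if err < best_err:
            best_perm, best_err = perm, err
    return best_perm, best_err
```

In a hierarchical scheme of the kind the abstract describes, such per-segment hypotheses would be combined and pruned globally; the exhaustive permutation search shown here is only practical for the handful of markers on a single segment.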

History

School

  • Science

Department

  • Computer Science

Published in

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Volume

34

Issue

3

Pages

1412 - 1422

Citation

LI, B., MENG, Q. and HOLSTEIN, H., 2004. Articulated pose identification with sparse point features. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34 (3), pp. 1412-1422.

Publisher

© IEEE

Version

  • NA (Not Applicable or Unknown)

Publisher statement

This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/

Publication date

2004

Notes

This paper is closed access.

ISSN

1083-4419

Language

  • en