posted on 2016-02-10, 11:35, authored by John Darby, Baihua Li, Ryan Cunningham, Nicholas Costen
The aim of this paper is to track objects during their use by humans. The task is difficult because these objects are small, fast-moving and often occluded by the user. We present a novel solution based on cascade action recognition, a learned mapping between body and object poses, and a hierarchical extension of importance sampling. During tracking, body pose estimates from a Kinect sensor are classified into action classes by a Support Vector Machine and converted to discriminative object pose hypotheses using a {body, object} pose mapping. These hypotheses are then mixed with generative hypotheses by the importance sampler and evaluated against the image. The approach outperforms a state-of-the-art adaptive tracker in localisation of 14/15 test implements and additionally provides object classifications and 3D object pose estimates.
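The abstract's central idea is a hybrid sampler: object pose hypotheses predicted discriminatively from the classified body pose are mixed with generative hypotheses propagated from the previous particle set, and the combined set is weighted against the image. The following Python sketch illustrates only that mixing step under stated assumptions; it is not the authors' code, and the Gaussian pose mapping, the 50/50 mixing ratio, and all function names are illustrative placeholders.

# Minimal sketch (not the authors' implementation) of mixing discriminative and
# generative object-pose hypotheses in an importance sampler. The learned
# {body, object} pose mapping is assumed to return a Gaussian; the image
# likelihood is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def discriminative_hypotheses(body_pose, pose_mapping, n):
    """Sample object poses from a learned body->object mapping (assumed Gaussian)."""
    mean, cov = pose_mapping(body_pose)
    return rng.multivariate_normal(mean, cov, size=n)

def generative_hypotheses(prev_particles, prev_weights, n, noise=0.05):
    """Resample the previous particle set and diffuse it (standard particle-filter step)."""
    idx = rng.choice(len(prev_particles), size=n, p=prev_weights)
    return prev_particles[idx] + rng.normal(scale=noise, size=prev_particles[idx].shape)

def importance_sample(prev_particles, prev_weights, body_pose, pose_mapping,
                      likelihood, n_particles=100, mix=0.5):
    """Mix both hypothesis sources, then weight every hypothesis against the image."""
    n_disc = int(mix * n_particles)
    hyps = np.vstack([
        discriminative_hypotheses(body_pose, pose_mapping, n_disc),
        generative_hypotheses(prev_particles, prev_weights, n_particles - n_disc),
    ])
    w = np.array([likelihood(h) for h in hyps])
    w /= w.sum()
    return hyps, w

# Toy usage with a 3-DoF object pose, dummy mapping and dummy image likelihood.
if __name__ == "__main__":
    pose_mapping = lambda body: (np.zeros(3), 0.1 * np.eye(3))    # placeholder regressor
    likelihood = lambda obj_pose: np.exp(-np.sum(obj_pose ** 2))  # placeholder image score
    particles = rng.normal(size=(100, 3))
    weights = np.full(100, 1.0 / 100)
    particles, weights = importance_sample(particles, weights, body_pose=None,
                                           pose_mapping=pose_mapping, likelihood=likelihood)
    print("posterior mean object pose:", particles.T @ weights)

In the paper's pipeline the placeholder regressor would be replaced by the action-conditioned {body, object} pose mapping and the placeholder score by the image-based evaluation of each object pose hypothesis.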
History
School
Science
Department
Computer Science
Published in
Proceedings - International Conference on Pattern Recognition
Pages
817-820
Citation
DARBY, J. ... et al., 2012. Object localisation via action recognition. Proceedings - 21st International Conference on Pattern Recognition, 11th-15th November 2012, Tsukuba, pp. 817-820.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/