Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

Dimitrios Tzionas, Abhilash Srikantha, Pablo Aponte and Juergen Gall

----> Extended IJCV version of the project here <---- (accepted on 10.02.2016)

Abstract

Hand
motion capture has been an active research topic in recent years,
following the success of full-body pose tracking. Despite similarities,
hand tracking proves to be more challenging, characterized by a higher
dimensionality, severe occlusions and self-similarity between fingers.
For this reason, most approaches rely on strong assumptions, such as hands
in isolation or expensive multi-camera systems, that limit their
practical use. In this work, we propose a framework for hand tracking
that can capture the motion of two interacting hands using only a
single, inexpensive RGB-D camera. Our approach combines a generative
model with collision detection and discriminatively learned salient
points. We quantitatively evaluate our approach on 14 new sequences
with challenging interactions.
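The combination of terms described in the abstract can be sketched as a composite pose-estimation objective. The formulation below is only an illustrative sketch of how such energy-based trackers are typically posed; the specific terms, symbols, and weights (λ_c, λ_s) are assumptions for exposition, not the paper's exact formulation:

```latex
% Illustrative sketch only: the hand pose \theta is found by minimizing
% a sum of a generative data term, a collision penalty, and a term from
% discriminatively detected salient points. Weights \lambda_c, \lambda_s
% are hypothetical placeholders.
\hat{\theta} = \arg\min_{\theta}\;
      E_{\mathrm{data}}(\theta)                   % generative model-to-depth alignment
    + \lambda_{c}\, E_{\mathrm{coll}}(\theta)     % penalizes inter-penetrating fingers
    + \lambda_{s}\, E_{\mathrm{sal}}(\theta)      % pulls the model toward detected salient points
```

The collision term addresses the severe occlusions and finger self-similarity mentioned above by discouraging physically implausible, inter-penetrating poses during interaction.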
Publications

Tzionas, D., Srikantha, A., Aponte, P. and Gall, J.
Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points (PDF, BibTex)
German Conference on Pattern Recognition (GCPR'14)
Supplementary Material: Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points (PDF, Files)
Data

Sequences marked with (*) are used only for comparison with the FORTH tracker.
Model files marked with (**) do not contain sequence-specific files (.SKEL and .MOTION).
Presentation
Related projects