Video-based motion capture

Extracting kinematics and kinetics from video data

Accurate 3D kinematics estimation of the human body is crucial for applications in human health and mobility, such as rehabilitation, injury prevention, and diagnosis, as it helps to quantify the biomechanical loading experienced during movement. Conventional marker-based motion capture is expensive in terms of financial investment, time, and required expertise. Moreover, due to the scarcity of datasets with accurate annotations, existing markerless motion capture methods suffer from unreliable 2D keypoint detection, limited anatomical accuracy, and poor generalization. In this work, we propose a novel biomechanics-aware approach using virtual and synthetic videos. The proposed approach, trained on artificial data, outperforms previous state-of-the-art methods when evaluated across multiple datasets, revealing a promising direction for enhancing video-based human motion capture.

3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data

Zhi-Yi Lin, Bofan Lyu, Judith Cueto Fernandez, Eline van der Kruk, Ajay Seth, Xucong Zhang

arXiv preprint

Finished graduation projects

Video-based markerless motion capture is an exciting emerging technology, though its biomechanical reliability is still a work in progress. In a recent publication, we demonstrated that algorithms can be trained on synthetic videos generated from accurate motion capture data, which improves biomechanical consistency. Punitha Devaraja applied this approach by creating virtual video data of skaters from a marker-based motion capture dataset to train and test an algorithm specifically for speed skating.

MSc students working on this project