Recurrent Neural Networks for Modeling Motion Capture Data
Mir Abdul Rasool Khan, Mir A. (2017)
Information Technology
Tieto- ja sähkötekniikan tiedekunta - Faculty of Computing and Electrical Engineering
Acceptance date
2017-06-07
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tty-201705261557
Abstract
This thesis introduces a Recurrent Neural Network (RNN) framework as a generative model for synthesizing human motion capture data. The data is represented with a complex human skeleton model of 64 joints, giving a total of 192 degrees of freedom, which makes the data nearly three times as high-dimensional as in previous applications of neural networks to motion capture data. The RNN model generates good-quality, novel human motion sequences that can, at times, be difficult to distinguish visually from real motion capture data, demonstrating the ability of RNNs to analyze long and very high-dimensional sequences. The synthesized motion sequences also show strong inter-joint correspondence and extend up to 250 frames. The quality and accuracy of the motion are analyzed quantitatively through various metrics that evaluate inter-joint relationships and temporal joint correspondence.
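The abstract does not specify the network architecture or framework; as a rough illustration of the setup it describes, the sketch below outlines an autoregressive RNN over 192-dimensional pose frames (64 joints x 3 degrees of freedom) that can roll out sequences of, for example, 250 frames. The use of PyTorch, LSTM cells, and the layer sizes are illustrative assumptions, not the thesis's actual implementation.

# Minimal sketch, assuming PyTorch and LSTM cells (the thesis does not state either).
import torch
import torch.nn as nn

N_DOF = 192  # 64 joints x 3 degrees of freedom, as stated in the abstract

class MotionRNN(nn.Module):
    """Autoregressive RNN: predicts the next 192-D pose frame from the current one."""
    def __init__(self, hidden_size=512, num_layers=2):
        super().__init__()
        self.rnn = nn.LSTM(N_DOF, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, N_DOF)

    def forward(self, frames, state=None):
        # frames: (batch, time, 192) -> next-frame predictions of the same shape
        h, state = self.rnn(frames, state)
        return self.out(h), state

    @torch.no_grad()
    def generate(self, seed_frame, length=250):
        # Roll the model forward, feeding each predicted frame back in as input.
        frames, state = [], None
        x = seed_frame.unsqueeze(1)  # (batch, 192) -> (batch, 1, 192)
        for _ in range(length):
            x, state = self.forward(x, state)
            frames.append(x)
        return torch.cat(frames, dim=1)  # (batch, length, 192)

Usage would follow the generative setting described in the abstract: train the model to predict frame t+1 from frames up to t, then seed it with a single pose and call generate() to synthesize a novel sequence.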