Exploring Text-to-Motion Generation with Human Preference

Jenny Sheng, Matthieu Lin, Andrew Zhao, Kevin Pruvost, Yu-Hui Wen, Yangguang Li, Gao Huang, Yong-jin Liu

16 Mar 2024 (modified: 07 Jun 2024) · CVPR 2024 Workshop HuMoGen Submission · CC BY 4.0
Keywords: human motion generation, preference learning
TL;DR: finetune MotionGPT with preference pairs
Abstract: This paper presents an exploration of preference learning in text-to-motion generation. We find that current improvements in text-to-motion generation still rely on datasets that require expert labelers equipped with motion capture systems. In contrast, learning from human preference data requires no motion capture system: a labeler with no special expertise simply compares two generated motions. This is particularly efficient because evaluating a model's output is easier than capturing a motion that performs a desired task (e.g., a backflip). To pioneer the exploration of this paradigm, we annotate 3,528 preference pairs generated by MotionGPT, marking the first effort to investigate various algorithms for learning from preference data in this setting. In particular, our exploration highlights important design choices when using preference data. Additionally, our experimental results show that preference learning has the potential to greatly improve current text-to-motion generative models. Our code and dataset will be publicly available to further facilitate research in this area.
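The abstract does not name a specific preference-learning algorithm, so as a rough illustration of how pairs of (preferred, dispreferred) generations can finetune a model like MotionGPT, here is a minimal sketch of a Direct Preference Optimization (DPO)-style loss in PyTorch. This is an assumption for illustration only, not the paper's method; all names (dpo_loss, the *_logps tensors, beta) are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO-style loss over a batch of preference pairs (illustrative sketch).

    Each *_logps tensor holds the summed log-probability that the trainable
    policy (or the frozen reference model) assigns to the preferred
    ("chosen") or dispreferred ("rejected") motion token sequence.
    """
    # Implicit reward: log-ratio of policy to reference, scaled by beta
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that widens the margin between chosen and rejected
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In a sketch like this, the log-probabilities would come from scoring each annotated preference pair under both the finetuned policy and a frozen copy of the initial model; the frozen reference keeps the policy from drifting far from its pretrained behavior.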
Supplementary Material: zip
Submission Number: 7