While exploring their surroundings, communicating with others and expressing themselves, humans perform a wide range of body motions. The ability to realistically replicate these motions on human and humanoid characters could be highly valuable for developing video games, creating animations, producing content for virtual reality (VR) headsets, and making training videos for professionals.
Researchers at Peking University’s Institute for Artificial Intelligence (AI) and the State Key Laboratory of General AI recently introduced new models that could simplify the generation of realistic motions for human characters or avatars. The work is published on the arXiv preprint server.
Their proposed approach for the generation of human motions, outlined in a paper presented at CVPR 2025, relies on a data augmentation technique called MotionCutMix and a diffusion model called MotionReFit.
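The paper does not spell out the mechanics of MotionCutMix in this summary, but the name suggests an analogy to CutMix-style augmentation in computer vision, where parts of one training sample are spliced into another. Below is a minimal, hypothetical sketch of that idea applied to motion data; the array layout, body-part groupings, joint indices, and the `motion_cutmix` function are all illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a CutMix-style augmentation for motion data.
# Assumes motions are arrays of shape (frames, joints, channels); the
# body-part grouping and splicing rule are illustrative, not the paper's.
import numpy as np

# Illustrative joint groups for a simple skeleton (indices are made up).
BODY_PARTS = {
    "left_arm": [13, 14, 15],
    "right_arm": [16, 17, 18],
    "legs": [1, 2, 3, 4, 5, 6],
    "torso": [0, 7, 8, 9],
}

def motion_cutmix(motion_a: np.ndarray, motion_b: np.ndarray,
                  rng: np.random.Generator) -> np.ndarray:
    """Splice the joints of a randomly chosen body part from motion_b
    into motion_a, producing a new composite training sample."""
    assert motion_a.shape == motion_b.shape
    mixed = motion_a.copy()
    part = rng.choice(list(BODY_PARTS))
    joints = BODY_PARTS[part]
    mixed[:, joints, :] = motion_b[:, joints, :]  # swap that part's trajectory
    return mixed

# Usage: mix two 60-frame, 19-joint, 3-channel motion clips.
rng = np.random.default_rng(0)
a = rng.standard_normal((60, 19, 3))
b = rng.standard_normal((60, 19, 3))
augmented = motion_cutmix(a, b, rng)
```

Augmentations of this kind can multiply the effective size of a motion dataset by recombining existing clips, which is one plausible reason a technique like this would pair well with training a diffusion model such as MotionReFit.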