
How can virtual agents plan and perform actions that are purposeful, adaptive, and realistic? AI-Agent Motion research aims to build systems that decide what to do next and how to move to achieve their goals, rather than simply replaying pre-recorded behaviors. As applications in the metaverse, immersive simulation, and interactive virtual worlds continue to grow, generating believable, context-aware motion remains a central challenge: agents must respond to dynamic scenes, diverse user interactions, and evolving tasks, so their motion has to adapt fluidly to context rather than follow fixed patterns.
Our research explores methods that help agents break down tasks into sequences of actions and plan how to execute them in changing environments. We focus on combining knowledge about objects, spatial relationships, and task objectives to produce motion that is both functional and natural. This includes studying how agents can plan coordinated bimanual actions, deciding when and how to use both hands to manipulate objects effectively. The aim is to support more interactive and versatile virtual experiences without relying on extensive manual scripting.
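The task-decomposition idea above can be illustrated with a minimal sketch: a planner maps a high-level task to an ordered sequence of primitive actions, each annotated with the hand or hands it requires, so a controller can tell single-handed from bimanual steps. The task names, action vocabulary, and decomposition table here are illustrative assumptions, not the group's actual method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str          # primitive motion, e.g. "reach", "grasp" (illustrative vocabulary)
    target: str        # object or location the motion acts on
    hands: tuple       # ("left",), ("right",), or ("left", "right")

# Toy decomposition table mapping each high-level task to an ordered
# action sequence. A real system would derive this from object, spatial,
# and task knowledge rather than a fixed lookup.
TASK_LIBRARY = {
    "open_jar": [
        Action("reach", "jar", ("left",)),
        Action("grasp", "jar", ("left",)),       # left hand stabilizes the jar
        Action("reach", "lid", ("right",)),
        Action("twist", "lid", ("right",)),      # right hand turns the lid
    ],
    "place_book": [
        Action("reach", "book", ("left", "right")),    # wide object: both hands
        Action("grasp", "book", ("left", "right")),
        Action("move", "shelf", ("left", "right")),
        Action("release", "book", ("left", "right")),
    ],
}

def plan(task: str) -> list:
    """Return the ordered action sequence for a known task."""
    if task not in TASK_LIBRARY:
        raise ValueError(f"no decomposition known for task: {task}")
    return TASK_LIBRARY[task]

def is_bimanual(step: Action) -> bool:
    """True when a step requires coordinating both hands at once."""
    return set(step.hands) == {"left", "right"}

# Example: inspect the plan for a task and flag its bimanual steps.
for step in plan("place_book"):
    print(step.name, step.target, "bimanual" if is_bimanual(step) else "one-handed")
```

Even this toy version makes the planning questions explicit: which primitive comes next, which hand performs it, and when the two hands must coordinate on the same object.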
Recent papers
- Publications forthcoming