Workshop on Humanoids 2025

At the Crossroads of Control:
Model-Based vs. Learning-Based Control for Humanoids

Thursday, October 2, 2025, Seoul, Korea

We are seeking contributions of late-breaking results and posters that address model-based, learning-based, or hybrid control approaches for reliable humanoid robots. If your results fit one of the topics below, please submit your contribution here by September 14, 2025, for potential inclusion in our workshop!

Topics

Model-based control methods and whole-body MPC for legged / humanoid robots

Model-based control approaches, including inverse dynamics and whole-body control frameworks, have long served as the foundation of humanoid locomotion. Among them, whole-body Model Predictive Control (MPC) has emerged as a powerful strategy for planning and executing dynamic motions with physical consistency and real-time constraint handling. This topic will explore the strengths and limitations of model-based frameworks, focusing on recent advancements in whole-body MPC for structured and semi-structured environments, as well as its real-world deployment challenges.
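
To make the scope concrete, most whole-body MPC formulations discussed under this topic can be cast as a receding-horizon optimal control problem of the following generic form (the notation is illustrative and not the formulation of any specific paper):

    \begin{aligned}
    \min_{x_{0:N},\,u_{0:N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) + \ell_N(x_N) \\
    \text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad k = 0, \dots, N-1, \\
    & g(x_k, u_k) \le 0, \\
    & x_0 = \hat{x},
    \end{aligned}

where x is the whole-body state, u the control input, f the (contact-consistent) dynamics, \ell the stage cost, and g collects path constraints such as torque limits, friction cones, and kinematic limits. The problem is re-solved at every control cycle from the current state estimate \hat{x}, and only the first input is applied.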

Reinforcement learning for locomotion and contact-rich motion control

Reinforcement learning (RL) has demonstrated impressive capabilities in simulation, enabling robots to learn robust and agile motions without explicit modeling. However, transferring RL policies to real humanoids remains a challenge, especially in contact-rich settings. This topic will highlight progress in policy learning and sim-to-real transfer techniques, as well as the role of reward shaping, curriculum learning, and policy distillation in achieving stable real-world deployment.
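
As one concrete example of the design choices this topic covers, a shaped locomotion reward typically trades off task tracking against effort and termination penalties. The sketch below is a minimal illustration; the function signature and weights are placeholders, not a recipe from any particular system:

    import numpy as np

    def shaped_reward(base_lin_vel, cmd_vel, joint_torques, fell):
        """Illustrative shaped locomotion reward (weights are placeholders):
        reward command tracking, penalize effort, and penalize falling."""
        err = np.asarray(base_lin_vel) - np.asarray(cmd_vel)
        tracking = np.exp(-np.dot(err, err) / 0.25)              # velocity-tracking term
        effort = 1e-4 * np.sum(np.asarray(joint_torques) ** 2)   # torque (effort) penalty
        termination = 10.0 if fell else 0.0                      # penalty on falling
        return tracking - effort - termination

In practice such terms are tuned per robot, combined with curricula over terrain or command difficulty, and often distilled into compact policies for deployment.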

Hybrid control architectures combining model-based and learning-based approaches

Hybrid control approaches aim to combine the robustness and interpretability of model-based control with the flexibility and scalability of learning-based methods. From learning residual policies to embedding differentiable physics into RL, this topic covers diverse integration strategies and their real-world effectiveness. Discussions will center on whether these hybrids truly deliver "the best of both worlds" and what architectural principles are emerging as best practices.
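
One common instantiation of such a hybrid is a residual architecture, sketched below in simplified form; the controller and policy interfaces shown (solve, act) are hypothetical placeholders rather than a specific library API:

    import numpy as np

    class ResidualController:
        """Minimal sketch of a residual hybrid architecture: a model-based
        controller provides a nominal command and a learned policy adds a
        small, bounded correction. Interfaces and scaling are hypothetical."""

        def __init__(self, model_based_controller, learned_policy, residual_scale=0.1):
            self.base = model_based_controller    # e.g., a whole-body QP or MPC controller
            self.policy = learned_policy          # e.g., a trained neural-network policy
            self.residual_scale = residual_scale  # keeps the learned correction small

        def compute_command(self, state, reference):
            u_nominal = self.base.solve(state, reference)  # model-based nominal command
            residual = self.policy.act(state, reference)   # learned correction, assumed in [-1, 1]
            residual = np.clip(residual, -1.0, 1.0) * self.residual_scale
            return u_nominal + residual

Bounding the residual keeps the learned component from overriding the model-based controller, which is one way such hybrids try to preserve interpretability and safety margins.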

Comparative evaluations and benchmarking on real hardware

Evaluating control methods solely in simulation often fails to capture the full spectrum of real-world complexity. This topic emphasizes the need for rigorous comparisons on physical hardware, including standardized benchmarks, open datasets, and metrics that go beyond final performance to include robustness, safety, and recovery from failure. Contributions that provide empirical insights on what works (and what doesn't) across platforms are especially welcome.

Practical engineering challenges: failure modes, interpretability, and maintainability

A high-performing algorithm on paper may not translate to a deployable solution in practice. This topic delves into the "last-mile" issues faced by engineers, such as debugging policy failures, interpreting behavior in failure modes, and maintaining complex control stacks. We seek contributions that share lessons learned from field deployment, particularly in identifying what bottlenecks slow or prevent scaling.

Future directions for learning-integrated locomotion control

As learning-based control continues to evolve, questions arise about how it can be safely and effectively integrated into everyday robot operation. This topic invites forward-looking perspectives on lifelong learning, modularity, safety constraints in learning, and control-stack redesign for future humanoid platforms. It also considers societal and deployment implications: what is needed to move from research to product?

Sim-to-real transfer and domain adaptation strategies

This topic explores how policies trained in simulation can generalize robustly to real-world variations in dynamics, sensing, and contact conditions, through techniques such as domain randomization and domain adaptation.
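
As a small illustration of one widely used strategy in this space, the sketch below randomizes a handful of simulator parameters per training episode; the parameter names, ranges, and the env interface in the usage comment are assumptions made for illustration only:

    import numpy as np

    def randomize_dynamics(nominal, rng):
        """Illustrative domain-randomization sketch: parameter names and ranges
        are placeholders, not tied to any specific simulator. Sampling new
        dynamics each episode keeps the policy from overfitting to one model
        of the robot and its environment."""
        return {
            "link_mass":          nominal["link_mass"] * rng.uniform(0.8, 1.2),
            "joint_friction":     nominal["joint_friction"] * rng.uniform(0.5, 1.5),
            "ground_friction":    rng.uniform(0.4, 1.0),   # sampled directly
            "actuation_delay_ms": rng.uniform(0.0, 20.0),  # sampled directly
        }

    # Per-episode usage with a hypothetical simulator wrapper `env`:
    # params = randomize_dynamics(nominal_params, np.random.default_rng(seed))
    # env.set_dynamics(params)
    # observation = env.reset()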

Submissions on related topics beyond those listed above are also welcome. We encourage contributions that explore novel ideas or emerging directions relevant to humanoid locomotion control, even if they do not fall squarely within the specified topics.