Workshop on Humanoids 2025

At the Crossroads of Control:
Model-Based vs. Learning-Based Control for Humanoids

Thursday, October 2, 2025, Seoul, Korea

Speakers

We have a great lineup of speakers covering a range of topics:

Sungjoon Choi, Korea University

Talk Title: Spatio-Temporal Motion Retargeting from Noisy Sources for Legged Robots

Recent advances in motion capture and reinforcement learning have enabled a two-stage pipeline, Spatio-Temporal Motion Retargeting (STMR), that transforms noisy source motions (e.g., hand-held videos or sparse keypoint trajectories) into dynamic, physically feasible behaviors for legged robots. STMR first reconstructs full-body, kinematically consistent motions from input keypoints and contact cues, then refines these trajectories in the temporal domain under dynamic and morphological constraints, providing demonstrations that guide reinforcement-learning policies. Hardware experiments on four distinct quadruped platforms, including terrain-aware backflips atop obstacles, demonstrate that STMR bridges the gap between raw, unstructured motion sources and robust, agile real-world robot behaviors.

Sungjoon Choi received his Ph.D. in Electrical and Computer Engineering (ECE) from Seoul National University (2018) and his B.S. degree in Electrical Engineering and Computer Science (EECS) from Seoul National University (2012). He is currently an assistant professor in the Department of Artificial Intelligence (AI) at Korea University. Before joining Korea University as a faculty member, he was a postdoctoral researcher at Disney Research Los Angeles and a research scientist at Kakao Brain in Korea. His research interests include natural motion generation and human-robot interaction, and he received the Best Conference Paper Finalist Award at the 2016 IEEE International Conference on Robotics and Automation (ICRA).

Hae-Won Park, Korea Advanced Institute of Science and Technology (KAIST)

Talk Title: TBA

Abstract: TBA

Prof. Hae-Won Park is the director of the Humanoid Robot Research Center and an Associate Professor of Mechanical Engineering at KAIST. He received his B.S. and M.S. from Yonsei University and his Ph.D. from the University of Michigan, Ann Arbor. Before joining KAIST, he was an Assistant Professor at the University of Illinois at Urbana-Champaign and a postdoctoral researcher at MIT. His research focuses on learning, model-based control, and robotic design, especially in legged and bio-inspired robots. Prof. Park has received several prestigious awards, including the NSF CAREER Award and the RSS Early-Career Spotlight Award, and serves on editorial boards for top robotics journals and conferences such as IJRR and IEEE ICRA.

Zachary Manchester, Carnegie Mellon University

Talk Title: Was Locomotion Really That Hard After All?

For decades, legged locomotion was a challenging research topic in robotics. In the last few years, however, both model-based and reinforcement-learning approaches have not only demonstrated impressive performance in laboratory settings, but are now regularly deployed "in the wild." One surprising feature of these successful controllers is how simple they can be. This talk will discuss several recent works from my group that try to push the limits of how simple locomotion controllers for general-purpose robots can be, approaching the question from several different viewpoints.

Zac Manchester is an associate professor in The Robotics Institute at Carnegie Mellon University, where he leads the Robotic Exploration Lab. His research leverages insights from physics, control theory, and optimization to enable robotic systems that can achieve the same level of agility, robustness, and efficiency as humans and animals. His lab develops algorithms for controlling a wide range of autonomous systems, from cars merging onto highways to spacecraft landing on Mars. Zac previously worked at NASA Ames Research Center and received a NASA Early Career Faculty Award in 2018, a Google Faculty Award in 2020, and an NSF CAREER Award in 2025. He has also served as Principal Investigator of four NASA small-satellite missions.

Geoffrey Clark, Florida Institute for Human and Machine Cognition (IHMC)

Talk Title: Walk Like a Human: Lessons in Training Robots with Generative Rewards

The field of AI-driven whole-body control and locomotion has made substantial strides in recent years, particularly in enabling robotic systems to closely mimic human motion. Leveraging motion capture (mocap) data and imitation learning frameworks, researchers have succeeded in transferring human-like movement patterns to robotic platforms with impressive fidelity. However, while the reproduction of known behaviors is now relatively well-understood, the generation of entirely new motions—especially those required for unseen tasks or environments—remains a fundamental open challenge.
In this talk, I will explore how generative modeling can help bridge this gap. Specifically, I will discuss approaches that move beyond supervised imitation to enable robots to synthesize novel, plausible, and task-relevant motions autonomously. This includes integrating generative models into control pipelines and designing reward functions that support exploration and diversity without sacrificing physical realism or goal-directed behavior.
Drawing on examples from humanoid locomotion and full-body control, I will highlight both the opportunities and the limitations of current generative approaches, and discuss the unique challenges of working in high-dimensional action spaces where physical constraints and temporal consistency are critical.

Dr. Geoffrey Clark joined IHMC as a Research Scientist in May 2023, working with Dr. Robert Griffin and other members of his team. Geoffrey received his Ph.D. in Electrical Engineering from Arizona State University, where he worked in the Interactive Robotics Lab. His research focused on the intersection of classical control, optimal filtering, and machine learning to create symbiotic human-robot interactions for robotic prosthetics and exoskeletons. His work also developed adaptable reinforcement-learning policies for legged robots by taking inspiration from human locomotion.

He also served as a mentor in robotics and machine learning for Desert WAVE (Women in Autonomous Vehicle Engineering). He earned a Dean's Fellowship and an Arizona Graduate Scholar Award while studying at ASU.

Carlos Mastalli, Heriot-Watt University & Florida Institute for Human and Machine Cognition

Talk Title: From Crocoddyl to ODYN and Genesys: Building Optimization-Driven Robotics

The next generation of agile and versatile robots will rely on optimization not just as a tool, but as the core computational layer unifying planning, control, estimation, simulation, and learning. In this presentation, I will introduce ODYN, our real-time optimization framework designed to address the complexities of contact-rich interaction and closed-loop kinematics in control and estimation, and Genesys, our differentiable contact simulator built on these principles. Together, they form a foundational software stack that we plan to open-source to the robotics community. I will also present our advances in whole-body model predictive control, inertial parameter identification, and system identification for robots with significant friction and closed mechanisms through our new version of Crocoddyl. These efforts illustrate how an optimization-driven foundation can empower robots to move, adapt, and learn with unprecedented agility in the real world.

Carlos Mastalli is an Assistant Professor at Heriot-Watt University, UK, and a Research Scientist at the Florida Institute for Human & Machine Cognition (IHMC), USA. He is the Head of the Robot Motor Intelligence (RoMI) Lab and has pioneered algorithms for motor intelligence in legged robotics over the last decade. Over the years, Carlos has conducted cutting-edge research in several world-leading labs, including IIT (Italy), LAAS-CNRS (France), ETH Zürich (Switzerland), and the University of Edinburgh (UK). He supports open science, having built a large community around the Crocoddyl optimal control library, and is currently releasing ODYN, one of the most advanced optimization frameworks in the world. Carlos specializes in numerical optimization for real-time planning, control, estimation, simulation, and learning. His efforts are paving the way to advance motor intelligence in legged robots.