One AI Model Unlocks Humanlike Robot Movement

Summary
– Atlas now uses a single AI model to control both its walking and grasping abilities, a significant advancement over previous separate models.
– The model exhibits emergent skills, such as instinctively recovering dropped items without specific training for that action.
– Developed by Boston Dynamics and Toyota Research Institute, the model processes visual, proprioceptive, and language input to guide the robot’s movements.
– This approach allows Atlas to move more naturally, for example repositioning its legs for balance when reaching into a low bin, much as a human would.
– Researchers hope this method will lead to robots developing unexpected new skills, similar to how large language models exhibit emergent abilities.
The remarkable Atlas humanoid robot, known for its athletic parkour and dance performances, has achieved a new breakthrough in robotic movement and manipulation. Rather than relying on separate AI systems for locomotion and object interaction, Atlas now operates using a single, unified artificial intelligence model. This integrated approach enables more fluid, humanlike behavior and has even led to the emergence of unexpected skills not explicitly programmed into the system.
Developed through a collaboration between Boston Dynamics and the Toyota Research Institute, this generalist model processes visual input, proprioceptive feedback, and language-based commands to coordinate both the robot’s arms and legs. Russ Tedrake, a roboticist leading the project, describes the approach as treating the feet like “additional hands,” allowing for a more holistic and adaptive form of movement.
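To make the idea concrete, the sketch below shows what a single whole-body policy interface could look like in Python: one model consumes a camera image, proprioceptive state, and a language command, and emits a single action vector spanning arm and leg joints alike. This is a minimal illustration under assumed names and dimensions, not Boston Dynamics or Toyota Research Institute code.

```python
# Illustrative sketch (assumed interface, not the actual Atlas software):
# a unified policy that maps vision, proprioception, and language to one
# action vector covering every actuated joint, arms and legs together.
from dataclasses import dataclass
import numpy as np


@dataclass
class Observation:
    image: np.ndarray           # RGB camera frame, e.g. shape (H, W, 3)
    proprioception: np.ndarray  # joint positions/velocities, IMU readings, etc.
    command: str                # natural-language task instruction


class WholeBodyPolicy:
    """Single model producing commands for the whole body at once."""

    def __init__(self, action_dim: int):
        # action_dim covers all actuated joints, not just the arms.
        self.action_dim = action_dim

    def act(self, obs: Observation) -> np.ndarray:
        # A real large behavior model would encode each modality and decode
        # joint targets; here a placeholder zero action stands in for that.
        return np.zeros(self.action_dim)


# Usage: one policy call drives the whole body, so reaching into a low bin
# can shift the stance and move the arm in the same step.
policy = WholeBodyPolicy(action_dim=56)  # hypothetical joint count
obs = Observation(
    image=np.zeros((224, 224, 3), dtype=np.uint8),
    proprioception=np.zeros(120),
    command="pick the part out of the low bin",
)
action = policy.act(obs)
```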
The model was trained using a combination of teleoperation, simulation, and recorded demonstrations, resulting in what researchers call a large behavior model (LBM). This system allows Atlas to perform tasks with a naturalness previously unseen in robotics. For example, when retrieving objects from a low bin, the robot subtly shifts its weight and adjusts its stance, much like a human would. Even more impressively, the LBM has demonstrated emergent behaviors, such as instinctively bending to recover a dropped item without prior training for that specific scenario.
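The training recipe described here, learning from teleoperation, simulation, and recorded demonstrations, is a form of imitation learning. The toy example below sketches the basic behavior-cloning loop of regressing recorded actions from recorded observations; the dimensions, the linear policy standing in for the large behavior model, and the synthetic data in place of real demonstration logs are all assumptions for illustration.

```python
# Illustrative behavior-cloning sketch (assumed setup, not the actual LBM
# training code): fit a policy to (observation, action) pairs collected from
# teleoperated, simulated, and recorded demonstrations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration dataset: observation features -> joint targets.
obs_dim, act_dim, n_demos = 64, 56, 1000
observations = rng.normal(size=(n_demos, obs_dim))
true_map = 0.1 * rng.normal(size=(obs_dim, act_dim))
actions = observations @ true_map + 0.01 * rng.normal(size=(n_demos, act_dim))

# A linear policy stands in for the large behavior model.
weights = np.zeros((obs_dim, act_dim))
learning_rate = 0.1

for epoch in range(100):
    predictions = observations @ weights
    error = predictions - actions
    # Mean-squared-error gradient with respect to the policy weights.
    grad = observations.T @ error / n_demos
    weights -= learning_rate * grad

mse = float(np.mean((observations @ weights - actions) ** 2))
print(f"final imitation loss: {mse:.6f}")
```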
This development signals a shift toward more generalized robotic intelligence, echoing advances seen in large language models (LLMs). Just as LLMs sometimes reveal unexpected capabilities like coding or creative writing, roboticists anticipate that training robots on diverse tasks will lead to novel, unpredicted skills. Tedrake’s team is already exploring this potential with other robotic systems, including arms trained for activities like slicing vegetables and cleaning up spills.
The implications are significant for the future of automation, where versatile and adaptive robots could perform complex tasks in dynamic environments, from warehouses to homes, with greater efficiency and autonomy.
(Source: Wired)