With the rapid development of robotics, scientists have once again pushed the boundaries of the field. Recently, a research team led by Hod Lipson of Columbia University's Department of Mechanical Engineering published an eye-catching study in the journal Nature Machine Intelligence: by combining visual learning with robotics, the team developed a new strategy that enables robots to build an understanding of their own structure and motion simply by watching themselves move.
The Core of Robotics Technology
The core of this technology is to let robots build kinematic self-awareness from video captured by an ordinary 2D camera. Through self-observation, a robot can refine its movements, predict its own motion in space, and even recover from damage without human intervention, opening a new path for autonomous robotics. Using deep neural networks and a standard camera, the research team enabled robots to autonomously construct three-dimensional kinematic models of themselves. Much like a person looking in a mirror, the robot perceives itself through vision and learns to understand and adapt its own movements.
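The idea of learning a kinematic self-model from self-observation can be sketched with a toy example. The snippet below is a hypothetical illustration, not the paper's actual architecture: a simulated 2-link planar arm stands in for the real robot, its end-effector position stands in for what a camera would see, and a small neural network is trained to predict where the arm will end up for a given joint command. The names (`observe`, `predict`), link lengths, and network sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2-link arm (assumed link lengths); real work uses a physical
# robot observed by a 2D camera.
L1, L2 = 1.0, 0.7

def observe(q):
    """'Camera' observation: end-effector (x, y) for joint angles q (N, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# The robot "babbles": random joint commands paired with what it observes.
Q = rng.uniform(-np.pi, np.pi, size=(2000, 2))
X = observe(Q)

# Small MLP self-model: joint angles -> predicted end-effector position.
H = 64
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

def predict(q):
    """Self-model prediction for joint commands q."""
    return np.tanh(q @ W1 + b1) @ W2 + b2

q_test = rng.uniform(-np.pi, np.pi, (200, 2))
mae0 = np.abs(predict(q_test) - observe(q_test)).mean()  # untrained error

# Train the self-model by gradient descent on its own prediction error.
lr = 0.05
for step in range(3000):
    idx = rng.integers(0, len(Q), 256)
    q, x = Q[idx], X[idx]
    h = np.tanh(q @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - x                      # prediction vs. observation
    gW2 = h.T @ err / len(q); gb2 = err.mean(0)   # backprop
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = q.T @ dh / len(q); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The learned model now predicts the arm's position for unseen commands.
mae = np.abs(predict(q_test) - observe(q_test)).mean()
print(f"prediction error: {mae0:.3f} (untrained) -> {mae:.3f} (trained)")
```

The point of the sketch is the training signal: no one tells the robot its own geometry; the model is fitted entirely to pairs of (motor command, visual observation) that the robot gathers by moving and watching itself.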
By developing this kind of “self-awareness,” robots become more independent, adaptable, and efficient in real-world environments such as homes, factories, and disaster zones. This self-modeling ability matters in practice: if a robot is damaged while performing a task, traditional methods typically require human intervention to repair it, whereas a robot with self-modeling capabilities can observe its own damage, adjust its movements, and continue the task, improving the robustness and reliability of the system.
“We humans can’t always repair broken parts and tune performance parameters for robots the way we take care of babies,” said Professor Lipson. “If robots are to be truly useful, they must learn to take care of themselves, which is why self-modeling technology is so important.”

Continuing Research in Robotics
This research builds on two decades of work at Columbia University on methods for robots to create self-models using cameras and other sensors. In 2006, their robots could generate only simple models. A decade later, multiple cameras produced more complete, high-fidelity models. Now, for the first time, the team has built a complete motion model of a robot from short video clips captured by a single standard camera.
Professor Lipson explained: “We humans are born with an intuitive understanding of our bodies, and we can envision future states and evaluate the consequences of our actions before we actually act. Our ultimate goal is to enable robots to have similar self-imagination abilities. Once they can foresee the future, their potential will be unlimited.”
The Infinite Possibilities of Robotics
In a sense, this technology marks a transformation in the relationship between robots and humans. Through this fusion of visual learning and robotics, robots gradually improve their motor skills through self-observation and can predict their own spatial behavior. This ability suggests we are entering a new era of human-machine symbiosis, in which robots are no longer simple tools but intelligent partners that can serve us more independently.
When robots look in the mirror, they no longer see just their own reflection, but the beginning of a new process of self-cognition. That process not only reveals the possibilities of robotics, but also points to how future technology will change the way people live.