
Robot Emo Predicts Human Smiles 0.9 Seconds in Advance, Smiles Along with Humans

Image: the Emo robot predicting human facial expressions
Emo’s eyes are equipped with cameras, and beneath its blue, flexible silicone skin lie 26 motors that, like human facial muscles, power its expressions. It can predict an upcoming smile 839 milliseconds before it appears on a person’s face and smile along simultaneously. Emo can also predict expressions of sadness, anger, and surprise.

Large language models have rapidly advanced robots’ verbal communication capabilities, but non-verbal communication has not kept pace. Now, researchers at Columbia University have developed a robot that observes human faces and, from subtle facial changes, can predict a person’s smile 0.9 seconds in advance and respond with a smile of its own. The research was published in Science Robotics on March 27.

Artificial intelligence can mimic human language, but robots have yet to replicate the complex non-verbal cues and behaviors that are crucial for communication. Humanoid robots can rely on sound to communicate, but expressing themselves through facial movements poses a dual challenge: physically driving a face capable of rich expressions is hard, and deciding which expressions to generate so that the robot appears realistic, natural, and well-timed is complex.

Researchers at Columbia University suggest that training robots to predict future facial expressions and execute them simultaneously with humans could mitigate these challenges. A team led by Hod Lipson, who directs Columbia University’s Creative Machines Lab, developed a robot named Emo that uses AI models and high-resolution cameras to predict human facial expressions and attempt to replicate them.

According to the paper, the robot can predict an upcoming smile 839 milliseconds before it happens and express the smile simultaneously with the person. Yuhang Hu, the lead author of the paper and a Ph.D. student at Columbia University’s Creative Machines Lab, explained that, for example, before a smile fully forms, there is a brief moment when the corners of the mouth begin to lift and the eyes start to crinkle slightly. Emo captures these subtle changes on people’s faces to predict their expressions. The researchers demonstrated this ability using a robot face with 26 degrees of freedom.
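To give a rough sense of the kind of cue involved (an illustrative sketch, not the paper’s actual model), a smile-onset signal can be computed from tracked facial landmarks: if the mouth corners rise relative to the upper lip across a short window of frames, a smile is likely forming. The landmark indices and the frame-rate arithmetic below are assumptions for illustration.

import numpy as np

# Hypothetical landmark layout: each frame is an (N, 2) array of (x, y)
# pixel coordinates from any face-landmark tracker. Indices are assumed.
LEFT_MOUTH_CORNER = 48   # assumed index
RIGHT_MOUTH_CORNER = 54  # assumed index
UPPER_LIP_CENTER = 51    # assumed index

def smile_onset_score(frames):
    """Scalar cue for an emerging smile over a window of landmark frames.

    Positive values mean the mouth corners are rising relative to the
    upper lip -- the subtle pre-smile movement described in the article.
    """
    def corner_lift(lm):
        # Image y grows downward, so corners above the lip give positive lift.
        corners_y = (lm[LEFT_MOUTH_CORNER, 1] + lm[RIGHT_MOUTH_CORNER, 1]) / 2
        return lm[UPPER_LIP_CENTER, 1] - corners_y

    lifts = np.array([corner_lift(lm) for lm in frames])
    t = np.arange(len(lifts))
    # Slope of the lift signal over the window: rising corners -> smile onset.
    return float(np.polyfit(t, lifts, 1)[0])

# At 30 fps, the 839 ms lead time reported in the paper corresponds to
# roughly 25 frames between this early cue and the fully formed smile.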

Emo Details

The robot includes 26 motors and uses position control: three motors drive the neck’s movement across three axes, twelve control the upper face (the eyeballs, eyelids, and eyebrows), and eleven control the mouth and jaw.
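As a minimal sketch of how this motor layout might be represented in control software (the grouping follows the article, but all names, ID assignments, and the position-command interface are assumptions):

from dataclasses import dataclass

@dataclass
class MotorGroup:
    name: str
    motor_ids: range  # assumed contiguous ID assignment
    description: str

# 26 motors split as described above: 3 neck, 12 upper face, 11 mouth/jaw.
MOTOR_GROUPS = [
    MotorGroup("neck", range(0, 3), "head motion across three axes"),
    MotorGroup("upper_face", range(3, 15), "eyeballs, eyelids, eyebrows"),
    MotorGroup("mouth_jaw", range(15, 26), "lips, mouth corners, jaw"),
]

def send_positions(targets):
    """Stand-in for a position-control command: motor ID -> target position
    in [0, 1]. A real driver would clamp, rate-limit, and stream these
    values to the servo bus."""
    for motor_id, pos in sorted(targets.items()):
        print(f"motor {motor_id:2d} -> {pos:.2f}")

# Example: command a neutral pose across all 26 motors.
send_positions({m: 0.5 for g in MOTOR_GROUPS for m in g.motor_ids})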

The robot uses two neural networks: one observes human faces and predicts their expressions, and the other learns how to produce those expressions on the robot’s own face. The first network was trained on videos of human expressions collected from video websites; the second let the robot train itself by watching its own expressions through a live camera.
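The division of labor between the two networks can be sketched as below, assuming a PyTorch setup; the architectures, tensor shapes, and training details are illustrative stand-ins, not the models from the paper.

import torch
import torch.nn as nn

class ExpressionPredictor(nn.Module):
    """Network 1 (sketch): from a short clip of face frames, predict the
    landmark configuration of the expression about to form."""
    def __init__(self, num_landmarks: int = 68):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_landmarks * 2)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, frames, height, width) -> (batch, num_landmarks * 2)
        return self.head(self.encoder(clip))

class SelfModel(nn.Module):
    """Network 2 (sketch): an inverse model mapping a desired landmark
    configuration to the 26 motor positions that would produce it,
    learned by the robot watching its own face through a camera."""
    def __init__(self, num_landmarks: int = 68, num_motors: int = 26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_landmarks * 2, 128), nn.ReLU(),
            nn.Linear(128, num_motors), nn.Sigmoid(),  # positions in [0, 1]
        )

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        return self.net(landmarks)

# Co-expression loop (sketch): predict the person's upcoming expression,
# then drive the motors toward the pose that reproduces it.
predictor, self_model = ExpressionPredictor(), SelfModel()
clip = torch.randn(1, 3, 8, 64, 64)           # stand-in camera frames
future_landmarks = predictor(clip)            # (1, 136)
motor_targets = self_model(future_landmarks)  # (1, 26) target positions

The second network plays the role of a learned self-model: rather than hand-calibrating which motor produces which facial movement, the robot observes its own face and learns the mapping, which is what the mirror analogy below refers to.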

“When it pulls all these muscles, it knows what its face will look like,” said Lipson. “It’s a bit like a person looking in a mirror, knowing what their face will look like even if they smile with their eyes closed.”

The researchers hope this technology will make human-robot interaction more lifelike. They believe that robots must first learn to predict and mimic human expressions before advancing to more spontaneous and self-driven expressive communication.

In addition to smiles, Emo can predict expressions of sadness, anger, and surprise. However, it cannot produce the full range of human expressions, since it has only 26 facial “muscles.” The researchers plan to expand the robot’s expressive range and hope to train it to respond to what people say rather than simply mimicking them. They are also integrating verbal communication into Emo using large language models, allowing it to answer questions and hold conversations.
