A team at the Ulsan National Institute of Science and Technology (UNIST) in South Korea has developed a breakthrough technology that recognizes human emotions in real time, promising to transform industries and enable next-generation wearable systems built on emotion-based services. The related paper was published in the journal Nature Communications.
Understanding and accurately extracting emotional information has long been a challenge because emotions are abstract and ambiguous by nature. To address this, the research team developed a multimodal human emotion recognition system that combines verbal and non-verbal expression data to make full use of the available emotional information.
Systems using the Personalized Skin-Integrated Facial Interface (PSiFI)
At the heart of the system is the Personalized Skin-Integrated Facial Interface (PSiFI), built around a first-of-its-kind bidirectional triboelectric strain and vibration sensor that simultaneously senses and collects verbal and non-verbal expression data, then transmits it wirelessly for real-time emotion recognition.
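The paper does not publish firmware, but the data path can be pictured as two synchronized channels, facial strain and vocal vibration, packed into timestamped frames and streamed over a wireless link. The Python sketch below is a hypothetical illustration of that pipeline; the frame format, sample rate, and UDP transport are assumptions, not details from the paper.

```python
import math
import random
import socket
import struct
import time

# Hypothetical sketch: stream two synchronized PSiFI-style channels
# (facial strain, vocal-cord vibration) as timestamped UDP frames.
# The real device's radio protocol is not described at this level in
# the paper; everything below is illustrative.

FRAME = struct.Struct("<d f f")  # timestamp, strain, vibration

def read_sensors(t: float) -> tuple[float, float]:
    """Stand-in for the sensor front end: returns synthetic
    (strain, vibration) readings instead of real hardware values."""
    strain = 0.5 * math.sin(2 * math.pi * 1.5 * t)       # slow facial motion
    vibration = 0.05 * math.sin(2 * math.pi * 120 * t)   # voiced-speech band
    noise = random.gauss(0, 0.01)
    return strain + noise, vibration + noise

def stream(host: str = "127.0.0.1", port: int = 9999,
           rate_hz: int = 500, duration_s: float = 1.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t0 = time.time()
    while (t := time.time() - t0) < duration_s:
        strain, vib = read_sensors(t)
        sock.sendto(FRAME.pack(t, strain, vib), (host, port))
        time.sleep(1.0 / rate_hz)
    sock.close()

if __name__ == "__main__":
    stream()
```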
The technology is based on triboelectric charging, the phenomenon in which two surfaces acquire opposite electric charges when rubbed together. Because the sensors generate their own electrical signal from this effect, the system requires no external power source or complex measurement equipment to capture the data.
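In the simplest contact-separation model of a triboelectric generator, the open-circuit voltage is proportional to the transferred surface charge density and the instantaneous gap between the two layers, V_oc = σ·x(t)/ε₀, which is why deformation alone yields a usable signal with no external supply. The short sketch below evaluates this first-order textbook model; the numbers are illustrative and do not come from the paper.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def triboelectric_voc(sigma_c_per_m2: float, gap_m: float) -> float:
    """First-order open-circuit voltage of a contact-separation
    triboelectric generator: V_oc = sigma * x / eps0."""
    return sigma_c_per_m2 * gap_m / EPS0

# Illustrative numbers only: ~10 uC/m^2 of transferred charge and a
# gap opened by facial-muscle deformation.
sigma = 10e-6  # C/m^2
for gap_um in (10, 50, 100):
    v = triboelectric_voc(sigma, gap_um * 1e-6)
    print(f"gap {gap_um:3d} um -> V_oc ≈ {v:6.1f} V")
```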
The team integrated the detection of facial muscle deformation and vocal cord vibration into a single device, experimentally collecting multimodal data from both channels. With minimal training, the system demonstrated high recognition accuracy, while remaining wearable and convenient. It can also be applied in VR environments, serving as a "digital concierge" in settings such as smart homes, private cinemas, and smart offices.
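High accuracy from minimal training is consistent with learning from just a handful of labeled examples per emotion. As a hypothetical sketch, the Python below fuses simple statistics from the strain and vibration channels into one feature vector and classifies it with a nearest-centroid rule trained on three examples per class; the feature set, classifier, and data are all assumptions for illustration, not the team's actual model.

```python
import numpy as np

# Hypothetical few-shot multimodal classifier: fuse facial-strain and
# vocal-vibration windows into one feature vector, then classify with
# nearest centroid. Illustrative only; not the model from the paper.

def features(strain: np.ndarray, vibration: np.ndarray) -> np.ndarray:
    """Concatenate cheap per-channel statistics into a fused vector."""
    def stats(x):
        return [x.mean(), x.std(), np.abs(np.diff(x)).mean()]
    return np.array(stats(strain) + stats(vibration))

class NearestCentroid:
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: X[np.array(y) == c].mean(axis=0)
                           for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# Synthetic stand-in data: 3 labeled windows per emotion.
rng = np.random.default_rng(0)
def window(scale):  # fake one-second window for each channel
    return rng.normal(0, scale, 500), rng.normal(0, scale * 0.2, 500)

X, y = [], []
for label, scale in [("neutral", 0.1), ("happy", 0.4), ("angry", 0.8)]:
    for _ in range(3):  # "minimal training": three examples per class
        X.append(features(*window(scale)))
        y.append(label)

clf = NearestCentroid().fit(np.vstack(X), y)
print(clf.predict(features(*window(0.4))))  # expected: happy
```

A nearest-centroid rule is about the simplest classifier that still benefits from fusing both modalities; any few-shot learner could stand in for it here.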
According to the researchers, the results show the potential of emotion, one of the most complex forms of human information, as an input for next-generation wearable systems. With this system, real-time emotion recognition can be achieved after only a few learning steps and without the complex measurement equipment previously required, opening the door to portable emotion recognition devices and next-generation emotion-based digital platform services.
As the paper notes, emotions are a complex form of human information that shows through facial expressions and voice. A system that detects facial muscle deformation and vocal vibration can therefore act as an attentive observer of our changing state of mind: it can recognize pleasure or boredom and respond accordingly, offering options that match a person's preferences and tailoring its service to them. Still, people sometimes disguise their moods; can such a system recognize a fake smile?