Against the backdrop of rapid development in Internet of Things (IoT) technology, Google and Synaptics recently announced a major partnership to jointly develop edge AI for the IoT. The collaboration aims to deliver multimodal processing, build more efficient context-aware computing, and lay a foundation for the next generation of smart devices.
The partnership will integrate Google’s MLIR-compliant ML kernels, open-source software, and tooling with Synaptics Astra hardware. The goal is to accelerate the development of AI-enabled IoT devices that can process vision, image, voice, sound, and other modalities, enabling seamless interaction for wearables, home appliances, entertainment, embedded hubs, monitoring, and control across consumer, automotive, enterprise, and industrial systems.
Software and Hardware Combination
As a leading company in human-computer interaction technology, Synaptics will combine its Astra hardware with Google’s MLIR-compliant machine learning (ML) kernels. Beyond accelerating IoT AI device development, the combination is intended to advance multimodal processing of vision, image, voice, and sound data, delivering a more seamless user experience across wearables, home appliances, entertainment systems, embedded devices, monitoring systems, and industrial control.
Billy Rutledge, director of systems research at Google Research, emphasized that the partnership helps both companies address the power, performance, cost, and space constraints of edge AI devices. He noted that Synaptics’ experience with open-source software and proven AI hardware will allow Astra products to integrate well with Google’s ML kernels and open up a broader market.
Reasons for Using Edge AI
As AI technology continues to advance, particularly in natural language processing, computer vision, and audio signal processing, edge AI applications will become more widespread. Technologies such as AI image generation and AI writing are also evolving rapidly; backed by ever more powerful processing and algorithms, these tools are changing how people create. For artists and content creators, AI tools not only improve efficiency but also open up new sources of inspiration and creative possibilities.
While combining IoT and AI usually implies a connection to the Internet or the cloud, security concerns are driving growing demand for local, edge-based processing. This has opened a new evolutionary path for IoT devices: using edge AI to improve device performance, availability, and security while effectively complementing cloud functions and services.
Edge AI is the deployment of AI applications on devices throughout the physical world. It is called edge AI because the computation happens at the edge of the network, close to where the data is generated, rather than in the cloud. The edge can be almost anywhere: retail stores, factories, hospitals, or devices such as traffic lights, automated machines, and phones. The benefits of edge AI include lower costs for sending data to the cloud, protection of sensitive data, real-time processing, and reduced dependence on the network.
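The edge-first pattern described above can be sketched in a few lines: run inference locally, and only fall back to a remote service when the on-device model is unsure, so raw data stays on the device in the common case. This is a minimal illustration with made-up names (`EdgeClassifier`, `cloud_classify`); it is not an actual Google or Synaptics API.

```python
# Minimal sketch of an edge-first inference pipeline with cloud fallback.
# All class and function names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float
    source: str  # "edge" or "cloud"

class EdgeClassifier:
    """Stand-in for an on-device model (e.g. a tiny quantized audio classifier)."""
    def predict(self, samples):
        # Toy rule: loud signals are "speech", quiet ones are "silence".
        energy = sum(abs(s) for s in samples) / max(len(samples), 1)
        if energy > 0.5:
            return Prediction("speech", 0.9, "edge")
        return Prediction("silence", 0.4, "edge")

def cloud_classify(samples):
    """Stand-in for a remote service; in practice this would be a network call."""
    return Prediction("background-noise", 0.99, "cloud")

def classify(samples, edge_model, threshold=0.7):
    """Prefer the edge result; escalate to the cloud only when confidence is low,
    which keeps raw data on the device and avoids network round-trips."""
    pred = edge_model.predict(samples)
    if pred.confidence >= threshold:
        return pred
    return cloud_classify(samples)
```

The key design point is the confidence threshold: it trades off privacy and latency (stay local) against accuracy (escalate), which mirrors the cost, privacy, and real-time benefits of edge AI listed above.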
Application of Edge AI
The potential of IoT edge AI is enormous, enabling intelligent decision-making and operation across many scenarios. In smart homes, for example, voice recognition and image processing let users control household devices with simple commands, making daily life more convenient. In industrial settings, real-time data analysis helps improve production efficiency, reduce errors, and lower costs.
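The smart-home scenario above, where a recognized voice command is turned into a device action, can be sketched with a tiny dispatcher. The keyword matching below is a deliberately crude stand-in for a real intent model, and the device names are invented for illustration.

```python
# Hypothetical sketch: mapping a recognized voice transcript to a device action
# in a smart-home hub. Devices and commands are illustrative, not a real API.

HANDLERS = {
    ("light", "on"): lambda: "light: on",
    ("light", "off"): lambda: "light: off",
    ("thermostat", "up"): lambda: "thermostat: raise 1 degree",
}

def dispatch(transcript):
    """Match device/action keywords in the transcript to a registered handler."""
    words = transcript.lower().split()
    for (device, action), handler in HANDLERS.items():
        if device in words and action in words:
            return handler()
    return "unrecognized command"
```

In a real hub the transcript would come from an on-device speech recognizer, and the handlers would talk to actual appliances; the dispatch table structure stays the same.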
Since the advent of edge computing, many problems caused by cloud computing’s over-centralized data processing have been alleviated, giving human-machine interaction, a field that demands high reliability and real-time performance, a new direction. Synaptics, for its part, develops human-machine interface solutions for smartphones, personal computers, cars, and smart home devices such as speakers, and applies its edge AI technology across a range of interactive products.
As the technology matures, the power of edge AI will be unleashed, driving the next wave of AI applications. Edge AI will create opportunities that were previously out of reach: helping radiologists identify pathologies, assisting drivers on highways, and supporting plant pollination. Edge AI models can combine historical data, weather patterns, grid health, and other information to deliver more efficient energy production, distribution, and management. AI-equipped sensors can scan equipment for defects and raise an alarm when a machine needs maintenance, so problems are caught early. Modern AI-equipped medical instruments can perform minimally invasive surgery using ultra-low-latency surgical video streams.
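The maintenance-alarm idea above can be illustrated with a lightweight check of the kind an edge device could run without any cloud connection: compare recent sensor readings against a healthy baseline and alarm on large deviations. The z-score rule and the vibration numbers are assumptions chosen for illustration, not a method attributed to either company.

```python
# Hypothetical sketch: a z-score alarm over machine vibration readings,
# small enough to run entirely on an edge sensor. Thresholds are illustrative.

import statistics

def needs_maintenance(baseline, recent, z_limit=3.0):
    """Flag the machine when recent readings depart from the healthy baseline.

    baseline: samples collected while the machine is known to be healthy
    recent:   the latest window of samples from the sensor
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        # Degenerate baseline: any change at all is suspicious.
        return any(r != mean for r in recent)
    # Alarm if any recent sample is more than z_limit standard deviations out.
    return any(abs(r - mean) / stdev > z_limit for r in recent)
```

Running the check locally means the alarm fires with no network latency, and only the alarm event, not the raw sensor stream, ever needs to leave the device.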
Summary
However, the development of edge AI also brings challenges and risks. Ensuring data security and privacy, along with algorithmic transparency in multimodal processing, are issues that urgently need to be addressed. While technology drives progress, industry players need to strengthen cooperation, formulate sound norms and standards, and protect users’ rights and interests.
In summary, the partnership between Google and Synaptics marks a new starting point for IoT edge AI. As smarter devices enter people’s lives, future smart homes, wearables, and industrial systems will become more human-centered and efficient. We look forward to this collaboration bringing fresh vitality and better solutions to the industry.