
New Approach to Robot Hand-Eye Coordination


In recent years, artificial intelligence (AI) has become the focal point of the Consumer Electronics Show (CES), with both the technology supply chain and end products revolving around it. CES 2026 served as a stage for AI's large-scale deployment, and embodied intelligence stood out as one of the most prominent areas, with numerous Chinese companies showcasing leading capabilities at this year's exhibition.

Notably, while various humanoid robot bodies captured the audience's attention, a number of solution providers also unveiled their latest technological advances. RoboSense set up a "delivery robot" demonstration at this year's CES, capable of autonomously completing a long chain of tasks from gift packing, transportation, and unpacking to box recycling. Supporting the demonstration is the company's self-developed "hand-eye coordination" solution, which integrates several core technologies, including the VTLA-3D manipulation model, the Active Camera robotic vision system, and a multi-degree-of-freedom dexterous hand.

According to RoboSense, this is one of the longest continuous, non-teleoperated on-site demonstrations shown to date. The company has established an end-to-end technical closed loop for embodied intelligence and plans to gradually shift its business structure from one centered primarily on lidar to a more balanced model that gives equal weight to robotic components and solutions.

How Robots Master the “Last 100 Meters”

RoboSense humanoid robot

Currently, delivery vehicles based on autonomous driving technology can cover the "last mile," but the "last 100 meters," which involves entering buildings or navigating outdoor spaces, still relies mainly on human labor. As labor shortages in the delivery industry grow more acute, particularly in high-end communities with vehicle restrictions, and as demand for instant delivery rises, robotic substitution is becoming a significant trend.

At CES 2026, RoboSense simulated the entire process of instant logistics delivery, demonstrating the robot's operational capabilities from the "first 100 meters" to the "last 100 meters." Throughout the demonstration, the robot executed steps such as box retrieval, packing, transportation, unpacking, and folding/recycling entirely on its own, with no human intervention or remote operation.

From a technical architecture perspective, a delivery robot can be divided into a lower-body mobility platform and an upper-body manipulation unit. The lower body primarily handles point-to-point movement and builds on relatively mature autonomous driving solutions; the upper body is where the current technical challenge lies: the dexterous hand, often called "the most difficult part of robot manufacturing," and its close coordination with the vision system.

To address this challenge, RoboSense introduced its new-generation robotic vision system, AC2. This system integrates a solid-state dToF lidar, a binocular RGB camera, and an IMU to form a multi-sensor fusion setup. It can maintain a stable ranging accuracy of ±5mm within an 8-meter detection range, aiding robots in performing precise operations in complex environments. This vision solution is applicable across various scenarios including humanoid robots, warehouse AGVs, home service robots, and digital twins.
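RoboSense has not disclosed how the AC2 fuses its sensors internally, but the general idea of combining a dToF lidar point cloud with RGB imagery into a colored point cloud can be sketched as below. The function, its arguments, and the calibration matrices are hypothetical, assuming standard lidar-to-camera extrinsics and a pinhole camera model.

```python
import numpy as np

def colorize_point_cloud(points_lidar, rgb_image, T_cam_lidar, K):
    """Attach RGB colors to lidar points by projecting them into one camera.

    points_lidar : (N, 3) xyz points in the lidar frame, in meters
    rgb_image    : (H, W, 3) image from one of the RGB cameras
    T_cam_lidar  : (4, 4) extrinsic transform, lidar frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    """
    # Transform lidar points into the camera frame (homogeneous coordinates).
    ones = np.ones((points_lidar.shape[0], 1))
    pts_cam = (T_cam_lidar @ np.hstack([points_lidar, ones]).T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.05]

    # Pinhole projection into pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Keep points that land inside the image bounds.
    h, w = rgb_image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = rgb_image[v[valid], u[valid]]        # (M, 3) RGB per surviving point
    return np.hstack([pts_cam[valid], colors])    # (M, 6) x, y, z, r, g, b
```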

Meanwhile, the robot's end-effector is equipped with a dexterous hand featuring multiple force-tactile point arrays. Tactile feedback compensates for visual blind spots, making operations gentler and more precise. Combined with the self-developed VTLA-3D manipulation model and the 3D color point cloud generated by the Active Camera, the robot can perceive its environment more comprehensively, significantly improving the success rate of high-dexterity manipulations.
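The VTLA-3D interface is not public, so the following is only a minimal sketch of how tactile feedback might compensate for visual blind spots during a grasp; the hand and tactile objects, thresholds, and method names are all hypothetical.

```python
import numpy as np

# Hypothetical thresholds; a real system would calibrate these per fingertip.
CONTACT_THRESHOLD_N = 0.3   # force that counts as "touching" the object
MAX_GRIP_FORCE_N = 4.0      # never squeeze harder than this

def close_until_contact(hand, tactile, step_rad=0.01):
    """Close the fingers in small increments until every tactile array
    reports contact, stopping before the force limit is exceeded.

    hand    : hypothetical hand driver, e.g. hand.close_by(step_rad)
    tactile : hypothetical sensor driver, e.g. tactile.read() -> forces array
    """
    while True:
        forces = np.asarray(tactile.read())      # per-finger forces in newtons
        if np.all(forces > CONTACT_THRESHOLD_N):
            return True                          # stable contact on all fingers
        if np.any(forces > MAX_GRIP_FORCE_N):
            hand.open_by(step_rad)               # back off: the visual pose estimate was off
            return False
        hand.close_by(step_rad)                  # keep closing where contact is missing
```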

The “Dual-Speed” System for Robotic Commercialization

Amid the complex on-site environment of CES, the RoboSense delivery robot continuously performed multiple operational steps, demonstrating strong robustness to interference. It is reported that, on top of the VTLA-3D manipulation model, the company also trained a task-planning AI capable of decomposing complex, abstract tasks into atomic sub-tasks and scheduling their execution. This forms a "dual-speed system" that balances long-term planning with precise operation.
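RoboSense has not described how this scheduling works in detail, but the split between a slow task planner and a fast manipulation controller can be illustrated with a minimal loop; the sub-task names and the controller interface below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    """An atomic step the manipulation controller knows how to execute."""
    name: str
    params: dict

def plan(task_description: str) -> list[SubTask]:
    """Slow loop: a task-planning model decomposes an abstract goal into
    atomic sub-tasks. Hard-coded here purely for illustration."""
    if task_description == "deliver gift":
        return [
            SubTask("fetch_box", {}),
            SubTask("pack_item", {"item": "gift"}),
            SubTask("navigate", {"target": "drop_off_point"}),
            SubTask("unpack_item", {}),
            SubTask("fold_box", {}),
        ]
    raise ValueError(f"no plan for task: {task_description}")

def run(task_description: str, controller) -> None:
    """Fast loop: execute each sub-task with the low-level manipulation
    controller, handing control back to the planner if a step fails."""
    for sub_task in plan(task_description):
        ok = controller.execute(sub_task)   # hypothetical controller interface
        if not ok:
            # A failed step is reported back to the planner rather than
            # being retried blindly at the control level.
            print(f"sub-task {sub_task.name} failed; requesting a new plan")
            return
```

In such a split, the planner only needs to run when a new goal arrives or a sub-task fails, while the controller runs at a much higher rate against live sensor feedback.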

As a company whose core business is lidar, RoboSense has driven a significant reduction in lidar costs over the past decade. This demonstration of an integrated embodied-intelligence solution also reflects a broadening of its positioning: the company aims to supply new components and solutions for the robotics industry.

According to company representatives, RoboSense will maintain its positioning as a supply-chain enterprise, with the goal of creating more added value within the robotics field. It plans to gradually launch key components including vision systems, dexterous hands, and joints, with "eyes" and "hands" as the priorities for near-term commercialization.

According to the latest figures, RoboSense's sales volume in robotics and related fields increased 393.1% year-over-year, its share of total revenue continues to rise, and overseas revenue has also grown significantly. As the embodied intelligence industry accelerates, the robotics business is expected to become the company's main revenue source.
