Robot with advanced visual perception

The NexMOV-2 AGV

NexAIoT in Taiwan has developed an autonomous mobile robot platform using an AI accelerator from Kneron, writes Nick Flaherty.

The NexMOV-2 uses the Kneron KL730 neural processing unit (NPU) for simultaneous localisation and mapping (SLAM) and obstacle avoidance, replacing a 2D Lidar sensor with a camera fed through the chip’s integrated image signal processor (ISP).

The chip is automotive-qualified for reliability and combines four ARM Cortex-A55 processor cores with the neural processing unit, a vision digital signal processor (DSP) based on a Tensilica core from Cadence Design Systems, and the ISP.

Together, the A55 cores and integrated accelerators deliver up to 4 tera operations per second (TOPS), with four times the power efficiency of earlier Kneron AI chips.

“Running AI requires AI-dedicated chips with an architecture that is completely different from anything we’ve seen before. A simple re-appropriation of adjacent technologies, such as graphics-dedicated GPU chips, simply isn’t going to do the job,” says Albert Liu, founder and CEO of Kneron.

The SLAM software comes from Kudan Global and runs on ROS2, the second-generation Robot Operating System. The Kneron chip can also run the same transformer AI models used in ChatGPT, which are improving the detection of objects in images.
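To illustrate how a camera-only localisation pipeline of this kind is typically wired together on ROS2, the sketch below shows a minimal rclpy node that subscribes to camera frames and publishes estimated poses. It is purely illustrative: the Kudan Visual SLAM SDK is proprietary, so the VisualSlamStub class and the topic names here are assumptions, not the actual NexMOV-2 software.

# Minimal ROS2 (rclpy) sketch of a camera-driven localisation node.
# Illustrative only: VisualSlamStub and the topic names are assumptions,
# standing in for a real visual-SLAM tracker such as Kudan's.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped


class VisualSlamStub:
    """Placeholder tracker; a real SLAM engine would estimate the pose."""

    def track(self, image_msg: Image) -> PoseStamped:
        pose = PoseStamped()
        pose.header = image_msg.header
        pose.header.frame_id = 'map'
        return pose


class CameraLocalisationNode(Node):
    def __init__(self):
        super().__init__('camera_localisation')
        self.slam = VisualSlamStub()
        # Camera frames in, estimated poses out.
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(PoseStamped, '/slam/pose', 10)

    def on_image(self, msg: Image):
        self.pub.publish(self.slam.track(msg))


def main():
    rclpy.init()
    rclpy.spin(CameraLocalisationNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()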

“Our collaboration with NexAIoT on the NexMOV-2 demonstrates the potential of integrating Kudan’s Visual SLAM technology to address complex localisation challenges across diverse operational environments.

“By relying solely on Visual SLAM for positioning and AI-powered 3D vision for navigation, the NexMOV-2 proves how advanced visual perception can enhance the performance of autonomous mobile robots while streamlining hardware and reducing costs,” said Tian Hao, chief operating officer at Kudan.
