
Renesas MCUs get MicroAI models

In a bid to bring machine learning to MCUs, Renesas is integrating MicroAI’s AtomML algorithm into its RA family of MCUs.

After NXP and STMicroelectronics built artificial intelligence (AI) toolchains around their low-compute microcontrollers, Renesas has turned to a third-party supplier to enable training of machine learning (ML) models directly in the embedded environment.

NXP’s eIQ and STMicro’s STM32Cube.AI enable the development of neural networks (NNs) for edge inference on those companies’ respective MCU offerings. Now the Japanese chipmaker is integrating the MicroAI AtomML algorithm into its RA family of MCUs. MicroAI delivers personalized AI to edge devices with edge-native technology that embeds machine learning directly into the MCUs and MPUs inside connected endpoints.


Figure 1: MicroAI Atom enables the training of machine learning models directly on the MCU. Source: Renesas

MicroAI has also partnered with Silicon Labs to embed its edge-native technology into the Austin, Texas-based chipmaker’s embedded offerings. Founded as One Tech in 2018 by Tokyo-based Systena Corp. and Dallas-based Plasma Group, the company initially focused on the industrial IoT market. In 2021, it introduced the MicroAI AtomML technology and renamed itself MicroAI to reflect its target market.

The MicroAI Atom software development kit (SDK), available on the company’s developer portal, lets design engineers deploy the machine learning platform directly on MCU hardware. That, in turn, accelerates model creation and adoption while lowering the overall cost of deploying AI-driven solutions; existing approaches typically require an expensive hardware overhaul.
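
The article does not spell out AtomML’s internals, so the snippet below is only a generic sketch of what “training a model right on the MCU” can look like in practice: a streaming mean/variance model (Welford’s algorithm) that learns a sensor’s normal behavior on-device and flags outliers without a cloud round-trip. All names here (StreamingAnomalyModel, read_vibration_sample) are illustrative and are not part of the MicroAI Atom SDK.

// Generic sketch of incremental, on-device learning for one sensor channel:
// a streaming mean/variance model trained on the MCU itself and then used
// to flag anomalous readings. This illustrates the "train where the data
// lives" idea; it is NOT the AtomML algorithm.
#include <cmath>
#include <cstdint>

class StreamingAnomalyModel {
 public:
  // Fold one new sensor sample into the running statistics (training step).
  void Train(float x) {
    ++count_;
    const float delta = x - mean_;
    mean_ += delta / static_cast<float>(count_);
    m2_ += delta * (x - mean_);   // running sum of squared deviations
  }

  // Score a sample: how many standard deviations it sits from the learned mean.
  float Score(float x) const {
    if (count_ < 2) return 0.0f;
    const float variance = m2_ / static_cast<float>(count_ - 1);
    const float stddev = std::sqrt(variance);
    return stddev > 0.0f ? std::fabs(x - mean_) / stddev : 0.0f;
  }

  bool IsAnomaly(float x, float threshold = 4.0f) const {
    return Score(x) > threshold;
  }

 private:
  uint32_t count_ = 0;
  float mean_ = 0.0f;
  float m2_ = 0.0f;
};

// Typical use inside a sampling loop (read_vibration_sample() is a stand-in
// for whatever sensor driver the board provides):
//   StreamingAnomalyModel model;
//   for (;;) {
//     float sample = read_vibration_sample();
//     if (model.IsAnomaly(sample)) { /* raise a local alert, no cloud hop */ }
//     model.Train(sample);
//   }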


Figure 2: MicroAI Atom collects and processes data locally on the MCU and provides users with data straight from the device in real time. Source: MicroAI

Another MCU supplier with embedded AI offerings, NXP, has teamed up with Canada-based Au-Zone Technologies to augment its eIQ machine learning platform. Au-Zone’s DeepView ML Tool Suite will enable design engineers to import datasets and models and rapidly train and deploy neural network models and ML workloads across NXP’s MCUs and MPUs. Au-Zone’s DeepView run-time inference engine complements the open-source inference technologies within NXP’s eIQ platform, which provides silicon-optimized inference engines for edge processing.
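
Toolchains like eIQ wrap open-source inference engines such as TensorFlow Lite for Microcontrollers. As a rough illustration of what edge inference on an MCU means at the code level, here is a minimal, generic TFLM-style inference routine; the model array, operator list, and arena size are placeholders, and none of this is taken from eIQ or DeepView tooling.

// Minimal, generic TensorFlow Lite for Microcontrollers inference sketch.
// g_model_data (a flatbuffer exported from a trained model) and the arena
// size are placeholders; a real project would use the toolchain-generated
// model array and a tuned arena.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // placeholder: converted .tflite model

namespace {
constexpr int kTensorArenaSize = 8 * 1024;   // placeholder: depends on the model
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

float ClassifySample(const float* features, int num_features) {
  // Map the flatbuffer into a model object.
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses, to save flash.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  // Build the interpreter on a statically allocated tensor arena.
  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  interpreter.AllocateTensors();

  // Copy the sensor features into the input tensor and run inference.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_features; ++i) {
    input->data.f[i] = features[i];
  }
  interpreter.Invoke();

  // Return the score of the first output class.
  return interpreter.output(0)->data.f[0];
}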

While MCU suppliers race to ready their offerings for the AI party, third-party suppliers like Au-Zone and MicroAI will remain crucial in reducing time to market and building a broad, viable ecosystem. Such partnerships could also be the harbinger of a new era of collaboration in AI-centric embedded systems.

>> This article was originally published on our sister site, EDN.

