
Development platform enables AI training on Arm Cortex-M-based microcontrollers

Cartesiam has launched NanoEdge AI Studio, an integrated development environment that developers can use to create AI training and inference applications on microcontrollers, without any data science knowledge, in a matter of hours. The technology is compatible with any Arm Cortex-M-based microcontroller, from the Cortex-M0 to the Cortex-M7 and including the Cortex-M55, and does not require an extensive data set for training.


Learning can now be achieved inside the microcontroller. The same microcontroller with the same library applied to a predictive maintenance application can learn the vibration characteristics of individual machines (Image: Shutterstock)

Training an AI algorithm on such resource-constrained devices represents a major first. Typical applications, such as predictive maintenance, use unsupervised learning directly on the microcontroller to teach the algorithm about the normal environment inside a particular machine. A model can then be created and used for inference (prediction).

“What is really revolutionary is we are able to learn inside the microcontroller,” Cartesiam co-founder Joël Rubino told EE Times Europe. “The rest of the world is doing inference inside the microcontroller and doing the learning in the cloud. They need to capture the data, create the model with data scientists in the cloud, and then compile everything and then it goes on the microcontroller. What we do is different. We create a library able to learn directly inside the microcontroller, inside any machine. We create the model at the edge. We train at the edge. This changes the game, because today nobody is able to do that in the market.”

AI at the Edge

Rubino said the industry has for some time been moving from centralised to decentralised computing, with a trend towards edge intelligence.

“We started with the idea that all the objects at the edge are going to generate tons of data. And there is no way that the cloud will be able to analyse and compute that data. Instead of sending all the data to the cloud, the idea was why not send intelligence to the edge,” he said.

Local intelligence reduces the need for communications bandwidth and lowers the risk of tampering.

Microcontrollers, the most widely available endpoint computing platform today, are perfectly placed to bring intelligence to endpoint devices. However, their limited compute power and memory have so far been insurmountable challenges.

Cartesiam, based in Toulon, France, hired a team of PhD-educated mathematicians, data scientists and machine learning experts.

“We rewrote all the machine learning algorithms from scratch so they could fit inside a microcontroller,” Rubino said. “The other problem with today’s process is you need to capture data for the phenomenon you want to observe, but data scientists are a scarce resource… if you want to bring intelligence to the edge it has got to be a lot simpler and quicker and more affordable than it is today.”

Development Flow

NanoEdge AI Studio has been 3 years in the making.

“It’s really about bringing AI to all embedded designers. Using NanoEdge AI Studio, they will be able to develop AI inside their objects and do it very fast,” Rubino said.

A typical development flow in NanoEdge AI Studio is accessible to embedded developers without AI expertise. The developer defines what type of sensor is being used (current, accelerometer, etc.), which microcontroller is targeted (Cortex-M0 to Cortex-M7) and how much RAM is available. A small sample of typical data is then loaded, and the IDE uses it to evaluate and optimise candidate algorithms; between the signal-processing library, the machine-learning library and the hyperparameters, there are 500 million possible combinations. The selected algorithm is delivered as a C library of 4 kB to 32 kB. It can be tested on the developer’s workstation using a simplified emulator, then downloaded onto the target microcontroller. Applications can be up and running in a matter of days, Rubino said.
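The article does not describe the generated library’s programming interface. As a rough illustration of what a few-kilobyte C library with on-device learning and inference entry points might look like, the hypothetical header below sketches one plausible shape; the names anomaly_init, anomaly_learn and anomaly_detect, the similarity score and the header name are assumptions made here for illustration, not Cartesiam’s published API.

/* anomaly.h -- hypothetical interface for a generated on-device
 * anomaly-detection library. Names and signatures are illustrative
 * assumptions, not Cartesiam's actual API. */
#ifndef ANOMALY_H
#define ANOMALY_H

#include <stddef.h>
#include <stdint.h>

void    anomaly_init(void);                               /* reset the internal model                 */
int     anomaly_learn(const float *signal, size_t len);   /* update the model with one signal buffer  */
uint8_t anomaly_detect(const float *signal, size_t len);  /* similarity (0-100) to the learned normal */

#endif /* ANOMALY_H */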


Developers load a small sample of typical data into the IDE, and a library is created that can be embedded into microcontrollers. These microcontrollers can then learn in the field about the characteristics of the individual machines they are monitoring (Image: Cartesiam)

From there, the microcontroller can be installed in the field, where it uses unsupervised learning to train the algorithm (in unsupervised learning, the algorithm is given unlabelled data and must find structure in it by itself). A model is then created that can be used for inference going forward.
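As a minimal sketch of how such a library might be driven on the target, the fragment below uses the hypothetical interface from the earlier sketch: a fixed number of unsupervised learning passes on signal buffers captured in place, followed by a periodic inference loop. The buffer length, learning-iteration count, alert threshold and the read_sensor_buffer and report_anomaly helpers are placeholders rather than values from the article, and stub implementations of the externs would be needed to actually build it.

#include <stddef.h>
#include <stdint.h>
#include "anomaly.h"            /* hypothetical interface sketched above */

#define BUFFER_LEN       256    /* samples per signal buffer (placeholder)          */
#define LEARN_ITERATIONS 100    /* learning passes before switching to detection    */
#define ALERT_THRESHOLD   90    /* similarity (%) below which an anomaly is flagged */

extern void read_sensor_buffer(float *buf, size_t len);   /* placeholder sensor driver     */
extern void report_anomaly(uint8_t similarity);           /* placeholder notification hook */

void monitoring_task(void)
{
    static float buf[BUFFER_LEN];

    anomaly_init();

    /* Learning phase: the device learns its own machine's "normal"
     * signature from unlabelled buffers captured in place. */
    for (int i = 0; i < LEARN_ITERATIONS; i++) {
        read_sensor_buffer(buf, BUFFER_LEN);
        anomaly_learn(buf, BUFFER_LEN);
    }

    /* Inference phase: score each new buffer against the learned model. */
    for (;;) {
        read_sensor_buffer(buf, BUFFER_LEN);
        uint8_t similarity = anomaly_detect(buf, BUFFER_LEN);
        if (similarity < ALERT_THRESHOLD)
            report_anomaly(similarity);
    }
}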

“You can put the same library on the same microcontroller on two different machines, and the beauty here is that using the vibration data from one machine, it will learn and create the machine learning model inside the microcontroller. The same library on a different machine will learn from the vibrations of this machine, and create its own machine learning model,” Rubino said.

For example, one of Cartesiam’s customers using the beta version of NanoEdge AI Studio makes air-conditioning units. The company monitors the current drawn by the fan motor to detect when the air filter is clogged and needs changing. In markets where its customers fit third-party compatible filters rather than the manufacturer’s original filters, the model can relearn the characteristics of each new filter after a change and predict when that filter, in turn, will need replacing.

Funding Rounds

The company was founded in 2016 with the idea of bringing AI to endpoint and IoT devices such as sensor nodes. Seed funding of €500k paid for a proof of concept, and a €2 million round two years ago accelerated development of the algorithms.


Éolane’s Bob assistant is a matchbox-sized device that can learn the characteristic vibrations of the machines it is fixed to (Image: Cartesiam)

The first product to reach the market with Cartesiam technology is Éolane’s Bob assistant, which has been available for around two years. Bob is a matchbox-sized hardware device that can be affixed to machinery with magnets. It spends seven days learning the characteristic vibrations of each machine (performing a brief learning phase every few minutes), then creates a machine learning model that is used for prediction at specified intervals. If an anomaly is detected, it connects to a LoRa network to raise a notification.

Rubino said that Bob has been widely deployed in predictive maintenance applications at European companies including Renault, French train operator SNCF, French utility EDF, Airbus, Thales and many more.

The success of Bob led Cartesiam to realise it could not keep developing individual libraries for each customer, so it focused its efforts on a tool that customers could use to build their own libraries for their own applications. The result, NanoEdge AI Studio, is available now.

>> This article was originally published on our sister site, EE Times Europe.

 
