Cloud providers cite role as AI inference moves to edge - Embedded.com


Performing AI inference on embedded edge devices is attractive in situations where low latency is required, or where the volumes of data collected at the edge would cost too much to send into the cloud. Advances in dedicated machine learning accelerators, state-of-the-art microcontrollers and AI models and software mean more AI inference can be done at edge power budgets than ever before. However, even when all the inference is done at the edge, the cloud is still a useful tool for managing devices out in the field, say cloud providers.

AWS IoT

At Embedded World Digital 2021, cloud providers AWS and Microsoft presented their ecosystem solutions for edge AI devices.

“Running TensorFlow Lite or machine learning models on the devices themselves is one capability,” said Rajeev Muralidhar, principal specialist solutions architect at AWS. “But the ability to build an entire pipeline for device lifecycle in a secure manner, managing them at scale, the ability to roll out firmware upgrades on the devices so that you can manage the versions running on them, provide more security and more features, and also the ability to update the ML models running there – this is another important capability.”

AWS provides infrastructure for this via its AWS IoT platform. The platform, said Muralidhar, comprises three key parts: device-side software (FreeRTOS or AWS Greengrass), control and connectivity components (AWS IoT Core), and analytics services.


AWS offers support for edge AI devices through its IoT platform (Image: AWS)

FreeRTOS is a widely used open-source operating system for microcontroller-level devices. It comes with long-term support (LTS), which guarantees security updates, feature upgrades and bug fixes for two years. It also supports secure over-the-air (OTA) updates, so firmware can be rolled out at scale to devices already deployed in the field. The FreeRTOS kernel can talk directly to a device gateway that may be running AWS Greengrass.
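The FreeRTOS OTA machinery itself is beyond the scope of this article, but the core pattern behind any secure update – verify an image's signature before applying it – can be sketched in a few lines. The Python sketch below is purely illustrative: the key, image bytes and function names are invented for the example, and real deployments (including the FreeRTOS OTA library) use asymmetric code signing rather than a shared HMAC key.

```python
import hashlib
import hmac

SIGNING_KEY = b"fleet-signing-key"  # hypothetical shared secret, for illustration only

def sign_image(image: bytes, key: bytes = SIGNING_KEY) -> bytes:
    """Build an HMAC-SHA256 signature for a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes, key: bytes = SIGNING_KEY) -> bool:
    """Accept the image only if its signature verifies."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject a tampered or corrupted image
    # A real device would now write the image to its inactive
    # firmware slot and mark it as the boot candidate.
    return True

firmware = b"new-firmware-build"
good_sig = sign_image(firmware)
assert apply_update(firmware, good_sig)
assert not apply_update(firmware + b"x", good_sig)
```

The timing-safe `hmac.compare_digest` comparison matters here: a naive byte-by-byte equality check can leak how many leading signature bytes matched.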

AWS IoT Core is the entry point for data into the cloud. It includes a message broker, plus a rules engine that can act on incoming data: storing it, routing it into a database or a dashboard, or passing it to a service such as SageMaker for machine learning analysis.
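For a flavor of what such a rule looks like, AWS IoT rules are written in an SQL-like syntax that selects fields from messages on an MQTT topic and filters them before an action (store, republish, forward to another service) fires. The topic filter, field names and threshold below are invented for illustration:

```sql
-- Forward only over-temperature readings from factory devices
-- (topic filter and threshold are hypothetical)
SELECT deviceId, temperature
FROM 'factory/+/telemetry'
WHERE temperature > 70
```

The actions attached to the rule – writing to a database, triggering an alert and so on – are configured separately from the SQL statement.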

The AWS IoT platform also includes components for large-scale fleet management, device management and IoT analytics, as well as an event-management capability that can automate detection of, and response to, events coming from your IoT devices.

“End-to-end device life cycle management is crucial when you are thinking about devices at scale, and the ability to update those devices’ underlying operating system is also crucial,” Muralidhar said. “You want to be able to do it securely, you want to be able to rotate security credentials so that you don’t compromise on security of your fleet of devices.”
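Rotation policies differ between fleets, but the basic decision Muralidhar describes – renew a credential well before it expires, rather than when it expires – is simple to express. A minimal sketch, in which the 30-day window is an arbitrary assumption:

```python
from datetime import datetime, timedelta, timezone

# The 30-day renewal window is an arbitrary assumption for the example.
ROTATION_WINDOW = timedelta(days=30)

def should_rotate(expires_at: datetime, now: datetime) -> bool:
    """Renew a device credential once it enters the rotation window,
    well before it actually expires."""
    return expires_at - now <= ROTATION_WINDOW

now = datetime(2021, 3, 1, tzinfo=timezone.utc)
assert should_rotate(datetime(2021, 3, 20, tzinfo=timezone.utc), now)       # 19 days left
assert not should_rotate(datetime(2021, 6, 1, tzinfo=timezone.utc), now)    # 92 days left
```

Rotating early keeps a margin for devices that are offline when the rotation campaign runs, so none are left with an expired credential.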

The ideal situation, said Muralidhar, is to send incoming data to the cloud so that models can be continuously evaluated and retrained.

“[Then] when you have newer models updated that are more accurate, you can pull them down into the device, and roll that out to your entire fleet of devices on the industrial shop floor, or your connected vehicles fleet,” he said. “That way, the devices that are running in your vehicles are more capable, and can react faster and more accurately.”
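Cloud-side rollout logic of this kind ultimately reduces to comparing the model version each device reports against the latest trained version, and targeting the devices that lag behind. A toy sketch of that comparison – the device names and version scheme are invented, and this is not an AWS API:

```python
def parse_version(v: str) -> tuple:
    """'1.2.0' -> (1, 2, 0), so versions compare numerically per component."""
    return tuple(int(part) for part in v.split("."))

def devices_to_update(fleet: dict, latest: str) -> list:
    """Return IDs of devices running a model older than `latest`."""
    target = parse_version(latest)
    return [dev for dev, ver in fleet.items() if parse_version(ver) < target]

# Hypothetical connected-vehicle fleet reporting its deployed model versions.
fleet = {"truck-01": "1.2.0", "truck-02": "1.3.1", "truck-03": "1.1.9"}
print(devices_to_update(fleet, "1.3.1"))  # → ['truck-01', 'truck-03']
```

Comparing tuples rather than raw strings avoids the classic pitfall where `"1.10.0" < "1.9.0"` lexicographically.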

Azure Percept

At the show, Microsoft also gave presentations on its brand-new Azure Percept platform. Azure Percept is a combined hardware and software platform for edge AI that draws on several of Azure's cloud offerings, including device management, AI model development and analytics: Azure cloud tools are used to manage devices, and to access open-source AI models or create new ones.


Microsoft’s Azure Percept platform is both hardware and software. Hardware includes a Trusted Platform Module (center), Azure Percept Audio module (left) and Azure Percept Vision module (right) (Image: Microsoft)

The company also launched a hardware development kit with two modules. The Azure Percept Vision module for computer vision at the edge is based on Intel’s Movidius Myriad X AI accelerator. There is also an Azure Percept Audio module, but no details of that module were available.

With this new offering, Microsoft wants to provide an end-to-end solution that lowers barriers to entry for non-specialists. The idea is to simplify developing, training and deploying edge AI.

Azure Percept also connects to Azure IoT Hub, designed to facilitate secure communication between IoT devices and the Azure cloud.

In the future, Microsoft is planning to expand the number of Azure Percept devices available from third parties. Developers using the current hardware development kit will then be able to deploy their solution on Percept-certified devices available in the market.

>> This article was originally published on our sister site, EE Times Europe.

