MCU vendors lag in publishing IoT benchmark data

In the course of writing an article on 10 cool IoT processors for our sister publication Electronic Products, I realized a few things about the state of IoT silicon.

The good news is that a wealth of chips is available for IoT gateways and end nodes. A growing set of OEMs, system integrators, and even large end users will become customers for them as the IoT gets established over the next decade.

The bad news is that finding the one that’s right for you is like sorting through a sandbox to find a grain that’s a specific shade of brown. Chip vendors could help by increasing their generally poor level of support for existing and emerging benchmarks.

The most basic benchmark for IoT is performance/watt, but very little data is available on it. The EEMBC trade group has created a handful of IoT chip benchmarks to date, but many top vendors have yet to publish data using them.
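To make the metric concrete, here is a minimal sketch of how an engineer might rank candidate MCUs by performance per watt. The part names, CoreMark scores, and power figures below are invented for illustration, not real vendor data; note that since a watt is a joule per second, CoreMark iterations per second divided by watts is the same thing as CoreMarks per joule.

```python
# Hypothetical illustration: ranking MCUs by performance per watt.
# All scores and power figures are made up for the example.

mcus = {
    "mcu_a": {"coremark": 240.0, "power_w": 0.045},  # iterations/sec, active power
    "mcu_b": {"coremark": 180.0, "power_w": 0.020},
    "mcu_c": {"coremark": 400.0, "power_w": 0.110},
}

def perf_per_watt(entry):
    # (iterations/sec) / (joules/sec) = iterations/joule,
    # i.e. CoreMarks per joule.
    return entry["coremark"] / entry["power_w"]

ranked = sorted(mcus.items(), key=lambda kv: perf_per_watt(kv[1]), reverse=True)
for name, entry in ranked:
    print(f"{name}: {perf_per_watt(entry):.0f} CoreMarks/joule")
```

The point of the exercise: the raw-performance leader ("mcu_c" here) is not the efficiency leader, which is exactly why end-node designers need published energy benchmark data rather than clock speeds and peak scores.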

For example, EEMBC released its ultra-low-power ULPMark in 2014 to measure the energy efficiency of MCUs. Kudos to the several vendors who have posted scores to date, but shame on the many who have not, including some of the giants in small processors such as Cypress, Dialog, Infineon, Renesas, Samsung, and Toshiba. Get with it, people!

I am told that some marketing departments don’t want the quality of their chips reduced to a number that may not tell their whole story. Balderdash! EEMBC already supports a variety of metrics, including Core and Peripheral profiles for ULPMark, a Bluetooth version of the new IoTMark, and even a SecureMark.

More are on the way. EEMBC hopes to release a standard measure of CoreMarks/joule at Embedded World this year. The group also plans versions of IoTMark for other wireless networks, with Wi-Fi probably the next one.

In addition, the group hopes to release a machine-learning benchmark in the next several weeks, expected to focus initially on computer-vision jobs using convolutional neural networks.

Given the heat around AI these days, chip vendors should be racing to be among the first to publish data using this emerging inference benchmark. The same goes for another embedded inference benchmark in the works at the MLPerf group.

I wrote a story for Electronic Products last year about embedded accelerators for machine learning. I came away from working on that story and a related article for EE Times convinced that basically everyone who sells embedded processors is working on an AI accelerator chip, block, or program.

They should all be putting engineers on getting out benchmarks for their work ASAP. Especially in this new style of computing, engineers need to know what’s working poorly, better, and best.
