AUSTIN, Texas — The EEMBC trade group has started an effort to define a machine-learning benchmark for running inference jobs on devices at the edge of the network. The effort spun out of a separate benchmark that the group plans to release in June for chips used in advanced driver assistance systems (ADAS).
The work marks at least the third major initiative in six months to measure performance of neural-network jobs. It may be the first to focus on chips for power-constrained embedded systems.
Last month, Baidu and Facebook announced work with a handful of chipmakers on MLPerf, initially focused on training jobs in data centers. The Transaction Processing Performance Council formed an effort in December that will also likely focus on training.
EEMBC’s AI work group is centered on chips for smart speakers, nodes, and gateways for the Internet of Things and other embedded systems. It has met three times to date and aims to release a benchmark before June of next year. The separate ADAS benchmark is already in beta testing with multiple users.
“As we were building the ADAS benchmark, we found way more interest in the neural network at the end of it from engineers being pushed to learn this very complex space,” said Peter Torelli, who recently became president of EEMBC.
So far, the work group has about a dozen members from embedded processor vendors such as Arm, Analog Devices, Intel, Nvidia, NXP, Samsung, STMicroelectronics, and Texas Instruments. It aims to embrace a variety of neural-net types and use cases.
“We’re looking for more input, especially from integrators and OEMs making component choices, to make sure it’s something they can use,” said Torelli. “We also need to get a handle on what network architectures are important and which ones will be portable to the edge.”
The benchmark aims to measure raw inference performance as well as the time to spin up a neural-net model. The group also hopes to define a standard way to measure the power efficiency of these tasks.