
ST shows embedded AI efforts

February 28, 2018

By Junko Yoshida

BARCELONA — As expected, AI is the crowd magnet at this year’s Mobile World Congress. As Jem Davies, vice president, fellow and general manager of the machine learning group at Arm, quipped during an interview with EE Times, “Machine learning is a bit like fleas. Everyone has got one.”

Companies that tipped their machine-learning plans ahead of the show include Arm pushing Project Trillium, MediaTek with its P60, Ceva with PentaG, and startup GreenWaves with its GAP8.

STMicroelectronics, meanwhile, broke its silence at its press conference on Tuesday (Feb. 27), discussing how it sees machine learning as a key to “distributed intelligence” in the embedded world. ST envisions a day when a network of tiny MCUs becomes smart enough to detect wear and tear in machines on the factory floor, or to spot anomalies in a building, without having to ship sensor data back to data centers.

At its booth, ST demonstrated three tangible AI solutions: a neural network converter and code generator called STM32 CubeMX.AI; ST’s own deep-learning SoC (codenamed Orlando V1); and a neural network hardware accelerator (currently under development on an FPGA) that could eventually be integrated into the STM32 microcontroller.
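The converter piece is conceptually a code generator: a network is trained offline on a PC, and the tool turns the frozen model into plain C that fits in an MCU's flash and RAM. As a rough, purely illustrative sketch (the names below are placeholders, not output from STM32 CubeMX.AI), the generated artifact typically boils down to constant weight tables plus a fixed-footprint inference entry point:

```c
/* Hypothetical illustration only: code generators of this kind typically emit
 * the trained weights as const tables placed in flash plus a small, fixed-size
 * inference entry point. None of these names are taken from STM32 CubeMX.AI. */
#include <stdint.h>

#define NN_INPUT_SIZE   64   /* e.g., one window of vibration samples   */
#define NN_OUTPUT_SIZE   3   /* e.g., normal / worn bearing / imbalance */

/* Weights frozen at code-generation time and stored in flash. */
extern const int8_t  nn_weights[];
extern const int32_t nn_biases[];

/* Runs one forward pass; the caller owns both buffers. */
void nn_run(const int8_t input[NN_INPUT_SIZE],
            int8_t output[NN_OUTPUT_SIZE]);
```

The training happens offline; only a compact, deterministic inference routine ends up on the microcontroller.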

ST's president and CEO Carlo Bozotti (Photo: EE Times)

Asked if ST’s embedded AI solutions had been developed in partnership with Arm’s Project Trillium, ST’s president and CEO Carlo Bozotti replied emphatically, “No. These are internally developed by ST.”

Unlike the many smartphone chip vendors developing AI accelerators designed to work with a CPU and a GPU inside a handset, ST is focusing on machine-learning solutions for embedded processors deployed in connected mesh networks. Gerard Cronin, ST’s group vice president, told EE Times that ST already has neural network code that runs in software on any STM32 today. The drawback, he explained, is that it runs too slowly for sophisticated, processing-intensive applications.
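To see where the cycles go, consider a minimal sketch (not ST's code) of the kind of plain-C layer a software-only network relies on; the inner multiply-accumulate loop is exactly what a dedicated accelerator is meant to take over:

```c
/* Minimal sketch (not ST's code): a plain-C fully connected layer of the kind
 * that runs on any Cortex-M core. Every output neuron costs n_in
 * multiply-accumulates, so a stack of even modest layers quickly adds up to
 * millions of cycles per inference without hardware acceleration. */
#include <stdint.h>
#include <stddef.h>

void fc_layer_q7(const int8_t *input,  size_t n_in,
                 const int8_t *weights,            /* n_out x n_in, row-major */
                 const int32_t *bias,
                 int8_t *output, size_t n_out,
                 int shift)                        /* fixed-point rescale      */
{
    for (size_t o = 0; o < n_out; o++) {
        int32_t acc = bias[o];
        const int8_t *w = &weights[o * n_in];
        for (size_t i = 0; i < n_in; i++) {
            acc += (int32_t)input[i] * (int32_t)w[i];  /* MAC dominates runtime */
        }
        acc >>= shift;                                 /* requantize to 8 bits  */
        if (acc > 127)  acc = 127;                     /* saturate              */
        if (acc < -128) acc = -128;
        output[o] = (int8_t)acc;
    }
}
```

On a microcontroller running at tens or low hundreds of megahertz, chaining several such layers per sensor reading is where “too slowly” comes from.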

For machine-learning acceleration, ST is designing AI-specific hardware and software architectures. The company unveiled its first test chip, a deep convolutional neural network (DCNN) SoC containing eight reconfigurable DCNN accelerators and 16 DSPs. Manufactured in a 28-nm FD-SOI process, it is “ultra-energy efficient,” claimed Bozotti, who described it as a significant achievement for ST’s R&D team. “It’s a real SoC, running AlexNet at 0.5 TOPS,” Bozotti said.

ST's deep convolutional neural network (DCNN) SoC (Photo: EE Times)

ST has not decided whether the SoC will be launched as is, since the company is already working on follow-ons. But, delivering 2.9 TOPS per watt at 266 MHz, it could serve as a co-processor for ST’s MCUs.
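If the two figures describe the same operating point (something ST did not spell out), they imply a power draw of roughly 0.5 TOPS ÷ 2.9 TOPS/W ≈ 0.17 W while running AlexNet, which puts the chip in co-processor rather than data-center territory.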

ST’s ultimate AI scenario for the STM32, however, may be to integrate a neural network hardware accelerator inside the MCU itself. The FPGA-based demo showed that it takes only a fraction of the STM32’s CPU load to detect how many people are in a scene captured by an infrared camera.
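To make “a fraction of the CPU load” concrete, the pattern an integrated accelerator enables looks roughly like the sketch below. Everything here, the register layout, names, and base address, is invented for illustration; it is not ST's interface:

```c
/* Purely hypothetical sketch of the co-processor pattern the demo implies:
 * the STM32 core hands a frame to the neural-network accelerator, keeps doing
 * other work, and later reads back a small result (here, a people count).
 * The register interface and base address are invented for illustration. */
#include <stdint.h>

typedef struct {
    volatile uint32_t START;      /* write 1 to kick off an inference    */
    volatile uint32_t BUSY;       /* reads 1 while the accelerator runs  */
    volatile uint32_t RESULT;     /* people count for the last frame     */
    volatile uint32_t FRAME_ADDR; /* address of the infrared frame       */
} nn_accel_regs_t;

#define NN_ACCEL ((nn_accel_regs_t *)0x50000000UL)  /* made-up base address */

uint32_t count_people(const uint8_t *ir_frame)
{
    NN_ACCEL->FRAME_ADDR = (uint32_t)(uintptr_t)ir_frame;
    NN_ACCEL->START = 1;             /* the CPU's work ends here...            */

    while (NN_ACCEL->BUSY) {
        /* ...the core could sleep or service other tasks instead of polling   */
    }
    return NN_ACCEL->RESULT;         /* ...and resumes only to read the count  */
}
```

The CPU's only job is to point the accelerator at a frame and read back a count; the heavy convolution arithmetic never touches the core.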

ST's AI accelerator development (Photo: EE Times)
 

Continue reading on Embedded's sister site, EE Times: "ST projects embedded AI vision."

 
