MADISON, Wis. — NXP Semiconductors will unveil at Arm TechCon this week an AI strategy centered on software tools. NXP is debuting an AI software development environment for the edge, called eIQ, along with customizable system-level solutions.
Describing the current AI landscape as still in flux, Geoff Lees, senior vice president and general manager of microcontrollers at NXP, told us, “The first- and second-generation AI accelerators proved to be not scalable.” Although a host of AI SoC startups are developing new acceleration architectures, customers today want more scalable general-purpose processors to meet their AI needs, Lees said.
NXP’s resulting strategy is to avoid locking into any specific AI acceleration architecture. It prefers to be a chip supplier offering machine-learning (ML) solutions across a variety of MCU and application processor platforms.
GHz MCU in 2019?
The emerging trend is a growing realization that AI at the edge will “require even more processing” than anticipated, observed Lees. Customers are seeking processors that can boost AI performance, security, and network connectivity. Lees hinted, “Don’t be surprised that NXP is getting ready with a GHz MCU for a 2019 launch.”
Linley Gwennap, Linley Group president and principal analyst, differs from Lees. He acknowledged that AI is moving edgeward, but his trend radar is homing in on AI accelerators rather than more powerful MCUs. “Even MCUs can have a small AI engine to offload the CPU (e.g., Abee, Eta, Greenwaves, QuickLogic),” Gwennap said. “Running the MCU at 1 GHz doesn’t make sense from a power or cost standpoint.”
Offering a third point of view, Jim McGregor, founder of Tirias Research, agreed with Lees when asked about the emergence of GHz MCUs. McGregor said, “Oh, yeah. Even inference requires more processing, especially when you are trying to do it in real time. This is also blurring the lines between MCUs and MPU-class devices.”
ML as middleware
Setting aside the hardware debate, NXP’s announcement this week is focused on AI software. Lees explained that NXP is “treating machine learning as middleware.”
NXP eIQ Edge Intelligence environment (Source: NXP Semiconductors)
NXP’s eIQ includes tools necessary to structure and optimize cloud-trained ML models. The goal for NXP customers is to run those ML models in “resource-constrained edge devices for a broad range of industrial, IoT, and automotive applications,” explained NXP.
Describing eIQ as “a one-stop foundation for world-class machine-learning applications,” NXP noted that its AI software developments include:
- data acquisition and curation tools (e.g., vision, voice and audio front end, sensor);
- model conversion for a wide range of neural net (NN) frameworks and inference engines, such as TensorFlow Lite, Caffe2, CNTK, and Arm NN;
- support for emerging NN compilers like GLOW and XLA;
- classical ML algorithms (e.g., Support Vector Machine and random forest).
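NXP has not published eIQ’s APIs, so as an illustration only, the sketch below shows the kind of post-training quantization step that edge-conversion tools of this sort typically perform: mapping float32 weights from a cloud-trained model down to int8 so they fit a resource-constrained MCU. All function names here are hypothetical.

```python
# Illustrative sketch, not eIQ code: affine (scale + zero-point)
# quantization of float weights to int8, the common trick for shrinking
# a cloud-trained model to run on a memory-constrained edge device.

def quantize_int8(weights):
    """Affine-quantize a list of float weights to int8 values."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # 256 int8 levels, -128..127
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights, e.g., for an accuracy check."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

The accuracy cost is bounded by the quantization step (`scale`), which is the tradeoff edge-conversion tools let developers tune.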
Furthermore, eIQ includes tools to deploy models for heterogeneous processing — distributing the ML workload across computational blocks such as Cortex A/M cores, DSP, and GPU — on a range of NXP’s embedded processors.
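The article does not describe how eIQ decides which block runs which part of the workload, so the toy dispatcher below only illustrates the general idea of heterogeneous deployment: assigning each layer to the compute block best suited to its operation type, with a CPU fallback. The preference table and names are assumptions, not NXP documentation.

```python
# Toy sketch of heterogeneous workload assignment -- illustrative only.
# Assumed preference table; real tools would use profiling data.
PREFERRED_BLOCK = {
    "conv2d": "gpu",        # large parallel multiply-accumulates
    "fft": "dsp",           # signal-processing front end
    "dense": "cortex-a",    # general-purpose compute
    "control": "cortex-m",  # lightweight glue logic
}

def assign_blocks(layers, available):
    """Map each (name, op) layer to a preferred block, falling back to the CPU."""
    plan = {}
    for name, op in layers:
        block = PREFERRED_BLOCK.get(op, "cortex-a")
        plan[name] = block if block in available else "cortex-a"
    return plan

layers = [("features", "conv2d"), ("audio_in", "fft"), ("classifier", "dense")]
plan = assign_blocks(layers, available={"cortex-a", "dsp"})
# With no GPU present, the conv2d layer falls back to the Cortex-A core.
assert plan == {"features": "cortex-a", "audio_in": "dsp", "classifier": "cortex-a"}
```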
ML today is a “tough broad market” for system vendors to crack, observed Markus Levy, head of AI at NXP. A consumer electronics vendor that developed a smart speaker last year and now decides to add vision or other sensory data-based AI to a next-gen product is looking for a flexible, scalable AI processing solution, not a fixed AI accelerator.
Moreover, system vendors demand processing at the edge due to latency, security, and spectrum limitations.
In turn, system vendors face tradeoffs. “How much latency is acceptable in inferences? How much are they prepared to pay for memory requirements? What level of accuracy are they looking for?” said Levy.