Low-cost crossover processor supports endpoint inference - Embedded.com

BRISTOL, UK — XMOS has adapted its Xcore processor core for machine learning, creating a crossover processor for AIoT applications. The Xcore.ai will be available from $1.

Xcore.ai, the third generation of products built on the company’s proprietary core design, is designed for real-time AI inference and decision-making in endpoint devices, and can also handle signal processing, control and communications functions.

New to this third-generation chip is a vector pipeline for machine learning applications. It is the only crossover processor of its type to support binarized (1-bit) neural networks, which are growing in importance for ultra-low-power AI in endpoint applications because they offer an order-of-magnitude improvement in performance and memory density in exchange for a modest reduction in accuracy. (Xcore.ai also supports 32-bit, 16-bit and 8-bit data types.)
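The density gain of binarized networks comes from packing weights and activations into single bits, so a multiply-accumulate collapses into an XNOR and a population count. A minimal Python sketch of that idea (purely illustrative, not XMOS code):

```python
# Sketch: a binarized (1-bit) dot product via XNOR + popcount.
# Values are restricted to {-1, +1} and encoded as bits
# (1 -> +1, 0 -> -1). Illustrative only, not XMOS code.

def pack_bits(values):
    """Pack a list of +/-1 values into an integer bitmask."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binarized_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors packed as n-bit masks.

    XNOR marks positions where the signs agree; each agreement
    contributes +1 and each disagreement -1, so the result is
    2 * matches - n.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    return 2 * matches - n

# Cross-check against the ordinary dot product.
a = [1, -1, 1, 1, -1, -1, 1, -1]
b = [1, 1, -1, 1, -1, 1, 1, -1]
ref = sum(x * y for x, y in zip(a, b))
assert binarized_dot(pack_bits(a), pack_bits(b), len(a)) == ref
```

Eight 1-bit weights fit where a single 8-bit weight would otherwise go, which is where the memory-density claim comes from; hardware popcount instructions make the arithmetic correspondingly cheap.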

The Xcore.ai joins a new class of AI-capable systems-on-chip for endpoint applications, the crossover processor (Image: XMOS)

Xcore.ai joins an emerging class of endpoint processor with AI capability, the crossover processor. Coined by NXP, this term describes an important new category of devices with the performance of an application processor combined with the ease of use, low power consumption and real-time operation of a microcontroller.

“If you talk to customers about ‘microcontrollers,’ they think about Cortex-M0, M3 or M4 devices that come in at 75 cents or below, with relatively low performance, a hundred MIPS or so. Then ‘SoCs’ might be something with quad-A53 cores, that runs at a gigahertz. There’s a big gap for things in the middle, like processors for voice, which is a particularly difficult maths problem, requiring thousands of MIPS. So there’s this big gap with a really big and important application area sitting right in the middle of it, and it deserves a name,” said Mark Lippett, CEO of XMOS, in an exclusive interview with EETimes.

Voice Interfaces

The company’s previously announced second-generation product, the XVF3510, was launched in July 2019 as an ASIC for voice interfaces, but under the hood the silicon is also based on the company’s proprietary Xcore design, shipped with firmware. Reference designs for far-field voice solutions based on the XVF3510 are qualified for Amazon’s Alexa Voice Service.

Given XMOS’ history in the voice sector, it’s no surprise that the Xcore.ai chip will initially be targeted at voice interface applications that require AI for keyword detection or dictionary functions.

“Let’s be absolutely clear, voice is the most important AI workload at the endpoint, and probably will remain so for quite some time to come. But in order to make voice interfaces better, you’ll find that devices will become more multimodal,” Lippett said, describing a trend for using different types of sensors to make applications more context-aware, whether that’s detecting a person’s presence or detecting where they are speaking from.

Mark Lippett (Image: XMOS)

“There are a lot of opportunities to improve user experiences by not just listening to audio, but by doing more than just that,” he said.

Many applications for AI in IoT devices hinge on a combination of privacy, security and safety that requires processing to be done at the endpoint. Lippett described safety features on appliances that use voice and radar to switch off the oven if only children are present in the kitchen, for example.

Xcore.ai will therefore go to market with libraries provided for the creation of voice interfaces, but Lippett said that it has spare capacity for customers to build their own systems. A MIPI interface is included for camera input.

Xcore Architecture

The Xcore.ai chip delivers up to 3200 MIPS, 51.2 GMACCs and 1600 MFLOPS. It has 1 Mbyte of embedded SRAM plus a low power DDR interface for expansion.

Compared with a Cortex-M7 device that provides roughly the same level of integration and runs at a similar operating frequency, XMOS’ own figures put its part at 32x the AI processing performance and 15x the DSP performance.

“In the endpoint world, it has to be price-performance, there’s no point talking about one without the other,” Lippett said. “We’ve been really aggressive on price, we can come down to $1 for this part [in volume]. Broadly speaking, we’re about half the cost [of the comparable Cortex-M7 device] and we are blowing it out of the water in terms of performance.”

The Xcore is based on logical cores arranged in tiles with memory,
ALUs and vector units (Image: XMOS)

Xcore.ai is based on XMOS’ proprietary Xcore architecture, whose building blocks are logical cores that can be used for I/O, DSP, control functions or AI acceleration. There are eight logical cores on each tile, with two tiles in each Xcore.ai chip, and designers can choose how many cores to allocate to each function. Each tile also contains memory, ALUs and a vector unit to which the logical cores share access.

“Critically, they [share access] in a very predictable way,” said Lippett. “This is what’s special about the Xcore. Initially, we wanted to deliver I/O flexibility to software engineers, and hardware is not very tolerant if you miss deadlines. So the Xcore is multi-core, not because we want to farm out workloads and do things very quickly — we can do that — but really it’s multi-core because we want to give particular parts of the application their own resources, so that when it’s needed, it’s ready. It’s designed from the bottom up to deliver that kind of timing accuracy.”

Mapping different functions (I/O, DSP, control, AI) to the logical cores in firmware allows the creation of a ‘virtual SoC’, written entirely in software. In the example below, one core handles interfaces that would normally be implemented in hardware, such as I2S, I2C and LED drivers, some cores are processing the neural network, while others are running tasks that would normally be done in software. Defining all of this in software makes development faster, to match the fast-moving demands of IoT devices. It is also cheaper, Lippett said, enabling companies to create solutions that are economical even in smaller market segments.

An example application mapped onto an Xcore.ai device (Image: XMOS)
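The "virtual SoC" idea described above can be pictured as an allocation table assigning each of the chip's 16 logical cores (two tiles of eight) a role. The split and task names below are hypothetical, chosen only to mirror the example in the figure; this is a conceptual sketch, not XMOS firmware:

```python
# Illustrative model of allocating 2 tiles x 8 logical cores to
# roles in a 'virtual SoC'. Task names and the particular split
# are hypothetical, not an actual XMOS configuration.

TILES, CORES_PER_TILE = 2, 8

# Keys are (tile, core); unlisted cores remain free for the
# customer's own application code.
allocation = {
    (0, 0): "i2s_driver",    # I/O normally done in hardware
    (0, 1): "i2c_driver",
    (0, 2): "led_driver",
    (0, 3): "control",
    (1, 0): "nn_inference",  # AI: neural-network layers
    (1, 1): "nn_inference",
    (1, 2): "dsp_filter",    # DSP: e.g. audio pre-processing
    (1, 3): "dsp_filter",
}

def free_cores(alloc):
    """Logical cores left over for application code."""
    return TILES * CORES_PER_TILE - len(alloc)

print(f"{free_cores(allocation)} of {TILES * CORES_PER_TILE} "
      f"logical cores unallocated")
```

Because each core owns its assigned task outright, a deadline-critical driver never competes with the neural network for cycles, which is the timing-predictability point Lippett makes above.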

“The way we see the market evolving is that the market is demanding more diverse features, and companies will need to respond more quickly,” Lippett said. “It’s very difficult to place a two-year bet on the IoT without building a very generic platform that might not [eventually] be good enough for any segment. [With the Xcore.ai], it’s much easier to bring devices to market much more quickly, with less capex, and effectively place smaller bets on smaller markets and make those markets economical.”

How will XMOS compete against the big microcontroller makers moving into this crossover processor space?

“Not by building ARM-based SoCs! Because they do that really well,” Lippett said. “The only way to compete against those guys is by having an architectural edge. That’s about the intrinsic capabilities of the Xcore in terms of performance, but also the flexibility.”

>> This article was originally published on our sister site, EE Times.

