Neuromorphic AI chips for spiking neural networks debut

Innatera, the Dutch startup making neuromorphic AI accelerators for spiking neural networks, has produced its first chips, gauged their performance, and revealed details of their architecture.

The company has also announced that Cadence and Synopsys co-founder Alberto Sangiovanni-Vincentelli has joined the company as chairman of its board of directors. The industry veteran is currently a professor at the University of California, Berkeley.


Innatera’s chip is designed to accelerate different SNNs for audio, health and radar applications (Image: Innatera)

The Innatera chip is designed to accelerate spiking neural networks (SNNs), a type of neuromorphic AI algorithm inspired by the biology of the brain, which uses the timing of spikes in an electrical signal to perform pattern recognition tasks. SNNs are completely different in structure from mainstream AI algorithms and thus require dedicated hardware for acceleration, but they typically offer significant power consumption and latency advantages for sensor edge applications.
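For readers unfamiliar with the model, the sketch below shows a textbook leaky integrate-and-fire neuron, the simplest illustration of how an SNN carries information in spike timing rather than in numeric activations. It is a generic educational model, not a description of Innatera's circuits.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, illustrating how an SNN
# encodes information in spike timing rather than in numeric activations.
# Generic textbook model, not Innatera's design.

def lif_spike_times(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Return the time steps at which the neuron fires."""
    v = 0.0                       # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in * dt  # leaky integration of the input
        if v >= threshold:        # threshold crossing -> emit a spike
            spikes.append(t)
            v = 0.0               # reset after firing
    return spikes

# A stronger input drives the neuron to threshold sooner, so the timing
# of the first spike already says something about the input's magnitude.
print(lif_spike_times([0.3] * 20))   # weak input: later, sparser spikes
print(lif_spike_times([0.8] * 20))   # strong input: earlier, denser spikes
```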

Most other companies working on spiking neural network algorithms and hardware (for example, Prophesee) are targeting images and video streams. Innatera has decided to focus on audio (sound and speech recognition), health (vital signs monitoring) and radar (for consumer/IoT use cases such as elderly person fall sensors which maintain privacy).


Marco Jacobs (Image: Innatera)

“These sensors have time series data, instead of images which are very parallel,” said Marco Jacobs, Innatera VP marketing and business development, in an interview with EE Times. “Our array is especially good at processing time series data… it’s a good technology fit. Also, from a market perspective, we see a lot of interesting applications in this area and not that many solutions that address it.”

Another thing these three applications have in common is that, since processing is required in the sensor node, the power envelope is very tight. In Innatera’s tests, each spike event (each neuron firing in response to input data) required less than a picojoule of energy; in fact, less than 200 femtojoules in TSMC 28nm, Innatera confirmed. This approaches the amount of energy used by biological neurons and synapses. A typical audio keyword spotting application required under 500 spike events per inference, resulting in “deep sub-milliwatt power dissipation,” according to Innatera’s CEO, Sumeet Kumar. In this case, clusters of neurons firing together represent different phonemes in speech.
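Those figures can be turned into a rough back-of-envelope estimate. Only the per-spike energy and the spike count per inference come from Innatera; the inference rate below is an illustrative assumption, and the result covers the spiking array alone, not the whole chip.

```python
# Back-of-envelope estimate using the figures quoted above.
energy_per_spike = 200e-15      # < 200 femtojoules per spike event (TSMC 28nm)
spikes_per_inference = 500      # < 500 spike events per keyword-spotting inference
inferences_per_second = 10      # assumed rate for an always-on audio task

energy_per_inference = energy_per_spike * spikes_per_inference   # = 100 pJ
array_power = energy_per_inference * inferences_per_second       # = 1 nW

print(f"{energy_per_inference * 1e12:.0f} pJ per inference")
print(f"{array_power * 1e9:.0f} nW for the spiking array alone")
# Total chip power will be higher once spike encoding/decoding and I/O are
# included, consistent with the "deep sub-milliwatt" figure quoted above.
```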


Clusters of neurons firing (groups of dots here) represent detection of phonemes in speech. As the input data incorporates more noise, the same clusters are mostly present, though they are harder to spot (Image: Innatera)

Processing architecture

Innatera’s spiking neural processor uses a parallel array of spiking neurons and synapses to accelerate continuous-time SNNs with fine-grained temporal dynamics. The device is an analog/mixed-signal accelerator designed to leverage SNNs’ ability to incorporate the notion of time in how the data is processed.


Innatera’s spiking neural processor includes a massively parallel neuro-synaptic array and spike encoders and decoders (Image: Innatera)
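The block diagram refers to spike encoders that turn sensor samples into spike trains for the array. One common, generic way to do this for time series data is delta (threshold-crossing) encoding; the article does not specify which encoding Innatera's hardware actually uses, so the sketch below is purely illustrative.

```python
# Generic delta (threshold-crossing) encoder: emit an "up" or "down" spike
# whenever the signal has moved by more than a fixed step since the last spike.
# One common way to turn time-series samples into spike trains; not a
# description of Innatera's encoder.
import math

def delta_encode(samples, step=0.1):
    events = []                      # (time_index, +1 or -1) spike events
    ref = samples[0]
    for t, x in enumerate(samples[1:], start=1):
        while x - ref >= step:       # signal rose by a full step -> "up" spike
            ref += step
            events.append((t, +1))
        while ref - x >= step:       # signal fell by a full step -> "down" spike
            ref -= step
            events.append((t, -1))
    return events

signal = [math.sin(2 * math.pi * t / 32) for t in range(64)]
print(delta_encode(signal, step=0.25))
```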

One of the key aspects of Innatera’s compute fabric is its programmability, which is important for two reasons.

First, to program different SNNs onto the chip, neurons need to be connected in a flexible manner. The brain uses very complex neural network topologies to do things efficiently, and the resulting dense connections between neurons have to be recreated in silicon.


Sumeet Kumar (Image: Innatera)

Second, to optimize performance. Rather than representing information as bits in words, an SNN represents information as precisely timed spikes. The timing of those spikes needs to be manipulated at a very fine-grained level to extract insights about the data, so the neurons and the connections between them (the synapses) need to exhibit complex timing behaviors. These behaviors can be tuned via Innatera’s SDK.
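Innatera has not published its SDK interface, so the snippet below is purely hypothetical: it only illustrates the kind of timing parameters (membrane and synaptic time constants, delays, thresholds) that such a toolchain might expose, per the description above.

```python
# Purely hypothetical sketch of the kind of timing parameters an SNN
# toolchain might expose; this is NOT Innatera's actual SDK or API.
from dataclasses import dataclass

@dataclass
class NeuronConfig:
    membrane_tau_ms: float   # how quickly the membrane potential leaks away
    refractory_ms: float     # dead time after a spike
    threshold: float         # firing threshold

@dataclass
class SynapseConfig:
    weight: float            # connection strength
    delay_ms: float          # synaptic delay shifts spike timing
    tau_ms: float            # how long a post-synaptic current persists

# Tuning these constants changes how a network integrates spike timing,
# which is the fine-grained behavior the passage above refers to.
fast_neuron = NeuronConfig(membrane_tau_ms=2.0, refractory_ms=0.5, threshold=1.0)
slow_synapse = SynapseConfig(weight=0.4, delay_ms=1.5, tau_ms=10.0)
print(fast_neuron, slow_synapse)
```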

Innatera describes its chip as analog/mixed-signal, or “digitally assisted analog.” Neurons and synapses are implemented in analog silicon to maintain ultra-low power consumption. Analog electronics also allows continuous-time networks (digital electronics would require discretization). This is important for SNNs because they inherently incorporate a notion of time and need to hold particular states over a period of time.

“Doing this is much easier in the analog domain — you don’t have to shift the complexity of keeping state into the network topology,” Kumar said. “Our compute elements naturally retain that state information. This is the reason why we do things in the analog domain.”


A compute segment in Innatera’s array, where the neurons are designed to be carefully matched. Programmable synapses are arranged in a multi-level crossbar structure. (Black lines/dashes here represent input and output spikes) (Image: Innatera)
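The caption above mentions programmable synapses arranged in a crossbar. Conceptually, a crossbar lets every input spike line reach every neuron, with the synapse at each crossing contributing its weight to that neuron's input. The sketch below is a generic software view of that idea, not Innatera's circuit.

```python
# Conceptual view of a synaptic crossbar: each input spike line crosses every
# neuron's input, and the synapse at the crossing adds its weight to that
# neuron's input current. Generic illustration, not Innatera's hardware.

def crossbar_step(input_spikes, weights):
    """input_spikes: list of 0/1 per input line.
    weights: weights[i][j] = synapse from input line i to neuron j.
    Returns the current injected into each neuron this time step."""
    n_neurons = len(weights[0])
    currents = [0.0] * n_neurons
    for i, spiked in enumerate(input_spikes):
        if spiked:                        # only active lines contribute
            for j in range(n_neurons):
                currents[j] += weights[i][j]
    return currents

weights = [[0.2, 0.0, 0.5],
           [0.1, 0.7, 0.0]]
print(crossbar_step([1, 0], weights))     # only input line 0 fired
print(crossbar_step([1, 1], weights))     # both lines fired
```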

Minor inconsistencies in fabrication between compute elements on the chip, and between different chips, can be a problem for implementing neural networks accurately in the analog domain. Innatera’s solution is to group neurons into what it calls segments, which are carefully designed to match path lengths and numbers of neurons.

The segment design “essentially allows us to use the best of analog circuitry while minimizing these non-idealities that you would typically have in an analog circuit,” Kumar said. “All of this was essentially done to make sure that neurons inside a segment exhibit deterministic behavior and they function in a way that is similar to their immediate neighbors.”

Inconsistencies between different chips can cause problems when the same trained network is rolled out to devices in the field. Innatera gets around this with software.

“Mismatch and variability are dealt with deep inside the SDK,” Kumar said. “If you are a power user, we can expose some of that to you, but a typical programmer doesn’t need to bother about it.”
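Kumar does not detail how the SDK hides mismatch, so the sketch below is an assumed illustration of one plausible software scheme: measure each neuron's effective gain on a given chip, then rescale its incoming trained weights to compensate. The article only says the SDK handles mismatch, not that it works this way.

```python
# Assumed illustration of software mismatch compensation: rescale each
# neuron's incoming weights by its measured per-chip gain so the deployed
# network behaves like the trained one. Not a description of Innatera's SDK.

def compensate_weights(trained_weights, measured_gain, nominal_gain=1.0):
    """trained_weights[i][j]: weight into neuron j; measured_gain[j]: this chip's gain."""
    return [
        [w * nominal_gain / measured_gain[j] for j, w in enumerate(row)]
        for row in trained_weights
    ]

trained = [[0.5, 0.8], [0.3, 0.6]]
gains = [1.1, 0.9]                 # this particular chip's measured neuron gains
print(compensate_weights(trained, gains))
```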

Application-specific

Innatera, a spin-out from the Delft University of Technology, was already working with revenue customers on its SNN algorithms before moving into hardware and raising a seed round of €5 million (around $6 million) towards the end of 2020.

“We’ve been working with a number of customers since the time we actually started the company, and these engagements are still ongoing — they’ve matured very significantly,” Kumar said. “We hope to be able to show more demonstrations together with some of these customers in the later part of this year.”

Kumar said the company maintains its focus as a compute solutions company; that is, it will supply turnkey solutions that include both hardware and application-specific SNN algorithms.

Innatera’s first chip is suitable for audio, health and radar applications. The company’s roadmap could include further optimized chips for each of the applications.

“We architected the device in such a way so that we could accelerate a wide variety of spiking neural networks,” Kumar said. “[Our chip] can implement these networks across application domains. But as we go deeper into domains, it may be necessary to optimize the hardware design, and this is something which we will look at in the future. Right now the hardware is not overly specialized towards any specific class of applications or any style of spiking neural networks; the aim is to support a variety of them generally inside the architecture.”

Samples of the initial chip are on track to become available before the end of 2021.

>> This article was originally published on our sister site, EE Times.

