Neuromorphic chip researchers seek better AI

SAN JOSE, Calif. – Kwabena Boahen believes a better AI is imminent.

The Stanford professor is one of dozens of researchers working on chips modeled on the human brain. Such chips promise orders of magnitude more computation than today’s processors at a fraction of the power consumption.

Braindrop, his latest chip, beats Nvidia’s Tesla GPUs in energy efficiency and outpaces similar processors from other academic groups. He is already working to secure funding for a next-generation effort that could do even better, probably using ferroelectric FETs made at Globalfoundries.

The problem with all these so-called neuromorphic chips is that they are missing a key piece of the puzzle. Researchers believe they understand the analog process the brain uses for computing and the spiking technique it uses to communicate efficiently among neurons. What they don’t know is how the brain learns.

It’s a fundamental piece of the algorithm that’s still missing. Researchers like Boahen are optimistic, following a trail of promising clues, but they still lack the brain’s equivalent of back-propagation, aka backprop.
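For readers unfamiliar with spiking models, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the kind of unit chips like Braindrop emulate. All names and parameter values are illustrative, not Braindrop’s actual design:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters).

    The membrane potential leaks toward rest while integrating input;
    when it crosses threshold, the neuron emits a spike and resets.
    """
    v = v_reset
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leaky integration of the input
        if v >= v_thresh:             # threshold crossing -> spike
            spikes[t] = 1.0
            v = v_reset               # reset after firing
    return spikes

# A constant drive yields a regular spike train; stronger drive fires faster.
spike_count = lif_neuron(np.full(1000, 1.5)).sum()  # spikes over 1 s of simulated time
```

The spike-and-reset dynamic is the point: neurons exchange sparse, one-bit events rather than dense numbers, which is where the power savings come from.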

In the related field of deep learning, backprop is the heart of the training process. It’s painfully slow and requires banks of expensive CPUs or GPUs with tons of memory working offline, but it is delivering stellar results on a wide range of pattern recognition problems.
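As a rough illustration of what backprop actually does, here is a tiny two-layer network trained by hand in NumPy. The layer sizes, learning rate, and data are arbitrary stand-ins, not any production recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: the forward pass computes predictions, and
# backprop pushes the output error backward through each layer to get
# the gradients used for the weight updates.
X = rng.normal(size=(64, 8))                     # toy inputs
y = rng.normal(size=(64, 1))                     # toy targets
W1 = 0.1 * rng.normal(size=(8, 16))              # hidden-layer weights
W2 = 0.1 * rng.normal(size=(16, 1))              # output-layer weights

for step in range(100):
    h = np.tanh(X @ W1)                          # forward: hidden layer
    pred = h @ W2                                # forward: output layer
    err = pred - y                               # gradient of MSE loss w.r.t. pred
    grad_W2 = h.T @ err                          # backprop through output layer
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2))  # chain rule through tanh
    W2 -= 1e-3 * grad_W2                         # gradient-descent update
    W1 -= 1e-3 * grad_W1
```

Even at this toy scale, the pattern holds: every weight update needs the full backward sweep, which is why training runs offline on banks of processors.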

The problem with backprop, and with deep learning in general, say researchers like Boahen, is that it is artificial. It does not use neurons and techniques modeled on the brain, which crunches through supercomputer-class tasks on the equivalent of a 35-W power source.

“There’s a huge opportunity in this space. A lot of applications are not served by deep neural networks that run in the cloud with batch requests that create latency,” said Boahen in an interview with EE Times.

For example, neuromorphic chips could monitor and analyze vibrations on bridges in real time with a few microwatts from an energy harvester, only communicating when a human needs to take action. “We should think about how we can give everything — not just cloud services — a nervous system,” he said.
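The bridge scenario amounts to event-driven sensing: analyze continuously at the edge, transmit only on anomalies. A toy sketch of the idea, with the window and threshold purely hypothetical:

```python
import numpy as np

def monitor(vibration_stream, window=256, threshold=3.0):
    """Yield an alert only when vibration energy is anomalous.

    Continuous analysis happens locally (cheap); the radio, the
    power-hungry part, wakes up only for events that need a human.
    Window size and threshold here are illustrative.
    """
    buf, baseline = [], None
    for sample in vibration_stream:
        buf.append(sample)
        if len(buf) == window:
            energy = float(np.mean(np.square(buf)))
            if baseline is None:
                baseline = energy              # calibrate on the first window
            elif energy > threshold * baseline:
                yield energy                   # rare uplink: alert a human
            buf.clear()

# Usage: for energy in monitor(sensor_samples): send the alert upstream.
```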

Boahen’s optimism was echoed at a recent workshop for leading researchers in the field.

“We want to expand the space of the types of computations we can perform. A lot of interesting computations are done in the brain that fall outside deep learning,” said Mike Davies, who manages a neuromorphic computing lab at Intel.

“Deep learning uses a crude approximation of a neuron, but it’s useful and got traction thanks to backprop, which enables offline training. It’s not a neural-inspired idea; it’s stochastic gradient descent — but it works really well,” Davies added in a talk at the workshop.
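The contrast Davies draws fits in a few lines of code: the deep-learning “neuron” is a static weighted sum, and SGD is the generic rule that trains it. A hypothetical sketch:

```python
import numpy as np

# The "crude approximation" of a neuron used in deep learning:
# a weighted sum through a static nonlinearity -- no membrane
# dynamics, no timing, no spikes.
def relu_neuron(x, w, b):
    return np.maximum(0.0, x @ w + b)

# Stochastic gradient descent: step the weights against a noisy,
# minibatch estimate of the loss gradient. Nothing brain-like
# about it, but it trains deep networks remarkably well.
def sgd_step(w, grad, lr=1e-2):
    return w - lr * grad
```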

Stanford’s Braindrop beat Nvidia’s Tesla and other research chips in some tests. (Source: Stanford)
