In-memory computing builds for AI
SAN JOSE, Calif. — Startups, corporate giants, and academics are taking a fresh look at a decades-old processor architecture that may be ideal for machine learning. They believe that in-memory computing could power a new class of AI accelerators up to 10,000 times faster than today’s GPUs.
The processors promise to extend chip performance at a time when CMOS scaling has slowed and deep-learning algorithms that demand dense multiply-accumulate arrays are gaining traction. The chips, still more than a year from commercial use, also could be vehicles for an emerging class of non-volatile memories.
Startup Mythic (Austin, Texas) aims to compute neural-network jobs inside a flash memory array, working in the analog domain to slash power consumption. It plans to have production silicon in late 2019, which would make it one of the first to market in this new class of chips.
“Most of us in the academic community believe that emerging memories will become an enabling technology for processor-in-memory,” said Suman Datta, who chairs the department of electrical engineering at Notre Dame. “Adoption of the new non-volatile memories will mean creating new usage models, and in-memory processing is a key one.”
Datta noted that several academics attempted to build such processors in the 1990s. Designs such as the EXECUBE, IRAM, and FlexRAM “fizzled away, but now, with the emergence of novel devices such as phase-change memories, resistive RAM, and STT MRAM and strong interest in hardware accelerators for machine learning, there is a revitalization of the field … but most of the demonstrations are at a device or device-array level, not a complete accelerator, to the best of my knowledge.”
One of the contenders is IBM’s so-called Resistive Processing Unit, first disclosed in 2016. It is a 4,096 × 4,096 crossbar of analog memory elements.
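The appeal of such a crossbar is that it performs a matrix-vector multiply in a single analog step: weights are stored as cell conductances, input activations are applied as row voltages, each cell passes a current proportional to their product (Ohm’s law), and the column wires sum those currents (Kirchhoff’s current law). A minimal sketch of that arithmetic in plain Python follows; the array sizes, conductance values, and the function name `crossbar_mvm` are illustrative assumptions, not details of IBM’s design.

```python
# Toy model of an analog resistive crossbar doing a matrix-vector multiply.
# Weights live in the array as conductances G[i][j]; inputs arrive as row
# voltages V[i]. Each cell contributes current G[i][j] * V[i] (Ohm's law),
# and each column wire sums its cells' currents (Kirchhoff's current law),
# so every multiply-accumulate happens in parallel, in place.

def crossbar_mvm(G, V):
    """Return column currents I[j] = sum_i G[i][j] * V[i]."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# A hypothetical 3 x 2 array: conductances encode a small weight matrix.
G = [[1.0, 0.5],
     [0.2, 0.3],
     [0.4, 0.1]]
V = [1.0, 2.0, 3.0]  # input activations as row voltages

print(crossbar_mvm(G, V))  # two column currents, one per output
```

In a real device the loop disappears entirely: all 4,096 × 4,096 cells conduct at once, which is where the claimed speed and energy advantage over shuttling weights between a GPU and DRAM comes from.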
“The challenge is to figure out what the right analog memory elements are — we are evaluating phase-change, resistive RAM, and ferroelectrics,” said Vijay Narayanan, a materials scientist recently named an IBM Research fellow, largely for his work in high-k metal gates.
Stanford announced its own effort in this field in 2015. Academics in China and Korea are also pursuing the concept.
To succeed, researchers need to find materials for the memory elements that are compatible with CMOS fabs. In addition, said Narayanan, “the real challenge” is that the elements need to show a symmetrical change in conductance or resistance when voltage is applied.
Continue to page two on Embedded's sister site, EE Times: "AI revives In-Memory processors."