
New architectures essential for AI at the edge

July 09, 2018


GRENOBLE, France — Addressing the “memory wall” and pushing for a new architectural approach that enables highly efficient computing for rapidly growing artificial intelligence (AI) applications are key areas of focus for Leti, the French technology research institute of CEA Tech.

Speaking to EE Times at Leti’s annual innovation conference here, Leti CEO Emmanuel Sabonnadière said there needs to be a highly integrated and holistic approach to moving AI from software and the cloud into an embedded chip at the edge.

“We really need something at the edge, with a different architecture that is more than just CMOS, but is structurally integrated into the system, and enables autonomy from the cloud — for example, for autonomous vehicles, you need independence from the cloud as much as possible,” Sabonnadière said.

He pointed to Qualcomm’s bid for NXP as a key indicator of the drive toward more computing at the edge. “Why do you think Qualcomm is buying NXP? It’s for the sensing, and to put digital behind the sensing.”

Emmanuel Sabonnadière

To address the computing architecture paradigm, Sabonnadière said that he hopes for some breakthroughs in Leti’s collaboration with professor Subhasish Mitra’s team in Stanford University’s Department of Electrical Engineering and Computer Science. Mitra’s work, in development for some time — and funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, Semiconductor Research Corp., STARnet SONIC, and member companies of the Stanford SystemX Alliance — focuses on a new processing-in-memory architecture, built on dense 3D interconnections, for abundant-data applications.

“We have a deep conviction that this is a way forward to address ‘more-than-Moore’ challenges and have asked professor Mitra to create a demonstrator,” said Sabonnadière, referring to the need to validate the approach in silicon.

At the conference, Mitra said a computing nanosystem architecture using advanced 3D integration is necessary for the coming superstorm of abundant data, where computational demands exceed processing capability.

“We have to process the data to create the decisions, but there’s so much ‘dark’ data which we just can’t process,” Mitra said. “Look at Facebook, for example – it took 256 Tesla P100 GPUs to train ImageNet in one hour, which would previously have taken days.”
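For readers unfamiliar with the “memory wall” that Sabonnadière and Mitra are describing, a standard roofline-style back-of-envelope calculation shows why moving data, rather than raw compute, becomes the bottleneck. The peak-compute and bandwidth figures below are illustrative assumptions for the sketch, not numbers from Leti, Stanford, or Facebook:

```python
# Roofline-style "memory wall" estimate. The bandwidth and peak-FLOPS
# values here are illustrative assumptions, not Leti or Stanford figures.

PEAK_FLOPS = 10e12   # assumed accelerator peak: 10 TFLOP/s
DRAM_BW = 100e9      # assumed off-chip DRAM bandwidth: 100 GB/s

# Arithmetic intensity (FLOPs per byte moved) needed to stay
# compute-bound; below this "ridge point," the chip runs at memory
# speed no matter how fast its arithmetic units are.
ridge_point = PEAK_FLOPS / DRAM_BW
print(f"Need >= {ridge_point:.0f} FLOPs per byte to stay compute-bound")

# Example: a dot product performs 2 FLOPs (multiply + add) for every
# 8 bytes of float32 inputs read, i.e. 0.25 FLOPs/byte -- far below
# the ridge point, so it is memory-bound.
dot_intensity = 2 / 8
attainable = min(PEAK_FLOPS, DRAM_BW * dot_intensity)
print(f"Dot product attains ~{attainable / 1e9:.0f} GFLOP/s "
      f"of a {PEAK_FLOPS / 1e12:.0f} TFLOP/s peak")
```

Processing-in-memory designs of the kind Mitra’s team is exploring attack this ridge point directly: by placing compute next to, or inside, the memory arrays, each byte travels a far shorter distance, raising effective bandwidth and lowering the energy spent per access.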

>> Continue reading page 2 of this article on our sister site, EE Times: "Addressing 'Memory wall' is key to Edge-Based AI."

 
