Charting these early days of AI

Editor's Note: This is one of several articles in our March 2019 Aspencore Special Report on Artificial Intelligence, exploring the status and outlook of deep learning across hardware, software and use cases in the field. All the stories in the report are available at the links listed below.

SAN JOSE, Calif. — “We need to get to real AI because most of today’s systems don’t have the common sense of a house cat!” The keynoter’s words drew chuckles from an audience of 3,000 engineers who had seen demos of systems recognizing photos of felines.

There’s plenty of room for skepticism about AI. Ironically, the speaker in this case was Yann LeCun, the father of convolutional neural networks, the model that famously identified cat pictures better than a human.

It’s true, deep neural networks (DNNs) are a statistical method — by their very nature inexact. They require large, labeled data sets, something many users lack.

It’s also true that DNNs can be fragile. The pattern-matching technique can return dumb results when the data sets are incomplete and misleading results when they have been corrupted. Even when results are impressive, they are typically inexplicable.

The emerging technique has had its share of publicity, sometimes bordering on hype. The fact remains that DNNs work. Though only a few years old, they are already being applied widely. Facebook alone uses sometimes-simple neural nets to perform 3×10¹⁴ predictions per day, some of which are run on mobile devices, according to LeCun.

Cadence and Synopsys both have reported projects using them to help engineers design better chips. Intel helped Novartis use them to accelerate drug discovery. Siemens is using them to accelerate processing of medical images, and scientists are using them to speed up reading genomes of cancer patients.

They are being employed to identify endangered snow leopards in the wild. And they even help brew beer in England and Denmark.


Early work in computing with neural networks dates back to the 1950s. (Image: ISSCC, Yann LeCun)

Deep learning is here to stay as a new form of computing. Its application space is still being explored. Its underlying models and algorithms are still evolving, and hardware is trying to catch up with it all.

It’s “a fundamental transformation of the computing landscape,” said Chris Rowen, a serial entrepreneur who is “100% all in on deep learning” as CEO of BabbleLabs, a startup developing deep-learning models for audio applications.

“I used to write an algorithm and tell a system what to do,” he said. “Now, we have a class of methods not described by an explicit algorithm but a set of examples, and the system figures out the pattern.”

“Just about everything software touches can be influenced by this new method, especially where the data is most ambiguous and noisy — where a conventional programmer could not distinguish relevant from irrelevant bits,” he added.
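A minimal sketch of the shift Rowen describes, with hypothetical toy data (an audio-volume classifier, in a nod to his field) and assuming scikit-learn is available: the old approach encodes the rule by hand, while the new one hands the system labeled examples and lets it infer the boundary on its own.

    # The old way: an explicit, hand-written rule.
    def is_loud_enough(volume_db):
        return volume_db > 60  # the programmer picks the threshold

    # The new way: labeled examples stand in for the rule.
    # (Hypothetical toy data; assumes scikit-learn is installed.)
    from sklearn.linear_model import LogisticRegression

    samples = [[40], [52], [58], [63], [71], [85]]  # volume in dB
    labels  = [0,    0,    0,    1,    1,    1]     # 0 = quiet, 1 = loud

    model = LogisticRegression().fit(samples, labels)
    print(model.predict([[66]]))  # the system, not the programmer, learned the cutoff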

It’s early days for this revolution in computing, forcing designers to pull out all the stops in the quest for more performance. Researchers at the non-profit lab OpenAI said that hardware needs tenfold performance improvements a year to keep up with the demands of training DNNs.

“That’s an amazing requirement … [chips] have to take a scale-up approach, you can’t do 10× a year any other way,” said David Patterson, a veteran researcher whose Turing Award lecture with John Hennessy dubbed this a new golden age in computer design, driven in part by deep learning.
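For context on that tenfold figure: OpenAI’s widely cited analysis found the compute used in the largest training runs doubling roughly every 3.4 months, and a quick back-of-the-envelope check (a sketch added here, not from the article) shows why that compounds to about an order of magnitude per year.

    # Why a ~3.4-month doubling in training compute implies roughly
    # tenfold growth per year (doubling period per OpenAI's analysis).
    doubling_period_months = 3.4
    growth_per_year = 2 ** (12 / doubling_period_months)
    print(f"~{growth_per_year:.1f}x per year")  # ~11.5x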

In this special report, we give a glimpse of the status and outlook of the chips, the software, and the uses of this emerging technology.

To explore the status and outlook of AI in greater depth, check out all the stories listed below that are part of this special report.

It’s Still Early Days for AI
AI is still in its infancy, with some of the most interesting accelerators yet to be disclosed, software still evolving, and benchmarks yet to be fleshed out and exercised.

AI Silicon Sprouts in the Dark
Deep learning has spawned work on a wide variety of novel chips, but the most interesting architectures have yet to be designed, let alone benchmarked.

AI Code Wags Hardware – Vigorously
Deep-learning models, frameworks, and techniques like reinforcement learning are moving faster than you can carve out paths in silicon.

Why I Joined MLPerf
A microprocessor analyst tells how he got involved in an effort to benchmark deep-learning systems, what he has learned so far, and what he wants engineers to know about the work ahead.

Kamen Aims to Deliver AI to FedEx
The engineer behind the Segway and the FIRST Robotics Competition discusses his team’s current work for FedEx on a delivery robot that uses neural networks.

AI Trolls for Data Center Woes
Neural networks require time, expertise, and plenty of computing power, said a senior engineer reporting on a project at Hewlett Packard Enterprise.

China Sees U.S. Ahead in AI
Your competitive position in the global rush to deep learning varies depending on where you sit, according to a roundup of recent articles from EET-China.

>> This article was originally published on our sister site, EE Times: “It's Still Early Days for AI.”
