Software is lead dog in pursuit of AI

Editor's Note: This is one of several articles in our March 2019 Aspencore Special Report on Artificial Intelligence, exploring the status and outlook of deep learning across hardware, software and use cases in the field.

SAN JOSE, Calif. — In AI, hardware is the tail and software is the dog — and this is a very active dog. One need only browse the popular arXiv.org site to find one- or two-dozen new research papers posted daily.

Wei Li, who leads a software group at Intel devoted to machine learning, rattles off a list of a dozen popular convolutional, recurrent, and other neural-network models. Adding another layer, most big cloud and chip vendors have created their own frameworks to build and run the models optimally on their platforms.

“There’s a variety of topologies and frameworks to test,” he said.
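
Those topologies are less exotic than they sound. The sketch below builds one of them, a small convolutional classifier, in PyTorch; the framework choice and layer sizes here are our illustration, not anything Intel or its partners ship.

```python
# A minimal convolutional network, one of the "topologies" Li describes.
# PyTorch stands in for any framework here; all sizes are illustrative.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel input image
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyConvNet()
logits = model(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```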

Don’t let the complexity overwhelm you, said Chris Rowen, chief executive of BabbleLabs, a startup creating DNN engines for audio tasks. “The structure of a neural net can be important to efficiency, but any of them can get the job done,” he said. “In many cases, it’s almost a question of style.”

Automated learning is perhaps the most powerful megatrend driving change in AI software. It could take decades to evolve into what is still considered a kind of science fiction: machines that can learn independently of humans. Meanwhile, researchers are helping today’s neural nets take baby steps in that direction.

“In my opinion, the future of AI is self-supervised learning,” said Yann LeCun, who is considered the father of convolutional neural nets, now used widely in computer vision and other systems. “The trend is to rely increasingly on unsupervised, self-supervised, weakly supervised, or multi-task learning, for which larger networks perform even better,” he wrote in a recent paper.
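
The core self-supervised idea is easy to state: hide part of the data and train the network to predict what is missing, so the data labels itself. The toy loop below, with sizes and names of our own choosing, shows that recipe in miniature.

```python
# Toy self-supervised objective: mask part of each input vector and train the
# network to reconstruct the hidden values. No human labels are involved; the
# data itself supplies the training signal. Sizes and names are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 64)                 # stand-in for real unlabeled data
    mask = torch.rand_like(x) < 0.25        # hide a quarter of each example
    corrupted = x.masked_fill(mask, 0.0)
    pred = net(corrupted)
    loss = ((pred - x)[mask] ** 2).mean()   # score only the hidden positions
    opt.zero_grad()
    loss.backward()
    opt.step()
```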

Generative adversarial networks (GANs) are showing promise as one technique to let systems make their own predictions. In a recent talk, LeCun showed examples of GANs used for designing fashionable clothes and guiding self-driving cars. He also pointed to work such as BERT, a pre-training technique using unlabeled data that Google recently made open-source.
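
The adversarial recipe itself is compact: a generator tries to mimic real data while a discriminator tries to tell real from fake, and each improves against the other. Below is a minimal, illustrative version on one-dimensional toy data; it is not drawn from LeCun’s examples.

```python
# A GAN in miniature: the generator learns to mimic a data distribution
# (here, a 1-D Gaussian) while the discriminator learns to tell real samples
# from fakes. Architectures and data are illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```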

Such code requires big iron and lots of memory, and future algorithms will demand even larger models. Tomorrow’s neural nets will also be more dynamic and sparse, using new kinds of basic primitives such as dynamic, irregular graphs, LeCun said.

Long term, “one hope is that training a system to predict videos will allow it to discover much of the hidden regularities, geometry, and physics of the world … [The resulting predictive models could] be the centerpiece of intelligent systems … for applications such as robotic grasping and autonomous driving,” he added.

The near-term challenge is especially acute for engineers such as Jinook Song, who designs AI blocks for Samsung’s smartphones. He recently described a 5.5-mm² block in the latest 8-nm Exynos chip that hits 6.937 TOPS when a neural net allows pruning of up to three-quarters of its weights.

He’s not tapping the brakes. Asked what he most wants for a future generation, he said some kind of learning capability in the power budget of a handset.
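
The pruning Song’s block exploits is straightforward on the software side: zero out the smallest weights so the hardware can skip them. The generic sketch below prunes three-quarters of one layer by magnitude; it is our illustration, not Samsung’s actual flow.

```python
# Magnitude pruning, the software side of what accelerators like Song's
# exploit: zero the smallest 75% of a layer's weights so hardware can skip
# them. A generic sketch, not Samsung's method.
import torch
import torch.nn as nn

layer = nn.Linear(256, 256)
w = layer.weight.data
threshold = w.abs().flatten().kthvalue(int(0.75 * w.numel())).values
w[w.abs() <= threshold] = 0.0

sparsity = (w == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~75% of weights are now zero
```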


Researchers are showing progress teaching neural nets a form of learning by having them fill in blanks in images. (Source: Yann LeCun, ISSCC)
