
AI workloads raise sense of urgency for next-gen performance

By Rick Merritt | July 10, 2019

Google and its hyperscale rivals crave a roadmap of performance leaps to stay at the bleeding edge of deep learning, a demanding and rapidly expanding new style of computing. The chip industry needs something to replace Moore's law, which is delivering diminishing returns at rising cost.

So, Google engineer Cliff Young invited chip vendors to beat a path to a new kind of AI computer, perhaps built from some new kind of transistor.

Today, Google is running whole buildings full of its third-generation TPU, a huge honking accelerator flanked by enormous DRAM stacks. The chips are directly connected over a proprietary interconnect into liquid-cooled pods, running problems expressed in a newly streamlined numerical format, Google's 16-bit bfloat16.

Just six years after noticing matrix multiplication jobs were emerging as a new workload on its cloud services, Google has pulled out all the stops to speed them up. It is about to report stellar benchmarks for its systems. And it’s not enough.
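For a concrete picture of those workloads (this sketch is illustrative, not from the article): the kernel at their heart is a dense matrix multiply, which TPUs run in the compact bfloat16 format. A minimal example in JAX, which targets TPUs through the XLA compiler; the matrix sizes are arbitrary:

```python
# Minimal sketch of the matrix-multiply workload TPUs accelerate.
# Sizes are illustrative; jax.jit lets XLA compile the op for a TPU's
# systolic matrix unit (or fall back to CPU/GPU elsewhere).
import jax
import jax.numpy as jnp

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(k1, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(k2, (1024, 1024), dtype=jnp.bfloat16)

matmul = jax.jit(jnp.dot)
c = matmul(a, b)
print(c.dtype, c.shape)  # bfloat16 (1024, 1024)
```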

“We’re ripping things up and building them anew. We need new techniques to apply. I’d love to learn about your field and build the next generation of TPUs together,” Young said in a keynote here.

“We need as much Moore’s law as we can get, and we need more options, too — some possibly raw,” he said, suggesting new kinds of analog or optical computing, perhaps with emerging non-volatile memories.

Young’s group is even toying with radical ideas like letting signal-to-noise effects in analog chips directly represent accuracy levels in neural network models, sidestepping the whole digital computing layer. Everything is up for discussion.
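To make that idea concrete (this sketch is my construction, not Google's design): treat the analog device's signal-to-noise ratio, rather than a digital bit width, as what sets a weight's effective precision.

```python
# Hypothetical model of the analog idea: additive Gaussian read noise
# scaled so the signal-to-noise ratio matches the device's snr_db.
# Higher SNR means higher effective precision; no digital quantization.
import jax
import jax.numpy as jnp

def analog_read(weights, key, snr_db=30.0):
    signal_power = jnp.mean(weights ** 2)
    noise_power = signal_power / 10.0 ** (snr_db / 10.0)
    noise = jnp.sqrt(noise_power) * jax.random.normal(key, weights.shape)
    return weights + noise

w = jnp.linspace(-1.0, 1.0, 8)
print(analog_read(w, jax.random.PRNGKey(0), snr_db=20.0))
```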

“We’re on a collision course with scaling limits. We can only get machines to be so dense — even Google will reach limits to how many data centers and systems we can run,” he said.

Inside a Google data center in Oklahoma. (Source: Google)

No one can predict whether hyperscalers and chip architects can define fundamentally new chip techniques that rewrite the rules of computing. However, the hyperscalers clearly have the money.

Amazon, Facebook, and Google together have combined annual revenues of more than $400 billion. Overall, the top seven hyperscalers manage about 70% of all public cloud services and buy about a third of all data center gear, said Vlad Galabov, a principal analyst at IHS Markit.

By contrast, the U.S. government last year rolled out DARPA's ambitious Electronics Resurgence Initiative to give the chip industry a shot in the arm. However, its budget is just $1.5 billion spread over five years. An industry group has called for at least tripling federal spending on core semiconductor research.

Young spoke at the second annual AI Design Forum, hosted by the SEMI trade group and Applied Materials. The event's implicit theme last year was that AI is the new Moore's law. The theme is the same this year, but the urgency has been dialed up a couple of notches.

>> This article was originally published on our sister site, EE Times: "AI Seeks New Moore's Law."
