SAN JOSE, Calif. — At an annual event here, Google announced a laundry list of ways it is expanding its use of deep learning, along with a new TPU 3.0 chip to drive them. In perhaps the most surprising AI-related news, sister company Waymo said that it will launch a driverless ride-hailing service in Phoenix later this year.
In a keynote, Google chief executive Sundar Pichai addressed rising concerns about the negative impacts of machine learning and the tech industry in general. He discussed new initiatives and examples of how Google aims to make a positive difference in everything from accessibility to fake news and smartphone addiction.
“Technology can be a positive force, but we can’t be wide-eyed about the innovations [that] technology creates,” he said. “Very real questions are being raised about the impact of advances and the role they play — the path forward has to be calibrated carefully.”
The slate of new Google applications using machine learning includes:
- Smart displays from JBL, Lenovo, and LG using Google Assistant
- An improved Google Assistant that parses more complex queries
- Computer-vision capabilities integrated into camera apps in smartphones from 11 vendors
- A new set of machine-learning APIs in the next generation of Android
- An extension of autocorrect that can suggest whole sentences or phrases
The most surprising announcement, however, came from outside that list: Waymo will launch a ride-hailing service in Phoenix later this year using self-driving cars.
“That’s just the beginning,” said John Krafcik, chief executive of Waymo, a division of Google’s parent company, Alphabet. “We are building a better driver for ride hailing, logistics, and personal cars. Our technology is an enabler for all these industries, and we will partner with many companies.”
Test users in Phoenix have been riding in Waymo’s self-driving cars for some time, said Krafcik. To date, its fleet has driven more than 6 million miles on public roads and 5 billion miles in simulations.
Waymo partnered with Google’s machine-learning unit, Google Brain, in 2013, applying deep learning to cut its pedestrian-detection error rate 100-fold. Using Google’s TensorFlow framework and TPUs, it now trains models 15 times faster and has developed models that filter out sensor noise caused by snow.
Google has deployed a new version of its TPUs that uses liquid cooling, a first for a Google data center, boosting the performance of system clusters eight-fold to “well over 100 petaflops,” said Pichai.
Continue to page two on Embedded's sister site, EE Times: “Google tips TPU 3.0 as AI expands.”