The company will also make its AI models for autonomous vehicles available to developers.
At the company’s GPU Technology Conference (GTC) in Suzhou, China, Nvidia CEO Jensen Huang took to the stage to introduce Drive AGX Orin, the next generation SoC in the company’s automotive portfolio.
Orin follows Drive AGX Xavier, launched just under 2 years ago at CES 2018. Xavier is Nvidia’s current flagship SoC for AI acceleration in vehicles.
Orin, at 17 billion transistors, is almost double the size of Xavier, which had 9 billion, and it offers nearly 7x the performance (200 TOPS for INT8 data). Despite its size, Orin also offers 3x the power efficiency of Xavier, the company said.
“[This is] a huge boost [in performance], but it’s not just about the TOPS, it’s about the architecture being designed for very complex workloads, very diverse and redundant algorithms that have to run inside of an autonomous vehicle, that will be handled by the Xavier today and Orin in the future,” said Danny Shapiro, senior director of automotive at Nvidia.
Orin combines 12 Hercules ARM64 CPU cores with next-generation Nvidia GPU cores and new deep learning and computer vision accelerators, details of which the company did not reveal.
It will be used in autonomous vehicles (across designs from Level 2 to Level 5) and in robotics, where many neural networks and other applications must run simultaneously while achieving ISO 26262 ASIL-D levels of safety. As part of the Nvidia Drive platform, Orin will be software-compatible with Xavier.
The Orin family will include a range of configurations based on a single architecture, and will be available for customer production runs in 2022.
Nvidia also announced a partnership with Didi, an app-based transportation provider (similar to Uber) active in Asia, Latin America and Australia.
Didi will use Nvidia GPUs in its data centers to train machine learning algorithms, and the Nvidia Drive platform for inference in its Level 4 autonomous vehicles. Didi spun out its autonomous driving business unit into a separate company in August, and will also launch virtual GPU cloud services for customers based on Nvidia GPUs.
In a separate announcement, Nvidia revealed it will make pre-trained models for the deep neural networks (DNNs) it developed for Nvidia Drive freely available to autonomous vehicle developers. These include models for the detection of traffic lights and signs, as well as other objects like vehicles, pedestrians and bicycles. They also include path perception, gaze detection and gesture recognition algorithms.
Importantly, these models can be customized using tools provided by the company, and can be updated using federated learning. Federated learning is a technique in which training is done locally at the edge, preserving data privacy, before a central model is updated with the training results from multiple sources.
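To make the idea concrete, here is a minimal federated-averaging sketch in Python with synthetic data. It is a generic illustration of the technique described above, not Nvidia's actual tooling: each "edge" client trains on its own private data, and only model weights (never raw data) are sent back to be merged into the central model.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a simple linear model locally via gradient descent; data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Merge client models into the central model, weighted by local dataset size (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated edge clients, each holding private data drawn from the same task
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: only weights are exchanged
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # close to the underlying true weights [2.0, -1.0]
```

Weighting each client's update by its dataset size keeps the merged model unbiased when clients hold different amounts of data; in a real deployment the clients would be vehicles or a carmaker's fleet servers rather than in-process loops.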
“The AI autonomous vehicle is a software-defined vehicle required to operate around the world on a wide variety of datasets,” said Jensen Huang, CEO of Nvidia. “By providing AV developers access to our DNNs and the advanced learning tools to optimize them for multiple datasets, we’re enabling shared learning across companies and countries, while maintaining data ownership and privacy. Ultimately, we are accelerating the reality of global autonomous vehicles.”
>> This article was originally published on our sister site, EE Times.