What Is Nvidia Doing in Automotive?

The Nvidia GPU Technology Conference (GTC) has been the most important GPU conference for over a decade. Now, it may also be the most important AI hardware-software-deployment conference, as GPUs are driving much of AI training and inferencing activities.

 

At past GTCs, autonomous vehicles played an important role, as they were expected to be an early deployment segment for AI. The 2024 GTC presented some automotive content, but less than previous conferences. This year’s automotive content was mostly rolled into the presentations on Nvidia’s AI strategy, and in the keynote presentation, automotive AI was part of the robotics strategy. The most important automotive-related announcement was that the Blackwell GPU would be part of Nvidia’s Drive Thor centralized computer for safe and secure AVs.

This article offers perspectives on what Nvidia is doing in the automotive industry.

 

The evolution of Nvidia Drive

Drive is Nvidia’s computer platform for developing advanced driver-assistance systems (ADAS) and AVs. Nvidia Drive was introduced at CES 2015 and has grown through multiple generations. Table 1 summarizes Nvidia Drive’s evolution over the years.

 

The first generations, Drive CX and Drive PX, used the Maxwell microarchitecture and focused on digital-cockpit and ADAS applications. Even in 2015, these boards offered 256 or 512 CUDA cores for parallel compute operations. The second-generation Drive PX2, introduced in January 2016, targeted ADAS functions. Drive PX2 used the Pascal GPU architecture and increased the CPU complement to 12 64-bit Arm cores. Tesla used the Drive PX2 in the Autopilot system of its battery electric vehicles (BEVs) for several years.

 

Drive PX Xavier was introduced in January 2017 and used the Volta microarchitecture. Nvidia positioned it for use in L3 and L4 vehicles. Drive PX Pegasus arrived in September 2017, was based on the Turing architecture and was Nvidia’s first automotive product with AI functionality. The Pegasus platform provided a performance increase of about 10× over Drive PX2. Availability was scheduled for mid-2018, and as of October 2017, Nvidia had more than 200 partners developing hardware and/or software products for its Drive platform.

 

In December 2019, Nvidia introduced the Drive AGX Orin board family, and in May 2020, it announced that Orin would use the Ampere architecture. Drive Orin is still deployed for ADAS and L3–L4 vehicles and is likely to remain in production for many years. Orin has up to 2,048 CUDA cores, enough to process complex AI models in parallel. The Ampere-based Drive Orin SoC has 17 billion transistors and meets the ISO 26262 ASIL-D functional-safety standard.

 

Nvidia announced in April 2021 that the planned Drive Atlan would be based on the Ada Lovelace GPU architecture, but in September 2022, the company canceled Drive Atlan and announced a replacement, Drive Thor. At GTC 2024, Nvidia reported that Drive Thor would use the Blackwell GPU architecture and the Arm Neoverse V3, a 64-bit CPU with up to 64 cores that was announced in February 2024.

Drive Thor developments

With its basis in Blackwell, Drive Thor represents a considerable technological advance over the Ampere-based Drive Orin. Blackwell builds on the capabilities accumulated across the GPU generations since Ampere and on a further four years of Nvidia's AI experience. Drive Thor and Drive Orin are compared in Table 2. VSI Labs expects more details on Drive Thor to be available when the platform is ready for deployment.

 

Drive Thor has 12× more transistors than Drive Orin and, by Nvidia's own comparisons, delivers more than 60× higher performance. Much of that gain comes from Blackwell's support for 4-bit floating-point (FP4) arithmetic, which is much faster than the 16-bit floating-point (FP16) arithmetic Ampere relies on. FP4 is a recent addition to Nvidia's GPUs, as is FP8. FP8 and FP4 precision is often sufficient for large language models (LLMs): most of the millions to billions of parameters in an LLM can be expressed as FP8 or FP4 numbers, which speeds up AI processing and/or lowers power consumption.
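The storage savings behind low-precision inference can be illustrated with a simplified symmetric 4-bit quantizer in pure Python. This is an illustrative sketch, not Nvidia's actual FP4 format (which is a floating-point encoding with shared scale factors); it only shows why 4-bit weights cut memory traffic by 4× versus FP16.

```python
# Illustrative symmetric 4-bit quantization of a weight tensor.
# NOTE: a simplified integer-style sketch, not Nvidia's FP4 format.

def quantize_4bit(weights):
    """Map floats to 4-bit signed codes in [-8, 7] plus one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate weights from 4-bit codes and the scale."""
    return [c * scale for c in codes]

weights = [0.12, -0.50, 0.33, 0.07, -0.21]
codes, scale = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale)

# Each code fits in 4 bits, so storage drops from 16 bits/weight to 4,
# at the cost of a small rounding error per weight.
print(codes)
print(approx)
```

The rounding error per weight is at most half the scale step, which is why low precision works well for the huge, error-tolerant parameter sets of LLMs.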

 

All Blackwell products feature two reticle-limited dies connected by a 10-terabyte/second (TB/s) chip-to-chip interconnect and presented as a single unified GPU. Deployment of Drive Thor vehicles is expected to start in 2025. Blackwell also adds a dedicated reliability, availability and serviceability (RAS) engine to identify potential faults and minimize downtime. The RAS engine's AI-powered predictive-management capabilities monitor thousands of data points across hardware and software to flag potential sources of vehicle safety issues and downtime, and its in-depth diagnostic data helps localize areas of concern and plan maintenance. By pinpointing the source of issues, the RAS engine reduces turnaround time and helps prevent safety problems that could result in crashes, injuries and fatalities.

 

The transformer AI model is a neural network that learns the context of sequential data and generates new data. Such a model learns to understand and generate human-like text by analyzing patterns in large amounts of text data, and the transformer architecture is a key factor in LLM growth. Blackwell and Drive Thor can leverage transformer technology to solve AV and similar automotive problems.
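The core mechanism of the transformer is scaled dot-product attention, in which every token weighs every other token in the sequence to build context. A minimal pure-Python sketch with toy dimensions and no learned projections (real transformers add trained query/key/value matrices, multiple heads and many layers):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention for one head.
    Each token's output is a softmax-weighted mix of all value vectors,
    which is how a transformer models context across a sequence."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)          # attention weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three 2-D token embeddings attending to one another (self-attention).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attention(tokens, tokens, tokens)
print(context)
```

Each output row is a convex combination of the input vectors, so every token's new representation blends information from the whole sequence.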

 

AV software and AI models require that vast amounts of data move between program and data memories and the processors. Blackwell's decompression engine can access large amounts of memory in the Nvidia Grace CPU over a high-speed link offering 900 GB/s of bidirectional bandwidth, accelerating the database queries and data movement that underpin LLMs and other software platforms.
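To put the 900-GB/s figure in perspective, a back-of-the-envelope calculation shows how quickly a large model's weights could traverse such a link. The model size below is an illustrative assumption, not an Nvidia benchmark:

```python
# Back-of-the-envelope transfer times over a 900 GB/s link.
# The 70-billion-parameter model size is an illustrative assumption.

LINK_GB_PER_S = 900.0

def transfer_ms(num_params, bytes_per_param):
    """Milliseconds to move a model's weights across the link."""
    gigabytes = num_params * bytes_per_param / 1e9
    return gigabytes / LINK_GB_PER_S * 1000.0

fp16_ms = transfer_ms(70e9, 2.0)   # 140 GB of FP16 weights
fp4_ms = transfer_ms(70e9, 0.5)    # 35 GB of FP4 weights

print(f"FP16: {fp16_ms:.1f} ms, FP4: {fp4_ms:.1f} ms")
```

At FP4 precision the same model crosses the link in a quarter of the time, which is one reason low-precision formats and high-bandwidth interconnects reinforce each other.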

 

CUDA is the source of Nvidia's success in GPU-centric applications; by year-end 2023, CUDA downloads had surpassed 48 million. CUDA is a parallel computing platform and application programming interface (API) that lets software use GPUs to run many tasks simultaneously. That capability has made CUDA the leader in AI applications, because AI models depend on exploiting as many GPU cores and other accelerators as possible to run tasks in parallel.
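The heart of the CUDA programming model is assigning one lightweight thread to each data element. Real CUDA kernels are written in C/C++ and launched across thousands of GPU cores; as a rough CPU-side analogy only, the same one-task-per-element pattern can be sketched in pure Python:

```python
from concurrent.futures import ThreadPoolExecutor

# CPU-side analogy of CUDA's one-thread-per-element model.
# In a real CUDA kernel, GPU thread i computes out[i] = a[i] + b[i];
# here a small thread pool plays the same role on the CPU.

def vector_add(a, b, workers=4):
    out = [0.0] * len(a)

    def kernel(i):                  # analogous to a CUDA kernel body
        out[i] = a[i] + b[i]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(kernel, range(len(a))))   # "launch" one task per element
    return out

result = vector_add([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)   # [11.0, 22.0, 33.0]
```

The GPU advantage is that this pattern scales to thousands of hardware threads running simultaneously, which is exactly what AI workloads exploit.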

 

This summary shows that Nvidia is leveraging its ongoing learning and experience from its AI leadership position to rapidly add new functions and features to its GPUs and CPU chips.

Drive Thor customers

Table 3 lists Drive Thor customers that were publicly known at the beginning of April 2024, but many more can be expected. The table also includes a partial list of customers of previous Drive versions. Most customers from earlier versions of Drive will be upgraded to Drive Thor as improved vehicles are introduced.

 

Other car companies using Drive—at least for research and testing—include Audi, Chery, Hyundai, Tesla, Toyota and VW. Truck companies using Drive but not listed in the table include DAF, Einride, Kenworth, Navistar and Peterbilt. Additional AV startups using Nvidia Drive include 2getthere, AutoX, Cruise, Didi, Navya, Optimus Ride and Zoox.

NIM in automotive

Another strategy that strengthens Nvidia’s software leadership is Nvidia Inference Microservices (NIM), a way of packaging and delivering CUDA-based software that increases GPU-centric software availability. NIM services also create an opportunity for developers to reach hundreds of millions of GPUs with their custom AI software.

 

NIM services are built from Nvidia's accelerated computing libraries and generative AI models. A growing NIM software base is expected, courtesy of the standard NIM APIs, as third-party software developers are drawn to the large CUDA installed base of potential customers. NIM will be most important for AI applications, especially business-centric AI. Automotive NIM software is likely to be a growth market over the next decade, applying to software-defined vehicles (SDVs), AVs and infotainment applications.

 

In summary, NIM is a set of optimized cloud-native microservices designed to shorten time to market and simplify deployment of generative AI models. The microservices work across cloud platforms, data centers, GPU-accelerated workstations and Nvidia Drive vehicles. NIM expands the AI developer pool by abstracting away the complexities of AI model development and packaging for production using industry-standard APIs.

 

NIM services will help expand AI models and applications across the automotive industry as a NIM software base becomes available across automotive segments and use cases.

Omniverse in automotive?

In 2012, Pixar developed an interchange framework called Universal Scene Description (USD). Nvidia integrated the USD framework into its Omniverse platform, including technologies for modeling physics, materials and real-time path tracing.

 

USD was released as open-source software in 2016, enabling the generation of 3D worlds with OpenUSD applications. OpenUSD provides a rich, common language for defining, packaging, assembling and editing 3D data to create virtual worlds across many industries, including automotive, architecture, construction, engineering, entertainment, media and telecom.

 

Nvidia Omniverse is now a platform of APIs, software development kits and services that enables developers to integrate OpenUSD and RTX rendering technologies into existing software tools and simulation workflows for building AI systems. (RTX is Nvidia’s ray tracing platform for designing complex visual models.) Nvidia’s view is that Omniverse brings AI into the physical world; in essence, Omniverse can mimic and simulate the real world.

 

A key announcement at GTC 2024 was the Nvidia Omniverse Cloud, which will be available as APIs. The cloud APIs are expected to extend the reach of Omniverse as the leading platform for creating industrial digital twin applications and workflows.

 

The five new Omniverse Cloud APIs let developers integrate core Omniverse technologies directly into existing design and automation software applications for digital twins. Developers can also integrate their simulation workflows to test and validate autonomous machines like robots or AVs.

 

Expect an expanded market presence for Omniverse in the automotive industry as digital twins become increasingly important for most automotive developers. AV and SDV digital twins are likely to become key Omniverse applications.

Summary

Nvidia is a strong technology supplier to the automotive industry, and its importance is set to grow over the next decade. The automotive industry has become less significant to Nvidia as the AI explosion has overshadowed most of the company's other segments. However, the AI technology that Nvidia is developing will have a substantial impact on multiple automotive segments, from AVs and ADAS to SDVs and infotainment. Both Nvidia's chip and software technologies could advance automotive products and services.

 

Nvidia Drive continues to grow as an ADAS and AV platform. Nvidia Drive has over 25 design wins, with at least 50 models using or expected to rely on Nvidia Drive technology. Drive Orin is reaching volume use among multiple automotive OEMs and will likely continue as a production technology for many years. Drive Thor is gaining design wins, with first deployment expected in 2025 and production projected to extend through 2030 or longer.

 

Nvidia’s software strategy is as important as its hardware technologies in the automotive industry. The CUDA platform for using parallel processing software is a core reason for Nvidia’s success and is gaining strength as AI expands in automotive applications. The NIM software technology is set to extend Nvidia’s reach in automotive AI software systems.

 

Omniverse is another part of Nvidia’s software strategy that will benefit automotive development. The Omniverse platform is projected to be a key technology for creating digital twins across automotive activities, from product creation and testing to simulation and operation.

 

GTC 2024 was a milestone for Nvidia and showed its growing product portfolio and market reach across technologies and industries. The company is best known for its GPU chips, which are the core of its product strategy. The Blackwell GPU chip architecture is the latest advance in a long string of success stories. The strengths of Nvidia’s software platforms and accompanying chips and hardware will make the company a long-term leader and tough competitor.

 

 

source: eetimes.eu


