‘The ChatGPT moment for physical AI is here’: NVIDIA launches new AI model that’s set to revolutionize autonomous vehicles

Tesla stock is not looking so good.

NVIDIA officially debuted Alpamayo at CES 2026, a powerful new family of open-source AI models and tools the company believes represents the “ChatGPT moment” for physical robotics and autonomous vehicles, per TechCrunch. Alpamayo is designed to help self-driving cars reason their way through complicated, real-world driving situations, making autonomous navigation safer and much smarter.

“The ChatGPT moment for physical AI is here, when machines begin to understand, reason, and act in the real world,” NVIDIA CEO Jensen Huang said in a statement. He explained that Alpamayo brings true reasoning to autonomous vehicles: it allows them to think through extremely rare scenarios, drive safely in complex environments, and, perhaps most importantly, explain exactly why they made the decisions they did.

At the heart of the launch is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model that uses a chain-of-thought process. It can tackle the tough “edge cases” that have always plagued self-driving development, such as figuring out how to safely navigate a busy intersection when the traffic lights are out.

An autonomous vehicle equipped with this model can finally start thinking more like a human driver.

NVIDIA’s vice president of automotive, Ali Kani, explained how the process works during a press briefing. “It does this by breaking down problems into steps, reasoning through every possibility, and then selecting the safest path,” Kani stated.
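Kani's description maps onto a familiar selection pattern: enumerate candidate maneuvers, reason about each one, and pick the lowest-risk option. Here is a minimal sketch of that pattern in Python; every name and the toy risk scores are hypothetical illustrations, not NVIDIA's implementation.

```python
# Hypothetical illustration of "reasoning through every possibility,
# then selecting the safest path" -- not NVIDIA's actual code.
def select_safest_maneuver(candidates, assess_risk):
    """Score each candidate maneuver and return the lowest-risk one."""
    scored = [(assess_risk(m), m) for m in candidates]
    risk, safest = min(scored, key=lambda pair: pair[0])
    return safest, risk

# Toy example: a dead-light intersection where hesitation lowers risk.
toy_risk = {"proceed at current speed": 0.9, "slow and yield": 0.4, "full stop": 0.2}
choice, risk = select_safest_maneuver(list(toy_risk), lambda m: toy_risk[m])
print(f"Chosen maneuver: {choice} (estimated risk {risk})")
```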

Trust is the biggest hurdle for widespread AV adoption, and if the car can tell you why it swerved or stopped, that builds confidence immediately. Huang emphasized this transparency during his keynote. “Not only does [Alpamayo] take sensor input and activate steering wheel, brakes, and acceleration, it also reasons about what action it’s about to take,” Huang explained. “It tells you what action it’s going to take, the reasons by which it came about that action. And then, of course, the trajectory.”
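To make that transparency concrete, here is a minimal sketch of what such an explainable decision might look like as a data structure. The class and field names are hypothetical illustrations, not Alpamayo's actual output format.

```python
from dataclasses import dataclass

@dataclass
class DrivingDecision:
    """Hypothetical container for an explainable AV decision (illustration only)."""
    action: str                            # what the vehicle will do
    reasoning: list[str]                   # chain-of-thought steps behind the action
    trajectory: list[tuple[float, float]]  # planned (x, y) waypoints in meters

decision = DrivingDecision(
    action="stop before the intersection",
    reasoning=[
        "Traffic lights are dark, so treat the intersection as a four-way stop.",
        "Cross traffic is approaching from the right and has priority.",
        "Stopping is the lowest-risk of the candidate maneuvers.",
    ],
    trajectory=[(0.0, 0.0), (4.0, 0.1), (8.0, 0.1)],
)
print(decision.action)
```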

NVIDIA is making Alpamayo accessible to developers. The underlying code for Alpamayo 1 is open source and available for everyone to use on the Hugging Face platform. This is fantastic news for the industry because developers can fine-tune this massive model into smaller versions perfect for vehicle deployment. They can also use the code to train simpler driving systems or build new tools, like systems that automatically tag video data or evaluators that check if the car’s decision was actually smart.
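For developers who want to try it, fetching the release from Hugging Face could look like the short sketch below. The repository id is a placeholder assumption; check NVIDIA's official Hugging Face page for the real name.

```python
# Sketch: download the open-source release from Hugging Face.
# "nvidia/alpamayo-1" is a placeholder repo id, not confirmed by the article.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/alpamayo-1")
print(f"Model files downloaded to: {local_dir}")
```

From there, the fine-tuning and distillation workflows the article mentions would run on top of the downloaded weights with whatever training stack a team already uses.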

To ensure developers have everything they need to start building, NVIDIA is providing a robust ecosystem of tools and data. They’re releasing an open dataset containing more than 1,700 hours of driving footage. This data was collected across various geographies and conditions, focusing specifically on those rare, complex, real-world scenarios that Alpamayo is designed to solve, so that we don’t have to see robotaxis driving into an active police standoff or flouting signals from stopped school buses.

