NVIDIA officially debuted Alpamayo at CES 2026, launching a new family of open-source AI models and tools the company believes represents the “ChatGPT moment” for physical robotics and autonomous vehicles, TechCrunch reports. Alpamayo is designed to help self-driving cars reason their way through complicated, real-world driving situations, making autonomous navigation safer and smarter.
“The ChatGPT moment for physical AI is here, when machines begin to understand, reason, and act in the real world,” NVIDIA CEO Jensen Huang said in a statement. He explained that Alpamayo brings true reasoning to autonomous vehicles: it allows them to think through extremely rare scenarios, drive safely in complex environments, and, perhaps most importantly, explain exactly why they made the decisions they did.
At the heart of the launch is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model that uses a chain-of-thought process to tackle the tough “edge cases” that have always plagued self-driving development, such as figuring out how to safely navigate a busy intersection when the traffic lights are out.
An autonomous vehicle equipped with this model can finally start thinking more like a human driver.
NVIDIA’s vice president of automotive, Ali Kani, explained how the process works during a press briefing. “It does this by breaking down problems into steps, reasoning through every possibility, and then selecting the safest path,” Kani stated.
Trust is the biggest hurdle for widespread AV adoption, and if the car can tell you why it swerved or stopped, that builds confidence immediately. Huang emphasized this transparency during his keynote. “Not only does [Alpamayo] take sensor input and activate steering wheel, brakes, and acceleration, it also reasons about what action it’s about to take,” Huang explained. “It tells you what action it’s going to take, the reasons by which it came about that action. And then, of course, the trajectory.”
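Taken together, Kani’s and Huang’s descriptions map onto a simple decision pattern: enumerate candidate maneuvers, reason about each one, select the safest, and report the action alongside its rationale and trajectory. Below is a minimal Python sketch of that pattern; every name in it (CandidateManeuver, plan_drive, the toy risk scores) is a hypothetical illustration of the idea, not NVIDIA’s actual Alpamayo API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "break down, reason, select the safest
# path" pattern Kani describes. None of these names come from NVIDIA's
# code; a real VLA model would produce reasoning and trajectories from
# a learned network, not hand-written rules.

@dataclass
class CandidateManeuver:
    name: str                              # e.g. "yield", "proceed slowly"
    trajectory: list[tuple[float, float]]  # sampled (x, y) waypoints
    risk: float                            # 0.0 (safe) to 1.0 (dangerous)
    rationale: str                         # chain-of-thought-style explanation

def plan_drive(candidates: list[CandidateManeuver]) -> CandidateManeuver:
    """Step through every candidate and select the lowest-risk one."""
    return min(candidates, key=lambda c: c.risk)

# Toy scenario: a busy intersection with the traffic lights out.
candidates = [
    CandidateManeuver(
        name="proceed at speed",
        trajectory=[(0, 0), (10, 0), (20, 0)],
        risk=0.9,
        rationale="Lights are out; cross traffic may not yield.",
    ),
    CandidateManeuver(
        name="treat as four-way stop",
        trajectory=[(0, 0), (2, 0), (4, 0)],
        risk=0.1,
        rationale="Stopping and taking turns is the safe convention "
                  "when signals fail.",
    ),
]

chosen = plan_drive(candidates)
# Report the action, the reasoning behind it, and the trajectory,
# mirroring the explainability Huang describes.
print(f"Action: {chosen.name}")
print(f"Reasoning: {chosen.rationale}")
print(f"Trajectory: {chosen.trajectory}")
```

In the real model, the candidates, risk estimates, and rationale would come out of the 10-billion-parameter network rather than a hand-written list; the sketch only shows the shape of the select-the-safest-path loop.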
NVIDIA is also making Alpamayo accessible to developers. The model and underlying code for Alpamayo 1 are open source and available on the Hugging Face platform. That is good news for the industry: developers can fine-tune the massive model into smaller versions suited to in-vehicle deployment, use it to train simpler driving systems, or build new tools, such as systems that automatically tag video data or evaluators that check whether a car’s decision was actually sound.
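For developers who want to experiment, pulling an open checkpoint from Hugging Face typically looks like the snippet below, which uses the real huggingface_hub library; the repository ID, however, is a placeholder assumption, so check NVIDIA’s Hugging Face organization for the actual Alpamayo 1 repository name.

```python
from huggingface_hub import snapshot_download

# Download every file in an open model repository from Hugging Face.
# NOTE: "nvidia/alpamayo-1" is a placeholder repo ID, not a confirmed
# name; substitute the repository NVIDIA actually publishes.
local_dir = snapshot_download(repo_id="nvidia/alpamayo-1")
print(f"Model files downloaded to: {local_dir}")
```

From there, the usual workflow applies: load the checkpoint, then fine-tune or distill it into a smaller student model on your own driving data.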
To ensure developers have everything they need to start building, NVIDIA is providing a supporting ecosystem of tools and data, including an open dataset containing more than 1,700 hours of driving footage. The data was collected across varied geographies and conditions, focusing specifically on the rare, complex, real-world scenarios Alpamayo is designed to handle, so that we no longer have to see robotaxis driving into an active police standoff or flouting the signals of stopped school buses.
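If the footage is published in Hugging Face’s standard dataset format, an assumption rather than a confirmed detail, developers could stream clips on demand instead of downloading all 1,700-plus hours up front. The dataset ID below is likewise hypothetical.

```python
from datasets import load_dataset

# Stream an open driving dataset rather than downloading it in full.
# NOTE: "nvidia/alpamayo-driving" is a hypothetical dataset ID; use
# whatever name NVIDIA actually publishes on Hugging Face.
dataset = load_dataset("nvidia/alpamayo-driving", split="train", streaming=True)

# Peek at the first record's fields without materializing the rest.
first_clip = next(iter(dataset))
print(first_clip.keys())
```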