As its market dominance faces challenges from more competitors than ever before, Nvidia on Monday rolled out six new AI chips and new open models, signaling that the AI giant is determined to stay ahead of the field.
At the CES consumer electronics show in Las Vegas, the AI hardware and software provider released the Nvidia Rubin platform, a system comprising the six new chips that collectively form an AI supercomputer. The new open generative AI models include more agent-building models in the Nemotron family and new World Foundation Models in the Cosmos model suite, designed for use with humanoid robots and other physical AI applications, as well as for generating synthetic data.
Nvidia also showcased Nvidia Alpamayo, a model that powers autonomous vehicles, which the vendor previously released in early December.
A Full-Stack Approach
The slew of releases demonstrates how Nvidia is not only pursuing a full-stack approach from chips to software but also looking to enable third parties to develop their own full-stack products, said Chirag Dekate, an analyst at Gartner.
“What they’re trying to highlight here is that AI is no longer just a GPU game,” Dekate said, referring to the ubiquitous graphics processing unit chips that power training and inference for generative AI models. “It is no longer about the GPU chip; it is actually an AI supercomputer.”
One way Nvidia is showing that it is not solely a GPU vendor is by combining diverse types of AI chips within the Rubin platform. The components include the Nvidia Vera CPU, Nvidia Rubin GPU, Nvidia NVLink 6 Switch, Nvidia ConnectX-9 SuperNIC, Nvidia BlueField-4 DPU, and Nvidia Spectrum 6 Ethernet Switch.
The Rubin platform is the successor to the widely used Nvidia Blackwell platform. It uses Nvidia’s NVLink interconnect technology and various transformer technologies to accelerate agentic AI, advanced reasoning, and the scaling of mixture-of-experts models compared with Blackwell.
With the platform, Nvidia is trying to inspire its customers and audience to look beyond just GPUs and see the whole underlying infrastructure component as more of an AI factory, Dekate said.
“What Nvidia is trying to highlight is whether you’re trying to solve a problem in the context of model training, or if you’re trying to deploy models at scale, either directly or as part of your agent tech strategy, the underlying infrastructure is likely going to be an AI factory scale problem,” Dekate said. He added that this is a problem that Nvidia wants to address for data center operators, hyperscalers, and enterprise clients.
“AI is no longer just a small, simple device issue; it is actually multifaceted and multi-form factor,” Dekate continued.
This focus on AI as more than just a GPU is part of what differentiates Nvidia from competitors such as AMD, Intel and Qualcomm, he added.
“Many of the competitors struggle to meet them,” he said. “They’re starting to get there, but they’re not there yet.”
New Open Models
The new models arrive less than a month after the release of the Nemotron 3 family of open models, designed to build and implement multi-agent systems. They include Nemotron Speech, a new automatic speech recognition model that provides real-time, low-latency speech recognition for live captioning and speech AI applications, Nvidia said. Also, Nemotron RAG technology has new embedding and reranking vision language models. Nvidia also released datasets, training resources and blueprints for the models.
In addition to Nemotron, Nvidia expanded the World Foundation Model line with Cosmos Reason 2, Cosmos Transfer 2.5, and Cosmos Predict 2.5. Cosmos Reason 2 is a vision language model that enables robots and AI agents to interact with and understand the physical world. Transfer 2.5 and Predict 2.5 generate synthetic videos across different environments and conditions.
The Alpamayo 1 model is a reasoning vision language model for autonomous vehicles.
While Nvidia is not the first vendor to release open models, the way it specifies what each model is for is unique, said Mark Beccue, an analyst at Omdia, a division of Informa TechTarget.
“This is a little different,” Beccue said, noting that specialized open models are not a common approach. However, specializing open models makes sense because it enables customers to start using them faster, he said.
The specialization of the open models confirms one of the trends the Futurum market research firm has identified for 2026: faster implementation of specialized AI models, as opposed to generalized models, said Bradley Shimmin, an analyst at Futurum.
“You can see that in what Nvidia is rolling out,” Shimmin said. “They’re tackling particular problems.”
“They’re applying those models within specific domains like healthcare, autonomous vehicles and very specific use cases in the enterprises,” Shimmin added. “What they’re doing is not just trying to be the best frontier model maker, but to be the best applied intelligence maker.”
However, despite these models being open and Nvidia releasing the weights and recipes for them, enterprise adoption is still a challenge, Beccue said.
“Companies are still using proprietary models more than they are using open source right now,” he said.
Another challenge is that Nvidia’s innovation in the model and AI infrastructure market will make it harder for enterprises to avoid being dependent on the vendor, Dekate said.