Nvidia Unveils Vera Rubin AI Accelerator, Eyes $1T AI Chip Market by 2027
Nvidia chief executive Jensen Huang used the company’s annual GTC 2026 conference in San Jose, California, to unveil a new AI accelerator named Vera Rubin and to frame this year as a turning point for AI inference. He projected AI chip sales of about $1 trillion by 2027, saying the industry has entered a phase of rapid growth in inference workloads.
The Vera Rubin accelerator combines Nvidia’s Vera central processing unit with its Rubin graphics processing unit to power AI data-center servers designed to handle surging inference demand as intelligent agents become more prevalent in enterprise and consumer applications.
Nvidia described the Vera CPU as roughly twice as efficient as, and about 50% faster than, its predecessor. It is designed to boost throughput and responsiveness for tasks such as AI coding and deploying AI agents; Nvidia said that 256 Vera CPUs can be assembled into a single data-center rack.
The company said discussions are under way with major data-center operators, including Meta and Alibaba, about adopting Vera, and noted that server makers such as Dell Technologies, Hewlett Packard Enterprise, and Lenovo are developing Vera-based servers. The AI coding company Cursor was also cited as adopting Vera to improve the performance of its coding agents.

Nvidia also announced Dynamo 1.0, an open-source software platform intended to orchestrate AI workloads by tightly coupling GPUs and memory in data centers. Nvidia claimed Dynamo 1.0 can boost the inference performance of its Blackwell GPUs by as much as seven times and reduce token costs in large-language-model tasks.
In another move, Nvidia introduced a new GPU interconnect technology called Kyber, designed to pack 144 GPUs into a compute tray using a vertical stacking approach that reduces latency. The company said Kyber will be used in its next-generation Vera Rubin Ultra accelerators.
For U.S. readers, the announcements matter because Nvidia’s hardware underpins much of the global AI infrastructure, including services used by American tech firms and cloud providers. The projected surge in AI chip demand underscores continued investment in semiconductor supply chains and the competitiveness of U.S.-based AI ecosystems, with potential implications for pricing, vendor dominance, and the pace of enterprise AI adoption.
Industry analysts note that the shift from AI training to deployment, where inference dominates, tightens the focus on specialized processors and software stacks. Bank of America projects the CPU market will more than double, from about $270 billion in 2025 to roughly $600 billion by 2030, highlighting the strategic importance of Vera CPUs alongside Nvidia’s GPUs for future data-center ecosystems.