NVIDIA will invest up to $100 billion in OpenAI as the ChatGPT maker sets out to build at least 10 gigawatts of AI data centers using NVIDIA chips and systems. The strategic partnership announced today is gargantuan in scale: the 10-gigawatt buildout will require millions of NVIDIA GPUs to run OpenAI's next-generation models, and NVIDIA's investment will be doled out progressively as each gigawatt comes online.

The first phase is expected to come online in the second half of 2026 and will be built on NVIDIA's Vera Rubin platform, which NVIDIA CEO Jensen Huang promised will be a "big, big, huge step up" over the current-gen Blackwell chips.

“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer t [...]
Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly. The Nvidia CEO unveiled the Agent Toolkit, [...]
Nvidia on Monday took the wraps off Vera Rubin, a sweeping new computing platform built from seven chips now in full production — and backed by an extraordinary lineup of customers that includes Ant [...]
OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company's first significant inference partnership outsi [...]
OpenAI just announced a massive $110 billion funding round, one of the biggest in Silicon Valley history. The investor list features many of the usual suspects, including Amaz [...]
OpenAI has struck a deal with Oracle to add an astounding 4.5 gigawatts of US data center capacity to power the massive workload required by its large language models. The companies haven't speci [...]
Nvidia on Monday unveiled a deskside supercomputer powerful enough to run AI models with up to one trillion parameters — roughly the scale of GPT-4 — without touching the cloud. The machine, calle [...]