Leaked finances hint that OpenAI's inference may be swallowing its revenue

According to TechCrunch and blogger Ed Zitron, new internal documents outline how much money is moving between OpenAI and Microsoft. The documents suggest that the cost of simply running OpenAI's models is enormous. Based on these figures, a profitable OpenAI still seems far off. The article "Leaked finances hint that OpenAI's inference may be swallowing its revenue" appeared first on THE DECODER. [...]


Related articles on similar topics:

venturebeat
Baseten takes on hyperscalers with new AI training platform that lets you own your model weights

Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean [...]

Match Score: 128.40

venturebeat
OpenAI deploys Cerebras chips for 15x faster code generation in first major move beyond Nvidia

OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company's first significant inference partnership outsi [...]

Match Score: 116.43

venturebeat
The team behind continuous batching says your idle GPUs should be running inference, not sitting dark

Every GPU cluster has dead time. Training jobs finish, workloads shift and hardware sits dark while power and cooling costs keep running. For neocloud operators, those empty cycles are lost margin. The [...]

Match Score: 90.98

venturebeat
Together AI's ATLAS adaptive speculator delivers 400% inference speedup by learning from workloads in real-time

Enterprises expanding AI deployments are hitting an invisible performance wall. The culprit? Static speculators that can't keep up with shifting workloads. Speculators are smaller AI models that w [...]

Match Score: 89.69
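The speculator idea mentioned in that teaser can be illustrated with a toy sketch. This is a hypothetical, greatly simplified model of speculative decoding, not Together AI's ATLAS: real systems compare token probability distributions, whereas here both "models" are deterministic next-token functions over integer token ids, with the target model occasionally disagreeing so that rejections occur.

```python
def draft_model(context):
    # Cheap draft model: next token is simply last token + 1 (toy rule).
    return (context[-1] + 1) % 100

def target_model(context):
    # Expensive target model: same rule, except every 4th position emits 0,
    # so the draft's speculation is sometimes wrong and gets rejected.
    if len(context) % 4 == 0:
        return 0
    return (context[-1] + 1) % 100

def speculative_step(context, k=4):
    """Propose k tokens with the draft model, then verify with the target.

    Returns the longest verified prefix of the proposal plus the target
    model's own correction at the first mismatch, mirroring the
    accept/reject scheme used by speculative decoding.
    """
    # Draft phase: autoregressively propose k tokens with the cheap model.
    proposal = []
    ctx = list(context)
    for _ in range(k):
        t = draft_model(ctx)
        proposal.append(t)
        ctx.append(t)

    # Verify phase: the target model checks each proposed token in turn.
    accepted = []
    ctx = list(context)
    for t in proposal:
        expected = target_model(ctx)
        if t == expected:
            accepted.append(t)
            ctx.append(t)
        else:
            # First mismatch: discard the rest, emit the target's token.
            accepted.append(expected)
            break
    return accepted

tokens = [1]
while len(tokens) < 12:
    tokens.extend(speculative_step(tokens))
print(tokens)  # multiple tokens are accepted per target-model "call"
```

The payoff in real systems is that one batched forward pass of the large model can validate several draft tokens at once, so throughput rises whenever the draft model's guesses track the target's distribution, which is exactly why a static speculator degrades when workloads shift.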

venturebeat
AI inference costs dropped up to 10x on Nvidia's Blackwell — but hardware is only half the equation

Lowering the cost of inference is typically a combination of hardware and software. A new analysis released Thursday by Nvidia details how four leading inference providers are reporting 4x to 10x redu [...]

Match Score: 86.02

venturebeat
Inference is splitting in two — Nvidia’s $20B Groq bet explains its next act

Nvidia’s $20 billion strategic licensing deal with Groq represents one of the first clear moves in a four-front fight over the future AI stack. 2026 is when that fight becomes obvious to enterprise [...]

Match Score: 70.87

venturebeat
OpenAI upgrades ChatGPT with interactive learning tools as lawsuits and Pentagon backlash mount

OpenAI on Monday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that [...]

Match Score: 69.16

venturebeat
Train-to-Test scaling explained: How to optimize your end-to-end AI compute budget for inference

The standard guidelines for building large language models (LLMs) optimize only for training costs and ignore inference costs. This poses a challenge for real-world applications that use inference-tim [...]

Match Score: 68.40

venturebeat
Gong study: Sales teams using AI generate 77% more revenue per rep

The debate over whether artificial intelligence belongs in the corporate boardroom appears to be over — at least for the people responsible for generating revenue. Seven in ten enterprise revenue lea [...]

Match Score: 59.79