Looking at OpenAI’s financial data for too long can cause a certain kind of vertigo. By any historical standard, the revenue figures are remarkable: $2 billion in 2023, $6 billion in 2024, and $20 billion or more in 2025. This trajectory makes Facebook’s early growth appear cautious and Google’s appear measured. Most tech companies would consider ChatGPT’s 810 million weekly active users to be their greatest accomplishment. Since July 2025, the company’s monthly revenue has exceeded $1 billion. OpenAI is succeeding by any standard metric for a technology company striving for dominance.
Then you look at the other column: $17 billion. That is how much OpenAI is expected to spend in 2026 alone. Not losses in the accounting sense, with amortization and depreciation schedules softening the edges, but actual cash leaving the building to power one of the most computationally expensive operations in business history. At 57% of revenue, the burn rate is not meaningfully decreasing. The total cash consumed over five years, from 2025 to 2029, is projected at $115 billion. For comparison, the Manhattan Project cost about $30 billion in today's dollars. The Apollo program cost about $288 billion over thirteen years and a moon landing. OpenAI plans to spend $115 billion over five years in an attempt to break even on an AI chatbot.
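A quick sanity check on those two figures: if $17 billion of burn really is 57% of revenue, the implied 2026 revenue falls out directly. This is a sketch using only the article's own numbers; the implied revenue is an inference, not a reported figure.

```python
# Back-of-envelope check: what 2026 revenue is implied if a $17B burn
# equals 57% of revenue? (Both inputs are the article's figures.)
burn_2026 = 17e9                 # projected 2026 cash burn, USD
burn_share_of_revenue = 0.57     # burn as a fraction of revenue

implied_revenue_2026 = burn_2026 / burn_share_of_revenue
print(f"Implied 2026 revenue: ${implied_revenue_2026 / 1e9:.1f}B")  # ~$29.8B
```

Roughly $30 billion, which is consistent with the growth curve from $20+ billion in 2025.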
| Topic Overview: OpenAI Financial Profile — 2025/2026 | |
|---|---|
| Company | OpenAI — AI research company, maker of ChatGPT and GPT model series |
| 2025 Revenue | $20+ billion — up from $6 billion in 2024 and $2 billion in 2023 |
| 2026 Projected Cash Burn | $17 billion — including ~$10–11B in compute/inference costs alone |
| Cumulative Loss Projection | $115 billion total burn from 2025–2029 before reaching profitability |
| Peak Burn Year | 2028 — projected to reach $85 billion in annual cash consumption |
| Profitability Timeline | Estimated sometime in the early 2030s |
| Weekly Active Users | 810 million — generating ~2.5 billion queries per day |
| GPT-5 Operating Margin | -11% operating loss despite 48% gross margin (Aug–Dec 2025) |
| Burn Rate vs. Revenue | Burns $2 for every $1 earned on inference, per Microsoft’s leaked revenue share data |
| CFO | Sarah Friar — describes growth as “never-before-seen at such scale” |
| Engineer Salaries | Routinely $500K–$1M base plus equity — competing against Google, Anthropic, Meta |
The structure of the underlying economics is genuinely unusual, and it explains both why this is happening and why it isn't being discussed more loudly. Operating ChatGPT is not like operating a social network or a streaming service, where the marginal cost of an additional user is close to zero. Every query costs real money. Researchers at the University of Rhode Island estimate that GPT-5 uses about 18.9 watt-hours per query. At 2.5 billion queries per day, that works out to roughly 47.2 gigawatt-hours of daily energy demand, which over a year is enough, if you're inclined toward illustrative comparisons, to power 1.64 million American households. Adding more users does not lower the compute cost; compute scales up with them. That defies the basic premise of software economics: that companies become more profitable as they expand.
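The energy arithmetic above can be reproduced in a few lines. This is illustrative only: the per-query figure is the University of Rhode Island estimate cited in the text, and the household consumption figure (~10,500 kWh/year) is an assumed ballpark for an average US home.

```python
# Reproducing the article's energy arithmetic (illustrative sketch).
wh_per_query = 18.9            # watt-hours per GPT-5 query (URI estimate)
queries_per_day = 2.5e9        # ChatGPT queries per day

gwh_per_day = wh_per_query * queries_per_day / 1e9    # Wh -> GWh
annual_kwh = gwh_per_day * 1e6 * 365                  # GWh/day -> kWh/year

# Assumption: ~10,500 kWh/year for an average US household.
households = annual_kwh / 10_500
print(f"{gwh_per_day:.2f} GWh/day ~= {households / 1e6:.2f}M households/year")
# -> 47.25 GWh/day ~= 1.64M households/year
```

The point the numbers make is structural: the energy bill is a linear function of query volume, so it grows in lockstep with usage.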

GPT-5's operating numbers for August through December 2025 provide an exceptionally clear picture of the situation. Revenue over that period: $6.1 billion. Inference costs alone: $3.2 billion, or 52% of revenue, before a single engineer's salary is counted. Gross profit of $2.9 billion, a 48 percent gross margin, sounds healthy until operating expenses enter the picture. Once R&D, sales and marketing, headcount, and infrastructure are included, the operating margin for the same period was negative 11%. Gross profit positive, operating profit negative: a business that appears profitable while actually losing money. Microsoft's leaked revenue-share data suggests that on inference alone, the ratio is about two dollars spent for every dollar earned.
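The unit economics above reduce to simple arithmetic. A sketch using the article's figures; the operating-expense line is backed out from the stated -11% operating margin rather than reported directly.

```python
# GPT-5 Aug-Dec 2025 unit economics, from the article's figures.
revenue = 6.1e9       # revenue over the period, USD
inference = 3.2e9     # inference cost over the same period, USD

gross_profit = revenue - inference
gross_margin = gross_profit / revenue       # ~48%
inference_share = inference / revenue       # ~52% of revenue

operating_margin = -0.11                    # stated operating loss
operating_loss = operating_margin * revenue # ~ -$0.67B

# Inference: non-inference opex must be gross profit minus the (negative)
# operating result, i.e. roughly $3.6B over the period.
implied_opex = gross_profit - operating_loss
print(f"gross margin {gross_margin:.0%}, implied opex ${implied_opex / 1e9:.1f}B")
```

The backed-out number is the striking one: the non-inference cost base (R&D, sales, headcount, infrastructure) is roughly as large as the inference bill itself.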
Over time, this might be resolved. Both OpenAI’s investors and the company itself typically argue that as hardware advances, compute costs will decrease, model efficiency will rise, and revenue will eventually grow quickly enough to outpace the burn. Cost curves have historically bent in ways that seemed unthinkable when the underlying technology was first developed. While constructing the infrastructure that would later become its most profitable division, Amazon suffered large losses for years. Before the economics of manufacturing changed, Tesla spent money at alarming rates for the majority of its first ten years. Those who genuinely believe in OpenAI’s trajectory highlight those similarities. The detractors point out that while Amazon and Tesla were developing physical infrastructure with decreasing unit costs, OpenAI’s primary expense, which is executing large language model queries, does not decrease in the same structural manner.
Talent spending adds another dimension. OpenAI now regularly pays engineers between $500,000 and $1 million in base compensation, plus equity, as it competes with Google DeepMind, Anthropic, Meta's AI research division, and a growing number of well-funded startups for the same relatively small pool of people who can build and maintain frontier AI systems. The annual sales, marketing, and headcount line runs between $2 and $2.5 billion. Infrastructure expansion adds another $1 to $1.5 billion. Compute capacity grew from 0.6 gigawatts in 2024 to 1.9 gigawatts in 2025, with plans for roughly ten times more by 2030. Every category is growing faster than any realistic near-term cost savings on the inference side can offset.
As this develops, there’s a sense that the question isn’t whether OpenAI is creating something genuine—clearly it is, and the number of users makes that clear—but rather whether the underlying economic model can eventually support what’s being built on top of it. The company anticipates turning a profit sometime in the early 2030s, which is a long time to continue burning cash at this rate and necessitates ambitious assumptions about revenue growth and cost reduction even by the standards of a business that has already accomplished seemingly impossible feats. Whether the wager is profitable is still up in the air. It is evident that the wager is massive, that it is being made using borrowed funds and investor confidence, and that the current burn rate is not a transient anomaly being addressed. Running hot and hoping that the future comes before the money runs out is the business model.