A Billion Dollars of Power
1.2 gigawatts. That's the data centre lease OpenAI just signed in Milam County, Texas.
For reference: that's roughly the output of a single large nuclear reactor. One facility. For one company's AI workloads.
The Numbers
On 9 January, OpenAI and SoftBank jointly invested $1 billion in SB Energy as part of the Stargate initiative. The deal includes a massive AI-optimised data centre in rural Texas that will draw 1.2 GW when fully operational.
I work for OpenAI. I should state that plainly. But this post isn't about brand loyalty — it's about what these numbers actually mean from an engineering perspective, because the scale here is genuinely unusual.
A typical hyperscale data centre runs between 30 and 100 MW. The largest facilities operated by AWS and Google top out around 300-400 MW. Stargate's Texas site is three to four times that. It's not an incremental increase. It's a different category of facility.
What 1.2 GW Gets You
Power consumption in AI data centres breaks down roughly like this:
- ~70% goes to GPU clusters. Training and inference. The actual compute that produces model outputs.
- ~20% goes to cooling. GPUs run hot. At this density, you're looking at liquid cooling — direct-to-chip or immersion — not just air handlers.
- ~10% goes to networking, storage, and overhead. The fabric that connects the compute together.
At 1.2 GW, assuming ~70% to compute and current-generation NVIDIA hardware drawing roughly 700W per GPU, you're looking at a facility that could house over a million GPUs. That's not a typo. The maths is straightforward: 1.2e9 * 0.7 / 700 ≈ 1.2M.
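That back-of-the-envelope calculation is easy to sanity-check. A minimal sketch, using the same assumed figures as above (70% of facility power reaching the GPUs, ~700 W per GPU):

```python
# Back-of-the-envelope GPU count for a 1.2 GW facility.
# Assumptions (illustrative, not official figures): ~70% of facility
# power reaches the GPU clusters, ~700 W drawn per GPU.

FACILITY_POWER_W = 1.2e9   # 1.2 GW total facility draw
COMPUTE_FRACTION = 0.70    # share of power going to GPU clusters
GPU_POWER_W = 700          # per-GPU draw, current-gen NVIDIA

gpu_count = FACILITY_POWER_W * COMPUTE_FRACTION / GPU_POWER_W
print(f"{gpu_count:,.0f} GPUs")  # 1,200,000 GPUs
```

Move any of those assumptions and the headline number shifts proportionally, but it stays in the seven-figure range.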
A million GPUs in one building changes what you can train. Current frontier models use tens of thousands. A facility with a million shifts the constraint from "how much compute can we afford" to "do we have algorithms that can usefully scale to this level."
The Engineering Problem
Building a data centre is not the hard part. Building a data centre that efficiently uses 1.2 GW is.
The challenges are mostly boring, and that makes them easy to underestimate:
Power delivery. You need a direct connection to the grid, probably multiple substations, and enough redundancy that a single fault doesn't take the whole facility offline. In rural Texas, the grid wasn't built for this kind of point load. ERCOT (Texas's grid operator) already struggles with peak demand during summer heat. Adding a gigawatt of constant draw is non-trivial.
Cooling at scale. At these power densities, you're generating enough waste heat to warm a small town. Literally. Some data centre operators are starting to sell waste heat for district heating. In Texas, the more pressing issue is rejecting heat to an environment that's already 40°C in summer.
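To get a feel for the heat-rejection problem, here's a rough coolant flow estimate. The assumptions are mine for illustration: essentially all electrical power ends up as heat, water as the working fluid, and a 10 K temperature rise across the cooling loop.

```python
# Rough coolant mass-flow estimate for rejecting the facility's waste heat,
# using Q = m_dot * c_p * delta_T. All assumptions are illustrative.

HEAT_LOAD_W = 1.2e9   # nearly all 1.2 GW of input power becomes heat
CP_WATER = 4186       # specific heat of water, J/(kg*K)
DELTA_T_K = 10        # assumed coolant temperature rise across the loop

mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
print(f"{mass_flow_kg_s:,.0f} kg/s of water")  # ~28,667 kg/s
```

Nearly 29 tonnes of water per second, continuously. In practice operators use closed loops with heat exchangers rather than fresh water at that rate, but the figure shows why heat rejection dominates the site design.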
Network fabric. A million GPUs need to communicate. The interconnect bandwidth requirements at this scale are staggering. You're looking at custom network topologies, probably involving multiple tiers of switches, with total bisection bandwidth measured in petabits per second.
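The "petabits per second" claim can be made concrete with an order-of-magnitude sketch. The per-GPU bandwidth and the non-blocking topology here are my assumptions, not details from the deal:

```python
# Order-of-magnitude bisection bandwidth for a million-GPU fabric.
# Assumptions (mine): one 400 Gb/s NIC per GPU and a full-bisection
# (non-blocking) topology, so half the nodes can send across the cut.

GPU_COUNT = 1_000_000
NIC_GBPS = 400   # assumed per-GPU injection bandwidth, Gb/s

# Half the GPUs transmit across the bisection; 1 Pb = 1e6 Gb.
bisection_pbps = GPU_COUNT / 2 * NIC_GBPS / 1e6
print(f"{bisection_pbps:.0f} Pb/s")  # 200 Pb/s
```

Real topologies oversubscribe the upper tiers, so the deployed figure would be lower, but even a heavily oversubscribed fabric at this scale sits in the tens of petabits per second.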
Why Texas
Three reasons, and they're all practical.
First, land is cheap and available. A facility this size needs acreage, and Milam County has it.
Second, Texas has a deregulated energy market. You can negotiate power purchase agreements directly with generators. For a facility that will consume a constant gigawatt, this makes the cost of power a negotiable variable rather than a fixed tariff.
Third, Texas is permissive on construction. Fewer regulatory hurdles mean faster build-out. When you're in an infrastructure arms race, speed matters.
The Actual Question
None of this is about whether a billion-dollar data centre can be built. It can. The question is whether the AI workloads exist to justify it.
Right now, frontier model training is the primary consumer of this kind of compute. But training runs are finite — they end. A facility this size needs continuous utilisation to justify its capital cost. That means either continuous training of ever-larger models, or a massive inference workload, or both.
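The utilisation point is worth making numeric. A minimal sketch of how effective cost per GPU-hour scales with utilisation, using entirely hypothetical placeholder figures (these are not numbers from the Stargate deal):

```python
# Why utilisation dominates the economics: effective capital cost per
# GPU-hour at different utilisation levels. All figures below are
# hypothetical placeholders, not numbers from the deal.

CAPEX_PER_GPU = 30_000    # assumed all-in capital cost per GPU, USD
AMORTISATION_YEARS = 5    # assumed useful hardware life
HOURS_PER_YEAR = 8760

for utilisation in (0.9, 0.5, 0.2):
    busy_hours = AMORTISATION_YEARS * HOURS_PER_YEAR * utilisation
    cost_per_gpu_hour = CAPEX_PER_GPU / busy_hours
    print(f"{utilisation:.0%} utilisation -> ${cost_per_gpu_hour:.2f}/GPU-hour")
```

Halving utilisation doubles the effective cost of every GPU-hour you do sell. That's why "training runs end" is an economic problem, not just a scheduling one.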
The bet behind Stargate is that AI compute demand will continue scaling faster than efficiency gains reduce it. So far, that bet has been correct. But it's still a bet.
I build things for a living. I appreciate the ambition. I also know that the most impressive engineering projects are the ones where the demand shows up after the infrastructure is built, not before.
We'll see.