A few months ago, a power plant in Chicago looked like it was finally heading for the exit.
The Fisk Generating Station — an old site that includes oil-fired “peaker” units — had a retirement date on the calendar. Then the economics changed. Higher prices and tighter reliability margins made the plant suddenly useful again, and its owner pulled back the retirement notice, arguing it now had a case to keep running.
That small reversal captures a bigger shift happening across parts of the US power system: the clean transition is still moving forward, but the AI boom is bending the path. In PJM Interconnection — the country’s largest grid operator and the region most exposed to data-center growth — Reuters reported that about 60% of fossil plants slated for retirement have postponed or canceled those plans, with many of them peakers.
For climate-minded readers, this feels like bad news with an uncomfortable logic: when demand rises faster than new clean supply can connect, grids lean on what is already built. And what is already built, in many places, is fossil capacity — including some of the most polluting units in the fleet.
What “peakers” do — and why they matter now
A peaker plant is the grid’s emergency responder. It is designed to start quickly when demand spikes, when a large power line trips, or when wind and solar output changes faster than expected. In theory, peakers run only a small number of hours each year.
But “only a small number of hours” does not mean “small impact.”
The Government Accountability Office (GAO) counted 999 fossil-fueled peakers in the US using a practical definition: plants that run at a capacity factor of 15% or less. GAO found these peakers represented about 19% of total nameplate capacity, yet they produced only 3.1% of annual net generation.
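For readers who want that definition in concrete terms, here is a minimal sketch of GAO’s screen. The 15% threshold comes from the report; the plant figures below are hypothetical.

```python
# GAO's practical screen: a plant is a "peaker" if its annual capacity
# factor, actual output divided by maximum possible output, is 15% or less.

HOURS_PER_YEAR = 8_760
PEAKER_THRESHOLD = 0.15   # from the GAO report

def capacity_factor(net_generation_mwh: float, nameplate_mw: float) -> float:
    """Fraction of a plant's maximum possible annual output actually produced."""
    return net_generation_mwh / (nameplate_mw * HOURS_PER_YEAR)

# Hypothetical 100 MW unit that produced 50,000 MWh over the year:
cf = capacity_factor(net_generation_mwh=50_000, nameplate_mw=100)
print(f"capacity factor: {cf:.1%}")         # 5.7%
print(f"peaker? {cf <= PEAKER_THRESHOLD}")  # True
```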
That imbalance matters because it tells you what peakers really are: a large reserve of capacity the grid can call on when it is short — a reliability backstop sitting mostly idle, waiting for a stress event.
The second reason peakers matter is pollution. GAO found peakers often have higher pollution rates when they run (partly because many are older and less efficient). For example, GAO’s analysis found the median sulfur dioxide emission rate for natural gas peakers was 1.6 times that of non-peakers.
So if the system begins to lean on peakers more often, the climate and air-quality consequences can show up quickly — and locally.
AI changed the demand story faster than the grid can respond
The US grid was built for a world where electricity demand grew slowly. Utilities and regulators planned on long timelines. Then, very quickly, demand began to change shape.
Electric cars and heat pumps added new loads. Industrial policy started pulling factories back onshore. Extreme heat pushed peaks higher. And then came data centers, increasingly driven by AI workloads that can be both energy-hungry and always-on.
The data is moving in one clear direction. Pew Research Center, citing International Energy Agency (IEA) estimates, puts US data-center electricity use at 183 TWh in 2024 — more than 4% of total US electricity consumption — and projects it could rise to 426 TWh by 2030.
DOE’s own analysis, based on work from Lawrence Berkeley National Laboratory, frames it similarly: data centers were about 4.4% of US electricity use in 2023, and could rise to 6.7%–12% by 2028, depending on how fast AI demand grows.
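A rough back-of-envelope check, using only the figures above, shows what those projections imply as a share of US electricity use. It is deliberately crude: it pairs a 2023 share with a 2024 TWh figure, and the demand-growth rates are my assumptions, not the sources’.

```python
# Back-of-envelope: what do those projections imply as a share of total US
# electricity use? Rough on purpose: it pairs a 2023 share with a 2024 TWh
# figure, and the growth rates below are assumptions, not sourced.

dc_2024_twh = 183      # IEA estimate, via Pew
dc_2030_twh = 426      # IEA projection, via Pew
dc_share_2023 = 0.044  # LBNL/DOE estimate

implied_total_twh = dc_2024_twh / dc_share_2023   # ~4,160 TWh
for growth in (0.00, 0.01, 0.02):                 # assumed annual total-demand growth
    total_2030 = implied_total_twh * (1 + growth) ** 6
    share = dc_2030_twh / total_2030
    print(f"total growth {growth:.0%}/yr: data centers ~{share:.1%} of 2030 use")
# ~10.2%, ~9.6%, ~9.1%: in the same range as DOE's 6.7%-12% band for 2028
```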
The IEA adds an important point: data centers cluster geographically, and that makes integration harder than national averages imply.
A grid is not stressed by a percentage. It is stressed by a location. If a huge new load lands in one area — Northern Virginia, for example — the challenge is not whether the US as a whole has enough capacity. It is whether the local power system can deliver firm electricity reliably without triggering massive costs or delays.
That is how we end up with peakers staying online. They are already there, already connected, already permitted, and already paid off. In a tight system, those are valuable traits.
PJM’s warning signs: fast load growth, slow supply growth
PJM is the key region to watch because it is large, it has major data-center hubs, and its reliability outlook has become more anxious.
PJM’s 2025 long-term load forecast projects rising peaks over time, and it explicitly notes adjustments made for data-center growth in multiple zones.
Regulators have also signaled concern about how quickly forecasts are shifting. In a February 2025 statement, two FERC commissioners pointed to “extraordinary load growth expectations” and highlighted that PJM had increased its 2030 load forecast by large amounts, while the amount of new generation and storage entering service in 2024 was comparatively small.
Meanwhile, PJM has been trying to fix a different problem that turns into a climate problem: the interconnection queue.
On paper, the queue is full of proposed clean projects. In practice, many do not get built quickly. PJM has been reforming its process and says it has studied roughly 160 GW of projects during its transition work, reducing the transition queue and aiming for faster timelines going forward.
This matters because interconnection is the gate between “clean capacity that exists in spreadsheets” and “clean capacity that actually supplies power.” If that gate is slow, existing fossil plants gain leverage — even if their long-term future looks bleak.

The market signal that made owners rethink retirements
If you want the simplest indicator of grid stress in PJM, look at the price of reliability.
In PJM’s capacity market — where the grid pays resources to be available in the future — the 2027/2028 auction cleared at the price cap: $333.44 per MW-day.
PJM’s auction report notes the reliability requirement increased in part due to a jump in forecast load, “largely” tied to additional large loads.
Capacity auctions are not a perfect mirror of reality, but they tell you what the system is willing to pay for “being there” when needed. A capped price is the market’s way of shouting: reliability is scarce.
That is exactly the kind of signal that makes retirement plans wobble. If you are a plant owner, and your old peaker can suddenly earn more simply by staying open, “temporary extension” starts to look like rational business.
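Some rough arithmetic shows why. At the cleared price, availability alone is worth six figures per megawatt per year; the plant size below is hypothetical, and this ignores performance penalties and any energy-market revenue.

```python
# What clearing at the cap pays a resource just for being available.
# $/MW-day is PJM's capacity unit: dollars per MW of capacity, per day.
# The plant size is hypothetical; performance penalties and energy-market
# revenue are ignored.

price_per_mw_day = 333.44   # 2027/2028 PJM auction clearing price (the cap)
plant_mw = 150              # hypothetical aging peaker

annual_per_mw = price_per_mw_day * 365   # ~$121,706 per MW-year
annual_plant = annual_per_mw * plant_mw
print(f"~${annual_per_mw:,.0f} per MW-year")
print(f"~${annual_plant / 1e6:.1f}M per year for staying open")   # ~$18.3M
```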
That is the dynamic Reuters captured: plants that were supposed to retire are now being kept around because the region is worried about having enough capacity — and because the market is paying for it.
Why this becomes a climate and community issue — not just a grid issue
From a climate perspective, the danger is not that peakers exist. The danger is that “short-term fixes” can harden into long-term dependence.
Peakers were meant to cover rare spikes. But if load growth is continuous and grid bottlenecks persist, peakers can move from “emergency” to “routine backup.” Even if they run only a few more hours, emissions can rise and air quality can worsen — especially when the plants are old and concentrated near populations.
The Fisk example highlights the local dimension. Reuters reported EPA data showing sulfur dioxide emissions from the site’s remaining oil-fired units ranged from a few tons to a couple dozen tons per year, despite limited operations.
And then there is the equity question: where are these peakers located?
GAO found that historically disadvantaged racial or ethnic communities tend to be closer to peaker plants.
Reuters also cited research that peakers are disproportionately located in lower-income communities of color, and referenced work suggesting formerly “redlined” areas have been more likely to have peakers built nearby in recent decades.
This is why the “AI power story” can become politically sharp. Data centers are often out of sight — warehouse-like buildings with few people inside. The backup generation they revive is not out of sight. It is in neighborhoods that already carry a heavy environmental burden.
So the question becomes hard to avoid: who gets the upside of AI, and who gets the exhaust?
Who pays: the ratepayer fight is beginning
The reliability problem is physical. But the politics comes down to bills.
Capacity costs are only one part of an electricity bill, but they can push bills higher — and the public response tends to be immediate. Reuters reported notable bill increases in parts of PJM territory as system costs climbed.
States are starting to treat big loads differently, partly to protect other customers. Virginia regulators approved a new rate class for very large customers — those with demand of 25 MW or more — effective in 2027, along with minimum demand charges designed to reduce the risk that ordinary customers pay for infrastructure built for mega-users.
In plain terms, regulators are moving toward: if you are going to add city-sized electricity demand, you should pay like it.
This approach could spread. It will not solve the supply problem, but it changes the incentives. If data centers face more transparent costs, they have a reason to pursue flexibility, onsite resources, or better siting choices.
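To make the mechanism concrete: a minimum demand charge bills a large customer on a contractual floor even in months when it draws less. The 25 MW threshold is from the Virginia order; the floor fraction and rate in this sketch are purely illustrative.

```python
# Minimum demand charge: bill the greater of actual peak demand or a
# contractual floor, so infrastructure built for a mega-user is not
# stranded on other customers. The Virginia order applies to loads of
# 25 MW or more; the floor fraction and rate below are hypothetical.

FLOOR_FRACTION = 0.60       # hypothetical: pay for at least 60% of contracted demand
DEMAND_RATE_PER_KW = 15.0   # hypothetical $/kW-month demand charge

def monthly_demand_charge(contracted_mw: float, actual_peak_mw: float) -> float:
    """Demand portion of the monthly bill, billed on the higher of actual or floor."""
    billed_mw = max(actual_peak_mw, FLOOR_FRACTION * contracted_mw)
    return billed_mw * 1_000 * DEMAND_RATE_PER_KW   # convert MW to kW

# A hypothetical 100 MW data center that peaked at only 40 MW this month
# still pays as if it drew 60 MW:
print(f"${monthly_demand_charge(100, 40):,.0f}")   # $900,000
```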
The “shortcut” everyone is watching: co-location
There is another path emerging — one that could either ease grid stress or deepen the fairness debate: co-locating data centers next to power plants.
The pitch is simple. If a data center sits beside a generator, it may draw power without relying on as much new transmission. That can speed deployment. It can also reduce strain on the broader grid during certain conditions.
But it comes with thorny questions: does it shift costs to other customers? Does it reduce supply available to the grid during tight periods? Who gets priority in emergencies?
FERC has intervened, directing PJM to revise its tariff and create clearer rules for co-located “behind-the-meter” loads, arguing the current approach lacks consistency and clarity.
This is likely to become one of the defining governance fights of the AI power era: whether co-location becomes an efficient pressure valve — or a way to privatize reliability while socializing grid costs.
How to break the peaker trap — without pretending it is easy
It is tempting to say: “replace peakers with batteries.” Batteries absolutely help, and the economics keep improving. But replacing what peakers do is harder than replacing what peakers are.
Peakers provide fast response, local reliability support, and capacity during rare events. Many grid batteries are built around shorter-duration output, and GAO notes the practical constraints that can make it difficult to replace peakers, especially in dense urban areas.
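A quick worked example shows the duration gap; all the numbers here are illustrative.

```python
# Why "replace peakers with batteries" is harder than it sounds: a battery's
# output is limited by stored energy, while a peaker runs as long as it has
# fuel. All numbers below are illustrative.

peak_mw = 100           # load the resource must cover
event_hours = 12        # a long heat-wave stress event
battery_duration_h = 4  # a common grid-battery design point

energy_needed_mwh = peak_mw * event_hours          # 1,200 MWh
battery_energy_mwh = peak_mw * battery_duration_h  # 400 MWh per 100 MW unit
shortfall = energy_needed_mwh / battery_energy_mwh
print(f"need {energy_needed_mwh:,} MWh; a 4-hour, 100 MW battery holds {battery_energy_mwh} MWh")
print(f"that is ~{shortfall:.0f}x the stored energy, absent mid-event recharging")
```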
So the realistic near-term goal is not “eliminate peakers tomorrow.” It is: reduce the hours we need them, quickly enough that delayed retirements do not turn into a decade-long extension.
Four moves matter most:
1) Make large loads part of reliability, not just a driver of it.
A portion of AI workloads can be flexible. If data centers commit to demand response, meaning real, enforceable curtailment during tight hours, that can reduce peak stress and shrink the need for peakers. This is the fastest “new resource” available in many places (a toy sketch of the idea follows this list).
2) Speed up interconnection, so clean supply can actually arrive.
Queue reform is not glamorous, but it is decisive. When clean projects spend years waiting for studies and upgrades, existing fossil capacity gains bargaining power. PJM’s reforms are moving, but the system needs to turn “paper capacity” into real capacity faster.
3) Build transmission — and stop treating it like optional infrastructure.
Transmission does not just move electrons; it increases flexibility and reduces the need for local backup plants. It is also slow to build, which is why the pressure is rising now.
4) Be honest about “clean firm” and contract for what works.
If data centers want round-the-clock power, the clean transition needs more than intermittent generation. That means more storage, but also more credible clean firm options where they can be built and permitted. The system needs solutions that meet real reliability constraints, not press-release reliability.
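On the first move, here is a toy sketch of the logic: when the grid signals a tight hour, a data center sheds deferrable batch work and keeps latency-sensitive serving online. Every name and number here is hypothetical.

```python
# Toy demand-response logic for a data center: during a grid stress hour,
# pause deferrable batch work (e.g., model training) and keep only
# latency-sensitive serving online. All names and loads are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    load_mw: int
    deferrable: bool   # can this job wait until the stress event passes?

def data_center_load(workloads: list[Workload], grid_stress: bool) -> int:
    """Total MW drawn, shedding deferrable jobs when the grid is tight."""
    if not grid_stress:
        return sum(w.load_mw for w in workloads)
    return sum(w.load_mw for w in workloads if not w.deferrable)

fleet = [
    Workload("model training", 60, deferrable=True),
    Workload("inference serving", 30, deferrable=False),
    Workload("cooling baseline", 10, deferrable=False),
]
print(data_center_load(fleet, grid_stress=False))  # 100 MW
print(data_center_load(fleet, grid_stress=True))   # 40 MW, i.e. 60 MW of relief
```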
The bigger lesson: the grid is being asked to evolve at software speed
The peaker comeback is not a failure of the clean-energy transition. It is a collision between two timelines.
AI infrastructure can be built quickly. Power infrastructure cannot. Permitting, interconnection, transmission, and local opposition are slow-moving parts. In the gap between “new demand arrives” and “new clean capacity connects,” peakers fill the space — because they are already there.
That is why this moment matters. If the response to AI-driven demand is to keep old fossil plants running for “just a bit longer,” the climate consequences will be felt — and the local consequences will be concentrated in communities that have hosted the burden for decades.
But there is another possible outcome: AI becomes the forcing function that finally modernizes the grid — faster interconnection, more transmission, serious demand flexibility, and a more honest approach to who pays for what.
Fisk in Chicago is a warning. The question for 2026 is whether it becomes a pattern — or a turning point.