What AI Consumes: A Status Update

1. AI Energy Consumption Today: Between Power and Opacity

 

Imagine a diligent student, coffee in hand, pulling an all-nighter to study. Training an AI model is somewhat similar, but instead of a brain, there are thousands of GPUs, and instead of coffee, it’s megawatts of electricity. All of this just so we can ask ChatGPT a few weeks later to write a poem in alexandrines about sad pandas.

 

Welcome behind the energy curtain of AI!

 

For those who, like me, work in producing educational content (texts, images, videos), the energy consumption of AI remains largely a black box. We know energy is consumed, a lot even, but it is often hard to understand where, how, and how much.

 

1.1. Training

Training a large AI model is like teaching a five-year-old everything on the Internet multiple times, while trying to instill moral and social understanding despite all the toxic tweets stored in its memory.

Take GPT-3, launched by OpenAI in 2020: it required about 1,287 megawatt-hours (MWh) to train, with an estimated 552 tCO₂e in emissions. Depending on the location of the data centers used (France, China, Iceland…), the carbon footprint can vary by a factor of 50.

Why? Because electricity, depending on whether it is generated from coal or hydropower, does not carry the same CO₂ footprint.
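As a quick sanity check on these figures, dividing the reported emissions by the reported training energy gives the average carbon intensity of the electricity implied by the estimate. A minimal Python sketch, using only the two publicly reported GPT-3 numbers above:

```python
# Back-of-envelope check: emissions = energy consumed x grid carbon intensity.
# Both inputs are the publicly reported GPT-3 estimates cited in the text.

TRAINING_ENERGY_MWH = 1287   # reported GPT-3 training energy
REPORTED_EMISSIONS_T = 552   # reported GPT-3 emissions, in tCO2e

grams = REPORTED_EMISSIONS_T * 1e6   # tCO2e -> gCO2e
kwh = TRAINING_ENERGY_MWH * 1e3      # MWh  -> kWh

# Implied average carbon intensity of the electricity used:
print(f"Implied grid intensity: {grams / kwh:.0f} gCO2e/kWh")  # ~429
```

The ~429 gCO₂e/kWh implied by these two figures corresponds to a fairly fossil-heavy electricity mix; on a near-zero-carbon grid, the same 1,287 MWh would have emitted far less, which is where the factor-of-50 variation comes from.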

 

To put this into more concrete terms: training GPT-3 is equivalent, in CO₂ emissions, to more than 200 round trips between Paris and New York by plane. And what about GPT-4 and GPT-5? Ten times more? A hundred times more? Unfortunately, there are no official figures, but an article from Towards Data Science estimates that training GPT-4 required more than 50,000 MWh. One thing is certain: the more powerful these models become, the more energy-intensive they are.

 

Now, one might think that training only happens once, and that in the grand scheme of things, this is relatively minor if it enables billions of queries afterward. But in reality, the bulk of AI’s energy consumption is still to come…

 

1.2. Usage

Training is just the tip of the iceberg. Once the model is ready, it needs to be run.

 

According to available studies, a single text-based query can consume anywhere from a few tenths of a watt-hour to several watt-hours, depending on the model used, the length of the response, the level of reasoning required, and the infrastructure involved.

 

Here are some estimated energy consumption figures reported in 2025, based on an interactive comparison table (which is worth consulting):

  • 0.61 Wh for GPT-4o mini (300 tokens)

  • 2.26 Wh for GPT-4 (300 tokens)

  • 7.4 Wh for GPT-5 (medium) (300 tokens)

  • 14.1 Wh for GPT-5 (high) (300 tokens)

  • 27.79 Wh for GPT-5 (high) (1500 tokens)

However, there is still a degree of uncertainty due to several factors:

  • Manufacturers and companies do not always disclose their figures (thanks to NDAs and commercial strategy);
  • Models evolve at an extremely rapid pace (what is true today may no longer be true tomorrow—as was the case while writing this article);
  • Each response is effectively a mini-algorithm that depends on a multitude of technical parameters, making standardization very difficult (should it be measured per token generated, per type of query, or per prompt size?).
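One way to see the standardization problem concretely is to normalize the figures from the table above per generated token. A quick sketch (the numbers are the ones quoted earlier):

```python
# Per-query energy figures from the comparison table quoted above,
# normalized to energy per generated token (in milliwatt-hours).
QUERIES = [
    ("GPT-4o mini",  0.61,  300),
    ("GPT-4",        2.26,  300),
    ("GPT-5 (high)", 14.1,  300),
    ("GPT-5 (high)", 27.79, 1500),
]

for model, wh, tokens in QUERIES:
    print(f"{model}, {tokens} tokens: {wh / tokens * 1000:.1f} mWh/token")
```

Note that GPT-5 (high) drops from 47 mWh/token at 300 tokens to about 18.5 mWh/token at 1,500 tokens: even for a single model, "energy per token" is not a stable constant, which is exactly why standardization is hard.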

Another study reports different results, estimating that a text generation query consumes on average 60 times less energy than an image generation request.

 

If we adopt a cautious (and somewhat optimistic) assumption of 2 Wh per text query of around 300 tokens (1 token ≈ 0.75 words), and consider a volume on the order of one billion GPT-4 queries per day, annual consumption exceeds 700 GWh. Of course, this figure depends on the assumptions used (current estimates suggest closer to 2.5 billion daily queries for ChatGPT, mostly on GPT-5), but it highlights a key phenomenon: the cumulative effect.
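The cumulative arithmetic is straightforward (same assumptions as in the paragraph above):

```python
WH_PER_QUERY = 2        # assumed average for a ~300-token text query
QUERIES_PER_DAY = 1e9   # assumed daily volume (a deliberately low bound)

annual_wh = WH_PER_QUERY * QUERIES_PER_DAY * 365
print(f"Annual consumption: {annual_wh / 1e9:.0f} GWh")  # 730 GWh
```

At the higher estimate of 2.5 billion daily queries, the same arithmetic gives over 1,800 GWh per year.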

 

According to The Shift Project, inference becomes the dominant phase over training after just a few weeks. More importantly, the think tank estimates that AI usage in Europe could double by 2030. In Ireland, data centers already consume more than 20% of available electricity—exceeding household consumption. Without changes, AI could account for up to 35% of data center electricity consumption by 2030, compared to 15% in 2025.

 

1.3. Orders of magnitude

To grasp the scale of the phenomenon, we need to zoom out.

Data centers consumed more than 450 terawatt-hours (TWh) per year in 2024 (according to the IEA; 420 TWh according to The Shift Project). AI is estimated to account for 10 to 15% of this energy—and that share is rising!

 

At this pace, data center consumption could exceed 1,000 TWh by 2030, and even reach 1,300 TWh by 2035 (or as much as 1,500 TWh by 2030 without major changes in current trends, according to The Shift Project)—nearly double today’s levels. And part of this surge is driven by our ever-expanding artificial intelligence systems.

 

In Europe, demand could quadruple by 2035, representing up to 7.5% of France’s total electricity consumption. Without action, this trajectory could generate up to 920 MtCO₂e per year by 2030—roughly twice France’s current annual emissions. A scenario clearly incompatible with European climate targets.

 

Key Takeaways

  • AI is not intangible: it relies on silicon, cables, water for cooling, and significant amounts of electricity.
  • Each query is not “free”: it carries a real, albeit invisible, cost.
  • While figures may vary, they all point in the same direction: AI consumes—a lot, and increasingly so.

We might have imagined artificial intelligence to be as light as a cloud, but in reality, it is “cloud” in name only.


2. Where Does This Consumption Come From? The Energy Chain of AI

 

Artificial intelligence may appear immaterial, almost abstract. We talk about the “cloud,” models, and algorithms. Yet behind every interaction lie very real machines, energy infrastructures, and material resources.

To understand AI’s impact, we need to look beyond the screen and trace the chain—from computing chips to the electricity that powers their cooling systems, and the materials they are made of.

 

2.1. Data centers

Imagine an air-conditioned warehouse filled with machines humming nonstop, like a hyperactive hive—but powered by electricity rather than pollen. These are the famous data centers where all AI operations take place.

Every time we interact with ChatGPT, a machine somewhere wakes up, works intensely for a few seconds, then goes back to standby (spoiler: with around one billion queries per day for ChatGPT alone, GPUs don’t get even a millisecond of rest). The result is data centers running continuously.

These facilities consume energy to:

  • run the chips (primarily GPUs, which are highly energy-intensive);
  • cool the entire system (using air, water, etc.);
  • operate the network (storage, power supply, redundancy, security, etc.).

Data center efficiency is often measured using PUE (Power Usage Effectiveness): a PUE of 2 means that for every 1 kWh used by servers, another 1 kWh is consumed for cooling and other functions. Today, the most efficient data centers achieve a PUE as low as 1.1 or even 1.05, while the global average remains around 1.56.
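The PUE definition translates directly into numbers. A sketch with an arbitrary 1,000 kWh server load:

```python
# PUE = total facility energy / IT (server) energy.
# For a fixed IT load, everything above a PUE of 1.0 is overhead
# (cooling, power distribution, etc.).

IT_LOAD_KWH = 1000  # arbitrary example: servers consume 1,000 kWh

for pue in (1.05, 1.10, 1.56, 2.00):
    total = IT_LOAD_KWH * pue
    overhead = total - IT_LOAD_KWH
    print(f"PUE {pue:.2f}: {total:.0f} kWh total, {overhead:.0f} kWh overhead")
```

Moving from the global average (1.56) to a best-in-class facility (1.05) cuts the non-IT overhead by roughly an order of magnitude for the same compute.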

 

2.2. Upstream Electricity

We sometimes tend to think of electricity as clean, simply because we don’t see smoke coming out of the outlet. But if that electricity is generated by coal or gas power plants, it carries a significant—albeit invisible—CO₂ footprint.

 

The energy mix refers to the distribution of different electricity sources (nuclear, solar, coal, hydropower, etc.), and it varies greatly by country—and even by region.

Examples:

  • A data center powered in Iceland (almost 100% renewable energy) will have a very low carbon footprint.
  • The same data center in Virginia (United States), partially powered by gas or coal, could see its impact multiplied by a factor of 5 to 10.
  • In China or India, where coal still dominates, the impact is often significantly worse.

This is precisely why two identical GPT-3 training runs can produce CO₂ emissions that vary by a factor of 50 depending on their location.
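The same-training-different-grid effect is just a multiplication. In the sketch below, the carbon intensities are illustrative, order-of-magnitude assumptions chosen to reproduce the ratios discussed above—not official figures for any specific country:

```python
# Hypothetical grid carbon intensities, in gCO2e/kWh (assumed values).
GRIDS = {
    "near-100% renewable grid": 20,
    "partially gas/coal grid": 150,
    "coal-dominated grid": 1000,
}

TRAINING_ENERGY_KWH = 1287 * 1e3  # reported GPT-3 training energy, in kWh

baseline = GRIDS["near-100% renewable grid"]
for name, intensity in GRIDS.items():
    tco2e = TRAINING_ENERGY_KWH * intensity / 1e6  # gCO2e -> tCO2e
    print(f"{name}: {tco2e:,.0f} tCO2e (x{intensity / baseline:.1f})")
```

The resulting ratios (≈7.5× and 50×) line up with the "factor of 5 to 10" and "factor of 50" spreads mentioned above.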

 

2.3. Material Resources

But the story doesn’t end there. Producing AI also means producing hardware—and that hardware doesn’t grow on trees.

Here’s what it takes to power artificial intelligence:

  • Electronic chips: packed with rare metals such as neodymium, tantalum, and cobalt. These materials are often extracted under questionable conditions, with significant social and environmental impacts.
  • Water: a lot of it. Water is used both to manufacture chips and to cool data centers. Some facilities use millions of liters per day to maintain acceptable temperatures. While water usage in itself is not always an issue, it can become one in regions where resources are scarce—such as in the case of data centers being developed in parts of Mexico.
  • Silicon, copper, aluminum… along with a considerable amount of embedded CO₂ from the manufacturing process.

An NVIDIA H100 GPU (currently a flagship component for AI computing in data centers) weighs about 1.7 kg. But its environmental footprint is far heavier: energy-intensive production, reliance on global supply chains, and very limited recyclability.

 

Each GPU contains dozens of interwoven materials, encapsulated in highly durable resins and layered at nanometric scales, making recycling extremely difficult. While some material value can be recovered—such as precious metals, copper, and aluminum—the rest is often too costly to recycle compared to extracting new resources, not to mention the energy required for recycling itself.

 

In short, AI is not just digital—it is a heavy industry disguised as a conversational assistant.

 

Key Takeaways

To summarize this second part:

  • AI does not exist in some ethereal “cloud”: it relies on vast amounts of very real machines.
  • These machines require electricity, cooling systems, rare metals, and water.
  • Their environmental footprint depends heavily on where they are located, how they are built, and our ability to recycle them.

And above all: the more we demand fast, powerful, and universally available AI, the more we need to fuel it—like a high-performance engine—with both energy… and physical resources.

 

3. Measuring and Comparing to Inform Decision-Making

Artificial intelligence is no longer a technological curiosity. It has become infrastructure. And like any infrastructure, it consumes resources, mobilizes industrial supply chains, and generates externalities.

The question is no longer whether AI has an energy impact—it does.

The real question now is: how should it be managed?

 

3.1. More Reliable Measurement

The environmental impact of AI is often approached through orders of magnitude—sometimes striking, often inconsistent. Methodologies vary, scopes differ, and comparing models remains complex.

Yet any serious energy policy begins with a shared measurement framework.

Recent initiatives aim precisely to structure this evaluation. The Ecologits project, for example, offers an open-source methodology to estimate the energy and carbon footprint of AI model inference based on explicit and reproducible assumptions.

 

The goal is not to produce a perfect figure (none exists), but to establish a common basis for comparison.

This reflects an important shift: the focus is no longer solely on computing power, but also on energy efficiency per task.

 

3.2. Making Measurement Visible

Public platforms are now beginning to apply these methodologies to compare conversational AI models based on their estimated energy consumption per query. The compar:IA platform, developed within the beta.gouv ecosystem, is one such example.

 

For the first time, models can be placed side by side not only in terms of performance, but also in terms of their estimated energy efficiency.

 

This is crucial, because in environmental matters, visibility shapes behavior. Once a criterion becomes comparable, it becomes a factor in decision-making.

 

In the future, a public administration, university, or company could incorporate energy footprint into its technology selection criteria—alongside security or financial cost.

 

Key Takeaways

The energy impact of AI is no longer entirely invisible.

 

After a phase marked by opacity and inconsistent estimates, shared methodologies are emerging to measure the impact of models in use. Public tools are now beginning to make these differences comparable.

 

The figures remain imperfect and dependent on assumptions. But an important shift is underway: impact is becoming measurable—therefore debatable, and ultimately manageable.

 

In energy matters, transparency is often the first step toward efficiency.

AI appears to be entering this new phase—and I hope this article has made a contribution to it, however modest.

 

Bonus

If you’d like to explore this further (because, let’s be honest, the picture isn’t easy for anyone to see clearly), here are a few additional resources (in English):

 

Measuring the environmental impact of AI inference | Google Cloud Blog

Measuring_the_environmental_impact_of_delivering_ai_at_google_scale.pdf

We did the math on AI’s energy footprint. Here’s the story you haven’t heard. | MIT Technology Review

 

A tip: take these figures with caution. As we’ve seen, economic interests do not always promote transparency. That said, these readings remain highly informative.

 

 

Article written by Jérémy Demolliens, Head of the Immersive Realities Hub, Lab e·nov™, IFP School.