Byte-Sized Intelligence July 17 2025
AI that runs on water; the cost of smart
This week: We dig into Google’s hydropower bet and how big tech is rethinking energy for AI. From data center strategy to prompt-level awareness, we explore what it means to build and use AI more responsibly.
AI in Action
Google’s $3B Hydropower Bet [AI Infrastructure/Sustainability]
In a landmark move, Google has signed a $3 billion deal with Brookfield to secure long-term access to hydropower from two Pennsylvania dams. This 20-year agreement, the largest corporate hydropower deal to date, is designed to fuel Google’s growing AI infrastructure across the PJM grid, a massive electricity network that spans 13 U.S. states and Washington, D.C., and hosts many of the country’s most demanding data centers.
Hydropower offers something other clean energy sources struggle to provide: steady, around-the-clock electricity with a low carbon footprint. As demand for AI surges and workloads grow heavier, Google is betting on water to deliver the reliability that solar and wind can’t always guarantee. It’s a way to expand its ecosystem of model training, inference, and deployment without expanding emissions.
Behind the scenes, this deal reflects a broader shift in how tech companies plan their energy future. Microsoft is targeting 100% carbon-free electricity by 2030, advancing nuclear options with Constellation Energy and exploring small modular reactors to support AI growth. Meta, already running its global operations on 100% renewable energy, continues to ink large-scale solar and wind deals across key U.S. states. Locking in power over decades provides price stability, helps meet emissions goals, and strengthens these companies’ positions as responsible operators in a fast-expanding industry.
While Google hasn’t confirmed whether it will tap the full 3-gigawatt potential of the deal, the direction is clear. As AI systems become more powerful and energy-hungry, clean and reliable power is no longer optional. In the race to build smarter machines, control over infrastructure may be the real differentiator.
Bits of Brilliance
Ask your AI to summarize a report or draft an email, and it responds in seconds. Behind that instant output, however, is a surprisingly heavy footprint of carbon, water, and electricity. AI may be invisible on your screen, but it is physical somewhere, and the infrastructure powering it is anything but light.
Recent research shows that training a single large language model can emit nearly 500 metric tons of CO₂, about the same as the annual emissions of 98 U.S. homes. That’s just the beginning. Each interaction you have with a chatbot, depending on the model and location, draws power and often water too. Training GPT-3, for example, used an estimated 700,000 liters of clean water, and some projections put global AI-related water withdrawals on track to exceed 6 billion cubic meters annually by 2027, rivaling the total consumption of countries like Denmark.
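To make these comparisons concrete, here is a hedged back-of-envelope sketch that reproduces the figures above. The training emissions and water numbers come from the article; the per-home emissions value is an assumption chosen only to recover the "98 homes" comparison, not a figure from the piece.

```python
# Back-of-envelope check of the figures in this section.
# Assumption (not from the article): a U.S. home's annual electricity-related
# emissions are roughly 5.1 metric tons of CO2.

TRAINING_EMISSIONS_T = 500        # metric tons CO2 to train one large LLM (article figure)
HOME_EMISSIONS_T = 5.1            # assumed metric tons CO2 per U.S. home per year

homes_equivalent = TRAINING_EMISSIONS_T / HOME_EMISSIONS_T
print(f"One training run ≈ {homes_equivalent:.0f} U.S. homes' annual emissions")

# Water figures, converted to a common unit (article figures).
GPT3_TRAINING_WATER_L = 700_000   # estimated liters of clean water to train GPT-3
AI_WATER_2027_M3 = 6.0e9          # projected global AI water withdrawals by 2027, m³/year

gpt3_water_m3 = GPT3_TRAINING_WATER_L / 1_000  # 1 m³ = 1,000 liters
print(f"GPT-3 training water ≈ {gpt3_water_m3:.0f} cubic meters")
print(f"Projected 2027 withdrawals ≈ {AI_WATER_2027_M3:.1e} cubic meters/year")
```

The point of the exercise isn’t precision; it’s that a single unit change (liters to cubic meters, one model to millions of queries) moves these numbers from abstract to country-scale.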
There’s also a public health cost to consider. A 2025 study by researchers at Caltech and UC Riverside estimated that air pollution from U.S. data centers could lead to 1,300 premature deaths annually by 2030, with total health-related costs reaching $20 billion per year. These effects include not just early deaths, but also asthma cases and missed school and work days. And because large data centers often cluster in specific regions, they raise concerns about surveillance risks and local control over sensitive data.
So what does this mean for the rest of us? For professionals, it’s a reminder that using AI isn’t always free, even if it feels that way. The energy and water behind each response add up, especially as usage scales. That makes it worth choosing our AI tasks more wisely, reserving automation for work that truly benefits from it. For enterprises, this may soon become part of ESG reporting, with pressure to disclose the environmental impact of AI tools. For policymakers, it’s a push to treat AI not just as innovation, but infrastructure that demands oversight. And for all of us, it’s a prompt to ask: what kind of intelligence are we really building, and at what cost?
Curiosity in Clicks
Large AI models like GPT-4 or Claude 3 were trained on tens of thousands of GPUs over weeks or months. That training can consume as much electricity as hundreds of U.S. homes use in a year. It’s also not a one-time cost: as models improve, companies retrain or fine-tune them, pushing energy needs even higher.
This week, try asking your chatbot: “How much energy was used to train your model?”
You might not get an exact number, but the conversation is worth having. Many AI providers aren’t yet fully transparent about training emissions, though some, like Anthropic and Google, are starting to publish sustainability reports.
As users, we don’t always think about the servers humming in the background. But as AI becomes central to our work and lives, energy awareness matters. The goal isn’t to shame individual use, but to push for smarter, cleaner systems.
Do you think AI tools should disclose their environmental footprint by default?
Byte-Sized Intelligence is a personal newsletter created for educational and informational purposes only. The content reflects the personal views of the author and does not represent the opinions of any employer or affiliated organization. This publication does not offer financial, investment, legal, or professional advice. Any references to tools, technologies, or companies are for illustrative purposes only and do not constitute endorsements. Readers should independently verify any information before acting on it. All AI-generated content or tool usage should be approached critically. Always apply human judgment and discretion when using or interpreting AI outputs.