AI Infrastructure: Questions

by Kimberly Mullen

After reading Niall Ferguson’s recent argument (Niall Ferguson: OpenAI’s House of Cards) that the AI boom may be a “house of cards,” I want to explore what AI and its infrastructure really mean for society, power, and the long-term trajectory of our world.

From guiding multi-billion-dollar capital allocations to wrestling with the philosophical implications of technology, I’ve come to see AI infrastructure as more than hardware and software. These investments are moral, environmental, economic, and strategic choices for our world that will shape which societies thrive, which economies adapt, and how power is distributed over the coming decades.

Here’s the thing: the benefits of AI are real. The partnership between AI and science is already changing how we diagnose illness, design materials, model climate systems, and uncover patterns in data that once took teams of researchers years to see. AI has handed the average person new tools for communication, creativity, and problem-solving. That’s only part of the upside, and it shouldn’t be minimized.

The other side of the ledger is subtler and harder to quantify. Like me, economists, technologists, policymakers, and historians are grappling with long-term consequences, trying to anticipate developments several steps ahead. These are not apocalyptic warnings; they are complex, unsettled questions that public and scholarly discourse worldwide has only just begun to address.

First: the scale and cost of the physical footprint.

Data centers demand staggering amounts of energy and water. In the U.S. alone, they consumed roughly 183 TWh of electricity in 2024 and about 66 billion liters of water for cooling in 2023, per Pew Research and Berkeley Lab via Circle of Blue. The industry says efficiencies will come, but we haven’t done a sober cost-benefit analysis of what it means to build and sustain a global infrastructure that could dwarf anything since the Industrial Revolution. Add to that the environmental impact, the strain on local grids, and the geopolitical jockeying for chips, land, and rare minerals, and it becomes clear we’re building something vast without fully understanding the long-term bill.
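
For a sense of scale, here is a quick back-of-envelope sketch. Only the 183 TWh and 66 billion liter figures come from the sources above; the roughly 4,300 TWh total for U.S. generation is my own ballpark assumption, so treat the percentage as an estimate rather than a citation.

```python
# Back-of-envelope scale check for the figures cited above. Only the
# 183 TWh and 66 billion liter values come from the article's sources;
# the ~4,300 TWh total for U.S. generation is an assumed ballpark.

DATA_CENTER_TWH_2024 = 183            # U.S. data-center electricity, 2024
US_TOTAL_GENERATION_TWH = 4_300       # assumed total U.S. generation, 2024
COOLING_WATER_LITERS_2023 = 66e9      # U.S. data-center cooling water, 2023

share = DATA_CENTER_TWH_2024 / US_TOTAL_GENERATION_TWH
print(f"Data centers: ~{share:.1%} of assumed U.S. generation")  # ~4.3%

# 66 billion liters expressed in Olympic-size pools (~2.5 million liters each)
pools = COOLING_WATER_LITERS_2023 / 2.5e6
print(f"Cooling water: ~{pools:,.0f} Olympic pools per year")    # ~26,400
```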

What will this look like 25 years from now…100 years from now…1,000 years from now?

Second: the question of data.

We’re accumulating data at a velocity no known civilization has ever seen.

At the aggregate level, Visual Capitalist’s tracking of the global datasphere shows about 2 zettabytes in 2010 rising to roughly 149 zettabytes in 2024, with approximately 182 zettabytes projected for this year. That’s not linear growth; it’s an explosion.
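
To make the “not linear” point concrete, here is a minimal sketch that computes the compound annual growth rate implied by the two Visual Capitalist figures above; nothing else is assumed:

```python
import math

# CAGR implied by the datasphere figures quoted above:
# 2 zettabytes in 2010 growing to 149 zettabytes in 2024.
zb_2010, zb_2024 = 2, 149
years = 2024 - 2010

cagr = (zb_2024 / zb_2010) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + cagr)

print(f"Implied growth rate: ~{cagr:.0%} per year")   # ~36%
print(f"Doubling time: ~{doubling_time:.1f} years")   # ~2.3 years
```

A quantity growing 36% per year doubles roughly every two and a quarter years; no linear trend behaves that way.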

On an individual level, even without universal smartphone access, the shift is staggering. Ten years ago, the average person generated only a small amount of digital data each day. Now anyone using a modern phone, laptop, streaming service, smartwatch, or cloud-integrated device can produce gigabytes daily—photos, location metadata, messages, purchases, app activity, biometrics, backups. That’s only one individual. Scale that to billions of people and the curve becomes astonishing.
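
Here is a rough order-of-magnitude sketch of that scaling. Both inputs are illustrative assumptions of mine, not figures from the sources above:

```python
# Order-of-magnitude sketch of "scale that to billions of people."
# Both inputs are illustrative assumptions, not figures from the article.

CONNECTED_PEOPLE = 5e9        # assumed number of connected individuals
GB_PER_PERSON_PER_DAY = 1.0   # assumed average daily output per person

daily_exabytes = CONNECTED_PEOPLE * GB_PER_PERSON_PER_DAY / 1e9
yearly_zettabytes = daily_exabytes * 365 / 1_000

print(f"~{daily_exabytes:.0f} exabytes per day")        # ~5 EB/day
print(f"~{yearly_zettabytes:.1f} zettabytes per year")  # ~1.8 ZB/year
```

Even these conservative assumptions imply exabytes per day, and that is still orders of magnitude below the datasphere totals above, which also count machine-generated data and replication.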

What will the world look like when we’ve stored a century’s worth of personal and institutional information at near-perfect resolution? How will that shape governance, memory, accountability, and privacy? I don’t have the answer, and I’m not convinced anyone does. It’s not only about surveillance; it’s about the architecture of the future’s collective record.

Third: systemic fragility.

Ferguson warned that our AI boom may resemble a house of cards. Whether you agree or not, the core point stands: we’ve built technologies that can scale faster than our institutional capacity to regulate or secure them. We depend on systems that are both powerful and brittle. Thinking historically, the pace of change often outstrips our capacity to adapt, and that tension is alive in AI today.

Fourth: power concentration.

As AI capability grows, ownership of the compute, the data, and the models becomes the real axis of influence. The question becomes: who gets to steer this transformation, and on what terms? Corporations? Governments? Others? A hybrid? What happens to displaced workers in need of re-skilling and economic support? Where is universal basic income (UBI) or an alternative? And what does democratic accountability look like when so much power accumulates in technical infrastructures that most people never see? What does it look like for non-democracies around the world?

I don’t come with all the answers. No one does. But what I do see is a gap between the pace of technological expansion and the depth of expert and public conversations needed to diagnose and design solutions. We need a more honest debate about tradeoffs, about what we’re building and why, and about how to balance innovation with resilience. The future of AI isn’t just a technical project. It’s a civic, economic, and philosophical one—and the choices we make now will outlive us by a long stretch.

I’d love to hear your thoughts. Which of these, or others, keeps you up at night?
