Christian Graham
04 Mar 2025
A decade or so ago, we took a team trip to Down House, home of Charles Darwin. Strolling around his garden and peering into the greenhouse there, it was easy to picture his own contemplative walks at a house he described as being "at the extreme verge of the world".
Darwin famously took his time with his theory of natural selection. I wonder whether modern technology would have helped – or hindered?
Picture Darwin in his study, papers spread across his desk, wrestling with the evidence for natural selection. But this time, he has an AI assistant: a large language model trained exclusively on 19th-century and earlier texts. Would this AI, shaped by Victorian-era scientific and theological discourse, help Darwin refine his arguments, stress-test his ideas and get On the Origin of Species into print faster? Or would it nudge him toward caution, mirroring the establishment’s resistance to radical ideas and warning him of the risks?
It’s a fun thought experiment, but it’s more than that. Darwin’s AI isn’t just his story. It’s ours. Technology never stands apart; it encodes the values of its time, shaping what follows. Right now, we’re building AI with our priorities etched into it. What might that lock in for tomorrow?
How technology fossilises values
It’s easy to think of technology as innovative, a driver of change, but it often does the opposite: it locks in the assumptions and priorities of the era in which it was developed.
The telegraph: A 19th-century internet, or was it?
When the telegraph revolutionised communication in the 19th century, it wasn’t a neutral tool - it reflected the biases of its creators. Morse code, originally optimised for English and Latin-based alphabets, posed challenges for languages with non-Roman scripts, requiring complex adaptations that delayed or complicated their integration into global communication networks.
Urban planning: Machine-age thinking that still shapes cities
Mid-20th-century urban planning, driven by modernist obsessions with efficiency, reshaped cities around cars, concrete and rigid zoning laws that split homes from businesses. Decades later, we’re stuck with car-dependent sprawl, disconnected communities and urban designs that thwart sustainability - proving how deeply 20th-century priorities are now embedded in our built environment.
These examples show that once a technology embeds particular values, those values can persist for generations - even when they stop serving us. So what happens when today’s AI, trained on early 21st-century data, carries our assumptions into the future?
A future written by today’s AI
Let’s look towards a possible future.
It’s a grey, damp morning in Manchester, 2040, the air thick with wet concrete and bus exhaust. At 7:30 AM, Amal steps out, glancing at the AI bulletin glowing by the door: “Personal Carbon Footprint: 70th percentile. Adjust to meet Net Zero 2050 targets.”
Heading to the tram, Amal queries their headset AI: “What’s the greenest route today?” The answer: a 20-minute detour that cuts travel emissions by 5%. It’s Amal’s call, but the system’s nudge is loud and clear.
Across town, protestors mass outside the Manchester City Carbon Tribunal. Its AI-driven system fines residents who bust their monthly emissions cap. Critics shout that it hammers working-class neighbourhoods: those stuck with shoddy transport and draughty flats. A city official, facing cameras, shrugs: “The AI runs the numbers. Don’t like it? Change your life.”
No one built this to be unjust. It’s just an AI doing its job: optimising for carbon cuts, personal accountability and cold, hard data. Yet baked into it are early 21st-century blind spots: climate as your burden, not the system’s. And AI as some impartial oracle.
This is technological lock-in. Not a grand conspiracy, but a thousand quiet choices, hardening into a future we can’t easily unwrite.
The future impact of AI-driven values lock-in
At first glance, the idea of AI preserving today’s values might sound appealing. In some cases, it could even be beneficial. For example:
- Protecting against authoritarian shifts – AI systems trained on democratic principles and human rights frameworks could help prevent future backsliding, ensuring that core protections don’t disappear with political cycles.
- Keeping institutional knowledge alive – AI could act as a long-term memory bank, preventing the loss of hard-won lessons in public health, environmental governance, and social justice.
- Making governance and decision-making more efficient – AI could speed up climate action, urban planning and crisis response, creating more stable and effective systems.
But stability isn’t always a good thing. The same features that make AI useful could also make it inflexible, resistant to new ideas and blind to the possibility of better alternatives:
- Old biases risk becoming permanent – AI trained on today’s moral and economic frameworks might prioritise corporate-led sustainability initiatives over grassroots action or overvalue GDP growth as the main measure of success.
- Future breakthroughs could struggle to take hold – Just as a Victorian-trained AI might have discouraged Darwin from publishing, future scientists, activists, and policymakers could find themselves fighting against AI-driven inertia.
- AI governance might become self-referential – If AI models continually cite their own outputs as authoritative sources, they could create self-reinforcing knowledge loops, making early 21st-century assumptions feel like eternal truths.
- Technology stops being a tool for change – If AI systems shape environmental, legal, and economic policies based on past precedent, it becomes harder for movements that challenge the status quo to gain traction. Instead of being a force for progress, AI becomes a force for keeping things exactly as they are.
We can already see early signs of this happening. Predictive policing tools, trained on historical crime data, direct officers back to the same neighbourhoods - prioritising past data over future possibilities. The more we embed today’s norms into these systems, the harder it will be to course-correct later.
Designing AI for an open future
We don’t have to accept a future where AI fossilises the values of the early 2020s. We still have choices about how these systems evolve. But we need to act now.
- Make AI adaptable, not static – AI models should be trained on diverse and evolving perspectives, not just historical data.
- Expand who gets to shape AI’s priorities – Governance should include civil society, environmental groups and marginalised communities, not just tech companies. AI providers should be transparent and provide accessible, plain-language explanations of how their tools work, with a focus on enabling critical public engagement.
- Ensure AI can challenge precedent, not just follow it – AI should help us question, test and refine ideas, not just reinforce existing systems.
- Advocate for alternative data governance models – These should respect digital rights and allow communities to make fully informed decisions about how their data is used.
- Embed a sustainable approach to developing AI – Promote energy-efficient models and prioritise reducing societal and environmental harms at every stage of AI’s lifecycle.
So the question is: Do we want AI to challenge us - or keep us stuck?
Friends of the Earth has collaborated with environmental organisations and digital rights campaigners to produce a guide to principles and practices for embedding environmental justice in the development and use of AI. Want to know more? Please get in touch.