How might we influence AI for good?

Mary Stevens ponders on where the power lies to influence the future direction of AI for the good of planet and people.

Mary Stevens, 23 Aug 2023

What if… when the UK Government hosts a global conversation about AI regulation later this year, the conversation were not (mostly) about business productivity, national security and future existential risk, but about the AI-fuelled existential crises that are happening right here, right now? Both the crises in people’s everyday lives AND the damage that we are doing to the planet - turbocharging extractive capitalism and creating a new hunger for resources. As Meredith Whittaker, former Googler and president of the tech company Signal, puts it: “I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.”

Can small interventions influence the direction of AI?

In our recent AI lab we asked ourselves the question “How might we act to shape the direction of the new class of AI technologies to serve the goal of human flourishing in a healthy, regenerating ecosystem?”. Ultimately, this is a question about the purpose of new technologies. Do they exist to accelerate and extend the logic of extractive capitalism, or to serve human and planetary wellbeing? There is a huge imbalance in the resources we bring to addressing these questions, pitting civil society and campaigning organisations on the one hand against the seemingly limitless capital of big tech on the other. This is where a ‘leverage points’ approach can help: how can we make a small intervention that has the potential to be picked up and amplified to influence the bigger picture?

In our lab we surfaced some possible interventions. These included:

  1. A targeted influencing strategy for tech workers. A number of the emerging AI companies have recently opened offices in London. OpenAI, the developer behind ChatGPT, opened its first international office here in June. But these offices still employ a very small number of people. What would an influencing strategy that focused on this small but highly influential community look like? How might that shift the debate?
  2. A Citizens’ Assembly for AI. The Citizens’ Assembly model has proved a powerful social technology for engaging citizens with complex and contentious issues and finding a path through. DemocracyNext is starting work in exactly this area. But how will the voice of the more-than-human world be represented? And how can their work be amplified and achieve a higher profile? What can we, as an environmental justice organisation, uniquely contribute?
  3. A conversation about the opportunities and risks of generative AI for environmental justice. A number of international organisations, and in particular the APC (Association for Progressive Communications) and the Engine Room have laid important foundations in mapping out the intersection between climate justice and digital rights. But to what extent are these areas of mutual interest on the radar of the environmental movement in the UK (where we as an organisation can have influence)? And are organisations like ConnectedbyData or Data for Black Lives (is there a UK equivalent?) thinking about the environmental story? Is there a way for us to help bridge this gap and use an intervention – a discussion, a media event – to bring the intersections into the mainstream?
  4. A plan to put nature on the board of big tech. Nature now ‘sits’ on the board of the beauty products business Faith in Nature. What would it mean for nature to have a voice around the table of organisations like OpenAI? What shifts could this unlock?
  5. A coalition focused on disinformation risk in upcoming elections. There is also an urgency about these issues, driven by upcoming elections, here in the UK but also in the US. Previous elections have shown the power of AI-curated content on social media to influence public opinion. Generative AI weaponises this further, shifting from sharing persuasive content with people ‘like’ you to generating personalised content targeted directly at you. And as we know from past experience, much of this targeting is invisible to progressives, whose algorithms don’t show what other people are seeing. Just as there are campaigns like Green New Deal Rising building a movement to define the narrative at the next election, so there are powerful forces looking to use these technologies to harness climate disinformation, outpacing electoral regulations. If we do one thing, now, should it be to build a coalition to sound the alarm on climate disinformation and AI-based electoral ‘grooming’? It might be too late for 2024, but are there lessons we can apply from the idea of ‘inoculating’ people against fake news? Could this work for AI?

Possible leverage points

Few of these ideas are really tangible ‘experiments’ – but they are good examples of developing policy around possible leverage points, the places where we as a small organisation, in collaboration with others, could make the most impact. We do not have the resources or capacity to lead on these proposals, but we would love to see them happening, and we would like to contribute. We also know that we don’t know enough about where these conversations are already happening and could be amplified. How are justice narratives informing the work of the Ada Lovelace Institute, the Alan Turing Institute or other academic-industry coalitions?

If you like these ideas, could you, or someone you know, run with them? Just let us know at [email protected] and please don’t forget to credit us.

This is the third blog post in a series of four, exploring the outputs from our recent design lab looking at influencing opportunities. If you are interested in collaborating on any of these ideas, please get in touch at [email protected].