6 alternative models of power and ownership in the AI space

How could alternative models of ownership be adopted to democratise AI as a power for good?

Mary Stevens, 23 Jul 2018

As part of our recent exploration into AI and its applications for good, we've asked ourselves: what would a world look like in which AI was used to empower and augment humans, rather than to replace them?

One theme that has come up is ownership. As a grassroots organisation we believe in prioritising the voices of the people involved in an issue, especially those already discriminated against. It's fair to say that the vast majority of people in the world right now couldn't create an AI from scratch, and this creates an automatic power imbalance. AI development currently remains the realm of data scientists and technical experts, a tiny minority of the global population, more likely than not working for big corporations and VC-backed start-ups that can afford to put resources into developing AI programmes.

We've imagined some scenarios in which the power behind AI is shared equally: scenarios in which AI could not only be prevented from exacerbating current inequalities, but could be harnessed as a tool to empower those currently disadvantaged and shift the balance of power.

1. Making Tools Accessible

A few companies are already making forays into the world of accessible AI; Peltarion, for instance, brands itself as 'the WordPress of AI'. The voices best placed to understand their own problems and come up with the right solutions are the ones currently missing from the conversation, so why not develop AI in such a way that anyone with basic computer literacy can use it? Countless small businesses and individuals have found success by building their own webpages or social media accounts, giving themselves a platform alongside huge corporations. Imagine if AI harnessed the creative power of all of society in the same way, rather than that of a tiny minority!

2. Collaborative Design

Alternatively, including a diverse range of people at the design stage of AI would be a great way to make sure that the products developed are representative of and useful for communities, and to help identify problematic elements before release. But it's not enough to involve people in just the initial design. Without that variety of voices being included every step of the way, companies developing AI risk drifting from their original purpose or building features that alienate and disempower. One way to make participation a natural part of the product lifecycle is through open source code and data, inviting users to continue building on the product in their own ways and creating a cycle of constant creativity. Another option would be crowd-sourcing product roadmaps for AI, similar to the Open Food Network, which lets users and suppliers help plan its product development through open online forums.

3. Charging for Our Data

What if our data, which corporations currently use in exchange for our use of their services, were treated as our property and compensated accordingly? Imagine a new industry created around internet firms paying us for the data we generate every day. Rather than passively providing data to private companies and trusting that they will use it wisely, we need to recognise that our data is a valuable commodity. We can change the balance of power by choosing what data and activities to share and who to share them with, and by being paid for the value of that data. This could even help counterbalance the effects of automation on employment, as online data could become a valid source of income.

4. Cooperative Ownership

There is a growing movement in the tech industry towards cooperative models of business, seen particularly in fintech and blockchain. What if this were applied to machine learning? Users could have a say in what AI is being developed and how, even if they aren't the ones building it, by providing training data and feedback and being compensated with ownership and governance of the technology. Following a cooperative business model for AI technologies would give people a say in what's being developed, a stake in its success, and a motivation to make it the best it could possibly be.

5. AI Accountability

One of the key problems of AI is the 'black box' of decision-making, which prevents even the developers themselves from understanding how an AI comes to its conclusions. This has created an ethical conundrum: if you don't understand how a decision is being made, how can you prove an outcome is biased or unethical, or prevent that from happening? If we are to put humans first when developing AI, building frameworks for accountability and for understanding how AI makes decisions is crucial. This should be an essential consideration for all companies developing AI right now, and it can be done: Nvidia, for example, has created a tool that identifies which factors an AI is using to make its decisions, meaning it can be held accountable.
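To give a flavour of what 'identifying which factors' can look like in practice, here is a minimal sketch of one generic explainability technique, permutation importance, using the open source scikit-learn library. To be clear, this is an illustration only, not the Nvidia tool mentioned above: it simply measures how much a model's accuracy drops when each input factor is scrambled, which reveals the factors the model actually relies on.

```python
# A minimal sketch of one common explainability technique: permutation
# importance. This is NOT the Nvidia tool mentioned above, just a generic
# illustration of how a model's reliance on each input factor can be measured.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque 'black box' model on an example dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that factor.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five factors the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this don't open the black box completely, but they do give communities and regulators a concrete starting point for asking why a system made the decision it did.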

6. Public Ownership of Data

In his talk on the geopolitics of artificial intelligence at Nesta's FutureFest, Evgeny Morozov suggested taking all data out of the private sphere and making it government-owned. Corporations could still use the data; they would just have to apply to the government for access first, meaning that in theory it could be properly regulated. It's a really interesting idea, though obviously not foolproof: while in theory governments should be best placed to represent and defend the rights of their citizens, we know from experience that this often isn't the case. Perhaps an independent body with socially representative membership instead?

It’s been really exciting to consider all the ways in which AI could be developed in the future, outside of the typical commercial model. What do you think of these ideas? Do you have any suggestions for models of AI ownership that empower, rather than alienate?
