Five things we learned about Artificial Intelligence

Could AI help us deliver our work and could we make a meaningful contribution to the AI debate?

Hugh Knowles · 02 Sep 2019

In an article for the World Economic Forum (https://www.weforum.org/agenda/2018/04/civil-society-charities-artificial-intelligence), Rhodri Davies asked where the NGOs are in the ‘great AI debate’. If the hype is to be believed, the development of this technology will have enormous consequences and shape much of our world. So he was right to call it out.

A few months ago Friends of the Earth started a quick project to explore artificial intelligence. We started from almost complete ignorance, with two questions: 1) Could we meaningfully contribute in this space? 2) How might technological developments help us deliver our work?

We read a lot (links at the bottom for some key articles), we interviewed people, we hosted lunches at AI conferences, we spoke to computer science students and we tried building an image recognition device. We have even been trying to use machine learning to recognise bees…because that is how we roll.
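(For a flavour of how approachable this kind of tinkering has become, here is a minimal sketch of the sort of image classifier you might start from for a ‘bee recogniser’. It reuses a pretrained network via transfer learning; the library choice, folder layout and training settings are illustrative assumptions, not our actual setup.)

```python
# Illustrative sketch only: a tiny "bee recogniser" built by retraining the
# final layer of a pretrained image model (transfer learning).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/bee/..., data/train/not_bee/...
train_data = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet; freeze it and retrain only the last layer
# for our two classes (bee / not bee).
model = models.resnet18(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the data, as a demo
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

The point is less the specific code than how little of it there is: most of the heavy lifting sits inside the pretrained model.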

We are, obviously, still very early in our journey but here are five things we’ve learned so far:

1. Civil society…AI needs you

The pace of change is fast and the cracks are starting to appear. As with much of the rest of the tech industry, artificial intelligence development often lacks diversity and sometimes…well…let’s say good problem definition. Firstly, the predominantly rich, and often male, creators of AI technologies are struggling to walk in the shoes of significant chunks of society — for example, women, ethnic minorities and the poor. Secondly, the vast majority (https://medium.com/mmc-writes/artificial-intelligence-in-the-uk-landscape-and-learnings-from-226-startups-70b9551f3e4c) of activity is in delivering value creation, optimisation and efficiency for the few, not the many. This is supposed to be a revolutionary technology, but it is not being aimed at the biggest problems we face.

There is a considerable danger that we are baking current social injustice and other societal problems into this technological paradigm, or even accelerating them — from the patriarchy to inequality and fractured communities. If you want a microcosm of what this is going to look like in the future, look at Amazon. To paint a simple caricature…a few rich (mostly men) at the top…built on lots of robots and algorithms to optimise consumption, and some workers living on food stamps at the bottom (https://www.newsweek.com/jeff-bezos-amazon-employees-food-stamps-782714). We are in danger of this being replicated across the world.

‘Big tech’ obviously isn’t unaware of this. There are amazing people working inside and outside the industry trying to tackle it. But too often companies are coming up with the equivalent of clothing care instructions for AI, i.e. self-generated voluntary codes of practice on a company-by-company basis. Wash this AI at 40 degrees, they say, and you’ll be OK. There are attempts at global governance and at applying values before development, but you have to wonder if they will be used.

Initially we wondered what on earth Friends of the Earth could do to help. The topic is bewildering and moving quickly, and we are unlikely to become experts. So how could we meaningfully contribute?

What we have done is spend a lot of time thinking about big problems and values. We began to realise we had a lot to offer there. We don’t need to be experts but we urgently need to understand the art of the possible. Then we can embrace some of the developments and take a lead from a values perspective. What is the goal? What kind of society are we trying to create? How could this contribute to the major challenges we face? Only by engaging now do we stand a chance of altering the direction of travel.

This needs to happen fast. You only have to watch the congressional hearings with Mark Zuckerberg (https://video.newyorker.com/watch/highlights-from-mark-zuckerberg-s-congressional-hearings) to see how far behind legislators are in their understanding of current technology and its implications for our society and economy. We cannot afford to be complacent. There is a lot of work to do.

2. There is an awful lot of hot air

But getting involved is difficult, because the field is so hard to navigate. Putting aside the jargon and the enormous technical learning curve, it is tough to get a balanced perspective. There are a lot of people who are giddy with expectation about how developments in artificial intelligence are going to radically reshape our world. In some cases, this borders on religious fervour.

Even a cursory journey into this world leaves you feeling you had better get on the AI bandwagon as soon as possible or be dead in the water.

For a different perspective, Gartner produces a report called the Hype Cycle (https://www.smartinsights.com/managing-digital-marketing/managing-marketing-technology/gartner-hype-cycle-2018-most-emerging-technologies-are-5-10-years-away/) that tracks a range of technologies and the journey they tend to follow as they develop. Currently many artificial intelligence technologies are at, or on the way to, a peak of inflated expectations of Everest-sized proportions. As a result, it is quite hard to understand either the immediate practical possibilities or the longer-term implications.

Another report from Gartner showed that only 4% of companies had deployed some form of AI - and that included chatbot applications. We are clearly a long way from AI applications being fully integrated into our personal and work lives.

 

3. It isn’t big yet…but don’t underestimate it

It would be easy and understandable to get caught up in the hype, or to dismiss it completely. Either would be premature. Here’s why: look past the hype and a different story emerges.

Much of the recent progress (and hype) is based on an approach that has been around for some time, called deep learning or deep neural networks. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence. There is a good primer on this listed at the end.

Geoffrey Hinton, one of the pioneers in the field, first worked on neural nets in the 1970s, but not a great deal of progress was made until the last decade. Whilst the approach is not new, the technology it requires has advanced dramatically — in particular a certain kind of computer chip (thank you, gaming), huge datasets (thank you, internet) and better connections (again…thank you, internet).
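To make ‘deep neural network’ slightly more concrete, here is an illustrative sketch (our own simplification, not taken from the primer) of what one looks like in code: several layers of simple units stacked together and trained end-to-end on data. The layer sizes are arbitrary.

```python
# Purely illustrative: a small "deep" network is just a stack of layers.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # input layer, e.g. a 28x28-pixel image flattened to 784 numbers
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer ("deep" simply means more than one of these)
    nn.Linear(128, 10),               # output layer, e.g. scores for 10 categories
)
```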

There is a school of thought that we are almost at the end of that development cycle. Whilst there will be huge gains from this technique in image recognition, language processing and the like, there are some massive limitations, because the systems that are built are very narrow in their abilities: the system that beat the Go champion cannot play chess, let alone do rudimentary tasks outside that paradigm. We have a lot of so-called narrow intelligence.

But this narrow intelligence is producing incredible results. Google made huge progress in its translate service early last year when it switched to deep learning systems. DeepMind also used deep learning to cut the energy used to cool Google’s data centres by 40%, which is staggering.

And if there is money in the data, expect big progress soon.

Huge data sets are like fuel. Expect anywhere you have, or can build, large data sets - particularly those that can be monetised - to go bananas. The obvious issue here is that many of the most important things in human life are not easily put into nice neat data sets. Cities have massive data sets on public transport that are worth something, but the data on walking is less available (mobile phone companies have some) and harder to monetise. As already stated, expect big progress in areas that might not seem like the most important parts of life.

4. Putting humans in the loop

Reading the news, it can be easy to think that AI and humans do not mix well. From robots taking our jobs to an all-knowing intelligence killing us all. At Friends of the Earth we quite like people and want to find the positive narratives that don’t involve us all uploading to a super intelligence.

Where are the futures where AI considerably improves the lives of many, or helps us do things we never could before, without diminishing our humanity in some way? Where is the human-centred future? Bucketloads of money are being invested in optimising humans out of the system, e.g. the digital assistant that can phone restaurants for you or the robot that can ‘learn’ to clean your factory.

Just when it was all looking a bit glum we read an article by Nicky Case called How to Become a Centaur (https://jods.mitpress.mit.edu/pub/issue3-case). This hit a nerve. The article challenged us to think about how we maximise the potential of human and machine collaboration for the benefit of society and the environment. We are humans and we can design a human-shaped future, as Sandy Pentland from MIT has said (https://www.youtube.com/watch?v=lpvZfdW31t0).

The most compelling aspect of the ‘centaur approach’ was the possibility that it is accessible. In human and machine collaboration, what matters is not how amazing the human is, or how powerful the computer, but how good the process that joins them is. This has big implications for the accessibility of this technology. Potentially, we can build tools that can be accessed by many, do not cost a lot, will change people’s lives and, importantly, will give them agency.

5. There are big issues around power and ownership

If you were asked whether you would rather Google or the government owned huge data sets that reveal everything about you and are used to make decisions about your life, what would you say? What about Facebook? What about Palantir (https://www.palantir.com/about/)? (If you don’t know who Palantir are, then you need to.) Obviously, Google and Facebook (and Palantir) already have that data and it is already affecting your life. We need some powerful alternatives that put people back in control of both the data and the technology that uses it. What are the alternative ownership models? We explored some of these in a separate piece: https://medium.com/@disruptiveinnovationteam/5-alternative-models-of-power-ownership-in-the-ai-space-cc30af217533.

Now what…

It has been a dizzying journey. We are going to keep reading, and we have a few pilots and experiments that will hopefully tell us more about how we might engage with this technology. If you are interested in the centaur approach to tackling social and environmental issues and building communities, then get in touch.

Some links to articles…
