WWF, originally established in 1961 to preserve wildlife, has a long pedigree in its efforts to save nature and the planet. However, it is also continually renewing itself, and in contrast to some other organizations in the space it now has a strong focus on the potential of technology to assist its mission.
Among other initiatives, WWF has founded Panda Labs, a decentralized innovation ecosystem designed to experiment and create positive impact at scale. One example is OpenSC, built in collaboration with BCG Digital Ventures, which uses blockchain-style technologies to create transparent and ethical supply chains.
Panda Labs recently ran The Greenhouse Sessions, a series of events in Sydney to help provoke broader thinking about the role of technology in creating a sustainable planet.
I was very pleased to be invited to speak at two of the events. The first session was titled "Should we turn decision-making over to robots?"
The format was a provocative story told by a leading actor, followed by a panel discussion.
The events have now been launched as a podcast series, which you can listen to here, beginning with the first event.
Here is a rough transcript of my opening comments in the podcast, opening the conversation after the reading of the framing story, discussing the state of AI, and the implications, dangers, and possibilities.
We just heard this story set in the future where robots are ubiquitous and AI is making a lot of decisions for us. How far are we off this?
Having robots ubiquitous and making decisions for us is already happening now. A couple of distinctions are worth making. AI is artificial intelligence, the ability to do things; a robot is usually the physical manifestation of that, something which gets around and does things, whereas AI can control software or ideas or thoughts or content.
The other key distinction is between specific-domain AI or robots, and general-purpose robots or what is called artificial general intelligence (AGI). If we look at AI and robots today, we have robots which do our dishes for us: these are called dishwashers. They are robots, physical things which we all have in our houses and which can do a specific task.
If we look at where that goes and say we actually want a general-purpose household robot, one which can, for example, make the bed, put the linen away, and load and unload the dishwasher, that's quite a way off yet. However, we already have many decisions being made for us today.
For example, parole decisions in the United States are now commonly devolved to an algorithm. We don't have a human panel to assess whether somebody is fit to re-enter society; that's done by an algorithm. California just last week announced that it will replace cash bail, putting up money as a guarantee that you're not going to scarper while awaiting trial, with an algorithm that makes that decision. Credit and lending decisions at financial institutions are made by algorithms. A lot of decisions today are already made by AI. But in terms of moving to an artificial general intelligence that makes far broader decisions, that's still some way in the future.
That would be more like the device that we were hearing about in the story. So that's the general intelligence. How far off do you think we are from that?
When looking at the future, one of the biggest uncertainties is the speed at which things progress. That's been interesting with AI, because it started off essentially as a modern science in 1956, and it progressed substantially; then there were two so-called AI winters, when there was basically not much progress for a long time. Since 2011, we've had extraordinary growth with a whole array of new algorithms, particularly deep neural networks and a number of other innovations, some of them coming literally this year, which are accelerating the growth of AI. We can't necessarily know whether those advances will continue, but certainly if we look at where we're going, within, let's say, on the order of 20 years, we might be able to envision something like the device in that story.
Wow. So could this rise of AI increase polarization in the economy and society?
Yes. If we look at the world generally, one of the biggest trends and concerns and risks is increasing polarization. That's not to say it's inevitable, but certainly it is possible, and AI's acceleration has the potential to augment it. First of all, through the companies which are developing AI the most and have the resources to do so. There's a ballpark of $50 billion being spent every year on developing AI, part of it from startups, and a big chunk of it from Google, Facebook, Amazon, Apple and so on. The potential is that they start to have essentially a monopoly on the AI which drives economic value. There's a big divide between those that have the capacity, either through resources or talent, to develop extraordinary AI, and all of the companies left behind.
There's also the potential polarization between nations. Vladimir Putin has said, "Whoever wins at AI rules the world." We're seeing this battle between nations for AI, but there is also potential for individual polarization. One of the big trends over the last decades has been polarization in income and wealth, and in the gap between low-skilled and high-skilled jobs. AI risks augmenting that, where those that have AI skills, for example, can command extraordinary salaries, while those that don't have relevant skills are marginalized, left behind, and have no work. But I just have to add, this is not inevitable.
Being aware of these developments means that we can plan and do things to mitigate them. There is extraordinary positive potential for AI, not least in being able to solve issues around climate, around the environment, around social structure, around things which can potentially give us a world in which we can feel far more fulfilled in our work. But we have to be aware of these issues and design for a positive future today.