The most dangerous idea ever is that humans will be vastly transcended by AI

The advent of next-generation AI has brought into sharp focus one of the biggest divides of all: our perception of humanity’s place in the Universe.

I constantly read people arguing that humans will be to AI as animals or insects are to humans. They envision a future in which AI's relentless advancement transcends every faculty we possess.

The countervailing stance is that human potential is unlimited. We have deliberately and consistently increased our capabilities and knowledge, and now we will use the tools we have created to continue to advance.

The rise of AI has intensified this debate, leading us to question: Are we, as humans, inherently limited or unlimited?

My thinking on this was clarified and crystallized by reading David Deutsch's seminal work "The Beginning of Infinity", in which he lays out an extensive and powerful case for human knowledge and abilities being unbounded.

He argues that humans are "universal explainers" who have created and learned to use scientific principles to indefinitely improve our theories, knowledge, and understanding.

At any point our understanding is limited and incorrect, but by continuing to apply the same principles we will consistently and indefinitely advance our knowledge.

He writes:

The astrophysicist Martin Rees has speculated that somewhere in the universe ‘there could be life and intelligence out there in forms we can’t conceive. Just as a chimpanzee can’t understand quantum theory, it could be there are aspects of reality that are beyond the capacity of our brains.’ But that cannot be so. For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’

In terms of computational repertoire, our computers – and brains – are already universal (see Chapter 6). But if the claim is that we may be qualitatively unable to understand what some other forms of intelligence can – if our disability cannot be remedied by mere automation – then this is just another claim that the world is not explicable. Indeed, it is tantamount to an appeal to the supernatural, with all the arbitrariness that is inherent in such appeals, for if we wanted to incorporate into our world view an imaginary realm explicable only to superhumans, we need never have bothered to abandon the myths of Persephone and her fellow deities.

So human reach is essentially the same as the reach of explanatory knowledge itself.

Humanity's track record is testament to our ability to develop knowledge: in just the last few thousand years we have moved from superstition to a deep understanding of our Universe, from sub-atomic particles to the structure of the cosmos, all from our vantage point on a planet in a far-flung galaxy.

Indeed, the pace of human scientific progress and knowledge creation is not just fast, it is accelerating, by just about any measure you choose.

People compare the exponential technologies underlying AI with our finite cognition and conclude we will be left behind. This reasoning is so deeply flawed that I will address it in more detail in another post.

In short, humans are very clearly not static. We co-evolve with the technologies we have created, constantly extending the boundaries of what it means to be human.

It is possible that to keep pace we will need to augment ourselves with brain-computer interfaces and other cognitive amplification technologies. As I wrote in my 2002 book Living Networks,

the real issue is not whether humans will be replaced by machines, because at the same time as computing technology is progressing, people are merging with machines. If machines take over the world, we will be those machines.

Believing we’re doomed to be dwarfed by super-intelligent AI is essentially betting against humanity.

It is giving up. It is a lack of faith in humanity. It is a surrender to superstition, to the idea that there is something undefined and unknowable beyond our capacity to imagine or understand.

The essence of being human is this: We face and solve problems and we progress. The concept of things we cannot understand goes against everything that humans have demonstrated themselves to be.

In an era defined by the rapid rise of AI, it is crucial that we maintain faith in the human capacity for limitless growth and expansion. The unfolding story of our species is not one of succumbing to imagined limits but one of constantly redefining what is possible.

Our future is inevitably one of Humans + AI – our species amplified by the intelligences we have created.

We will not be subsidiary players in this union. Our ability to understand and grow and frame what this incredible pairing can achieve is unlimited.

At this point we have no solid evidence of how this will turn out. It is your choice:

Believe humans are intrinsically limited and that we will be as cockroaches to superior intelligences.

Or bet on a species intelligent and adaptable enough to have created everything we have so far, and on our ability to continue to progress and grow, harnessing the power of our inventions.