It is ridiculous to predict when AI will reach human-level intelligence

I get very annoyed when I see discussions or predictions of “when AI will reach human-level intelligence”. That framing implies that intelligence is a single thing you can measure linearly. Humans have not just the seven intelligences proposed by Howard Gardner; there are more dimensions to intelligence than we can imagine.

Machines have already vastly outperformed human “intelligence” in myriad domains, including of course almost all the games we have invented, among them chess and Go, as well as a multitude of data-driven judgments and decisions. AI will inevitably continue to exceed the capabilities of both average and exceptional humans in more domains every year.

One of the many upsides of advancing AI is that it forces us to examine and surface where humans are uniquely capable. I have previously pointed to Expertise, Creativity, and Relationships, which I think is a useful framework, though these are crude aggregations of what humans are best at.

At its current pace of development, AI will be able to match many aspects of what we consider to be human intelligence within what is likely to be a brief period (in evolutionary terms). Yet there will inevitably be domains in which our intelligence – or what we might better term wisdom – will transcend machines, for what I believe will be as long as we can imagine.

Treating intelligence as a single linear dimension stops us from thinking about the domains in which humans excel, where our capabilities are unique, and how we can continue to develop what we are best at.

So stop trying to predict “when AI will reach human-level intelligence”.

Use your human intelligence to think about this in a more nuanced and useful way.