
"If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavour." So claimed an influential 1958 paper about the future of AI.
Then in 1997, chess grandmaster Garry Kasparov lost to IBM's Deep Blue. Needless to say, the core of human intellectual endeavour remained unpenetrated. Now there are any number of grandmaster-level phone apps around. Looking back, it seems silly: the idea that human intellect could be encapsulated in something as constrained and limited as chess. We forget just how huge a task it was for early AI developers.
We are no longer so shocked when computers beat us at games: when the algorithm AlphaGo beat the world's greatest Go player in 2016, we were surprised but hardly bowled over.
Even though Go is a far more complex game than chess, and even though AlphaGo was a much more "intelligent" player than Deep Blue (it largely taught itself, rather than being taught by humans), we now think of games as something that computers are good at.
But we still don't think that about the "softer" aspects of human thought: emotional intelligence, verbal skills. We don't feel threatened by computers doing things that feel computery, like playing games or recognising images of faces, even though those things didn't feel computery once. But computers having conversations, or writing poetry: that feels different.
Inevitably enough, though, it's on its way. Elon Musk's nonprofit OpenAI has just announced a new toy: a text-writing AI which, if you give it a few lines to start it off, will generate an amazingly plausible passage in the style you gave it.
Its stab at The Lord of the Rings reads an awful lot like a teenager trying to write a sequel to The Lord of the Rings; its essay about the American Civil War sounds like Donald Trump free-associating when he doesn't know the answer to a question about American history. The Guardian's Alex Hern gave it "roses are red, violets are blue" and it came back with genuinely haunting blank verse. (As well as a "weird but bafflingly compelling piece of literary memoir" when Hern tried again.)
When I first read about the OpenAI work, my instinct was that it must be a fake. It seemed so close to the Turing test: actual human language, understanding of context and so on. But it's not fake. The AI was trained on billions of pieces of writing that had been given a positive rating on Reddit, and you can sort of see how its output could be made from stitched-together pieces of other texts.
This is how modern AI works. AlphaGo trained by playing itself billions upon billions of times. Google engineers call this the "unreasonable effectiveness of data". You can solve a lot of messy, complex problems just by throwing trillions of data points at them.
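To make the "stitched-together pieces of other texts" idea concrete, here is a deliberately crude toy: a bigram Markov chain that learns, from a tiny corpus, which word tends to follow which, then generates text by sampling those recorded continuations. This is not how OpenAI's system works (theirs is a large neural network trained on vastly more data), and the function names and corpus below are my own illustration; but it shows, in miniature, how plausible-sounding text can fall out of nothing more than statistics gathered from existing writing.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word in the corpus, every word seen following it."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(model, seed, length=10, rng=None):
    """Extend a seed word by repeatedly sampling one of its recorded followers."""
    rng = rng or random.Random(0)  # fixed seed so the demo is repeatable
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # dead end: no recorded continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A toy corpus standing in for "billions of pieces of writing"
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the", length=6))
```

Every adjacent word pair the generator emits was genuinely seen in the training text; the novelty comes purely from recombining those fragments, which is the intuition behind "you can sort of see how it works".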
But you see how, immediately, I'm doing the same thing we all did when Deep Blue won. We think chess-playing ability must require true intelligence, until a computer does it, whereupon we stop thinking that. We think that verbal skills are truly, unfakeably human, until a computer does it, whereupon (I expect) we stop thinking that. It's just big data, I can see how it works, etc.
The AI pioneer John McCarthy once said: "As soon as it works, no one calls it AI any more." And there is good reason for that. A problem that can only be solved by creating a true general intelligence is known as an "AI-complete" problem. We plainly *haven't* cracked true, human-level general intelligence yet; anything that present-day computers can do must, therefore, be some lesser species of intelligence.
But we keep doing things that used to seem impossible for AI. Facial recognition, for instance, was a dream for a long time; now your phone probably has five apps that can do it for fun, and then add stupid bunny ears. Understanding natural language was near-impossible; now Siri does it with ease. The space of "only humans can do this" is shrinking: human intelligence looks ever more like a collection of tricks and functions, cobbled together.
There's a visual metaphor used by Max Tegmark, the physicist and founder of MIT's Future of Life Institute, which works on making general AI safe: a landscape of human skills, with a rising sea level that represents AI. Some bits of the landscape (chess, arithmetic, Go) are under water. Some bits are on the coast, like driving and translation. Others are on higher peaks, like science or art. But the water level keeps rising. This OpenAI breakthrough represents the waves lapping at the bit of the landscape marked "writing".
Eliezer Yudkowsky, the blogger and AI theorist, once wrote that there will be "no fire alarm" for artificial general intelligence. We could be five years away from it and still not realise; it won't be until it's absolutely about to happen, and perhaps not even then, that everyone acknowledges it's happening.
Enrico Fermi said there was only a 10% chance that nuclear fission was possible; three years later he built the first fission reactor himself. The Wright brothers thought powered flight was 50 years off, two years before they built it. General AI (an AI that can do everything a human can do) probably isn't three years away, but it's not clear whether things would feel any different if it were.
OpenAI hasn't released the code for its new toy, against its usual policy: it thinks it's too open to abuse. It's not so much the generation of fake news articles that would concern me. It's not a shortage of content that stops fake news spreading further; it's that it can only spread among people who lack the skills or the desire to check its references. (That still means it can spread a long way: I once wrote a piece which noted that half of the most-shared scientific stories about autism were false or unevidenced.)
It's more that something like this could render review sites like TripAdvisor useless by swamping them with fake reviews, or turn Twitter into even more of a swamp of lies and hatred than it already is. Even without OpenAI making the code available, something like it will come soon enough.
I don't know how worried we should be. But I think it's important to recognise that the waters of AI are rising. There are real reasons to worry that general AI, when it arrives, could be dangerous: that there is a small, but non-negligible, chance that it will eradicate human life. OpenAI itself was set up, in part, to reduce that risk.
It's probably still decades until the real thing appears. But each year it feels like some bastion of humanity has fallen to AI. It's time to take it seriously, so that when it happens, we're prepared.