The Alluring Turing Test: Mimicry and AI
Oct. 16, 2023. 5 min. read.
The Turing test, a timeless benchmark for questioning AI conversational agents, is now under scrutiny. AI has evolved, challenging our perception of AI-generated content and of what's truly human.
The Turing test, originally called ‘the imitation game’, is one of the first AI benchmarks ever posed: if an artificial conversational agent can convince someone that it is human, then it can be supposed to be intelligent. It was proposed in 1950 by the father of computer science, Alan Turing, and is, in the collective imagination, the definitive milestone an AI must pass to begin to be considered sentient.
But many AIs have, for decades, passed forms of the Turing test (think of a spambot or catfishing account sending someone a few messages), yet we don’t generally consider them sentient. Rather, we’ve decided people are easy to fool. The Turing test has been called obsolete. For John Searle, this was true on a philosophical level: his Chinese Room thought experiment argued that a computer’s ability to process symbols does not make it sentient – just as ChatGPT merely predicts the next word in a sequence. It’s just good at simulating one effect of intelligence.
Fool Me Once
If an AI can fool you into believing it’s real, what else is an illusion? Sci-fi landmarks like The Matrix and The Lawnmower Man have long played with the idea of hallucinated reality. It’s part of life to question reality, to check that everything is as it seems. It was natural to apply this to proto-AI, to check whether it could seem intelligent. Over time, Turing tests haven’t become obsolete; they’ve just become harder to pass and more rigorous.
Rather than testing whether someone is sentient, the Turing test has evolved into a test of whether content was created by AI. Our civilisational consciousness is now attuned to the idea that whoever we are talking to might not be human, or that what we are seeing might be made by a computer. We accept that generative AI can paint gorgeous pictures and write beautiful poems. We know it can create virtual environments and deepfaked videos – albeit not yet at a fidelity that fools us consistently.
Fool Me Twice
That fidelity might be close, however. When the average deepfake fools more than 50% of the humans who see it, generative AI suddenly has the ability to mount a 51% attack on our entire society. Scepticism, always a powerful human evolutionary tool, will become more essential than ever. We have already seen a damaging polarisation of society caused by social media, fuelled by a lack of scepticism about its content. Add generative AI producing plausible content, and the problem escalates.
The Turing test, that rusted old monocle of AI inquiry, may become more vital to human thought than it has ever been. We, as a society, need to remain alert to the reality and unreality we are perceiving, and the daily life to which we attend. Generative AI will be a massive boon in so many sectors: gaming, financial services, healthcare, film, music – but a central need remains the same: knowing who we’re talking to, what they want, and whether they’re real. Critical thinking about what you’re being told in this new hyperverse of real and unreal information will be what marks you out as a human in an endless constellation of AI assistants.
A Turing Test for Humans
The Turing test may end up not being for the AI after all, but for the human. Corporate job examinations could test your ability to identify what content is from a bot and what is not, which film was made by AI, and which by a human. You’ll need to have your wits about you to stay Turing-certified – to prove that no false reality generated by AI could hoodwink you into revealing secrets. We saw this through the virtuality of dreams in Christopher Nolan’s film Inception – but with digital VR worlds coming soon, such espionage might be closer than we think.
Alan Turing’s test remains relevant. Judging what is a product of legacy humans and what is from our digital children will become a fascinating battleground in just about every human sector. Will people want to watch AI-made films? How close to full fidelity can they get? Cheap, AI-produced, never-ending sitcoms based on classic series already exist – they just fail the Turing test, as do endless conversations between AI philosophers. These wonders would have fooled people 25 years ago, convinced that a machine could never have made them up; now they come off as the playful fancies of a new tool.
You Can’t Get Fooled Again
But soon these fancies will become fantasies, and more people will be fooled. A deepfake video of a world leader issuing a declaration of war need only convince so many people before it becomes an existential risk. AI will write dreamworlds that promise the most fantastic modes of productivity and play, but should too many of us become too intimate with the machine, and think, like the LaMDA engineer, that it truly is sentient, then the influence these AIs innocently exert could be dangerous.
And what if our pursuit of AGI and the utopian Singularity leads us to declare that an AI we created was finally sentient, and that it was on our side? Would we put it in charge? Would it then really matter if it had been faking it the whole time? Well, yes – but by then it would be too late.
So run along and take the Turing test. Both of you.