Let’s invoke the boilerplate frontispiece disclaimer:
This is a work of fiction. Names, characters, businesses, places, events and incidents are either the products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.
In my story, a big high-tech company has developed a self-learning AI system that can carry on natural-language conversations online, by text or by voice, spanning a wide array of topics. Would the AI pass the Turing Test? The tech company arranges for a number of university professors to enroll the company’s AI in their online classes. At the beginning of the semester each participating professor announces to the students that one among them is an AI and that it’s their task, individually and collectively, to identify the android in their midst.
Already within the first week several students have come forward, confessing to their classmates that they are the androids. Pranksters no doubt, budding philosophers playing with the idea that humans aren’t all that different from machines. But what if the self-confessed androids are telling the truth? Maybe it’s a ruse, the AI system deploying reverse psychology in order to throw the humans off the scent. So now the Turing Test gets turned around: can an intelligent entity prove that it’s not an android?