
A recent study, currently awaiting peer review, suggests that OpenAI’s GPT-4.5 model was judged to be human more often than actual humans were, effectively passing the Turing Test, a long-standing benchmark for human-like conversational behaviour. According to the findings, the large language model (LLM) was identified as human 73 percent of the time when instructed to adopt a persona, significantly above the 50 percent expected by random chance, indicating that the test had been passed convincingly.
The study’s lead author, Cameron Jones, a researcher at UC San Diego’s Language and Cognition Lab, said participants performed no better than chance at distinguishing humans from either GPT-4.5 or LLaMa when the persona prompt was used. Jones added that these results suggest LLMs could substitute for humans in short interactions without anyone being able to tell the difference.
He cautioned that this advancement could lead to job automation, enhanced social engineering attacks, and broader societal disruptions.
What is the Turing Test?
The Turing Test, proposed in 1950 by the British mathematician and computer scientist Alan Turing, whose life was later dramatised in the film “The Imitation Game,” has long served as a standard for assessing artificial intelligence. A machine is evaluated on its ability to display intelligent behaviour, typically in text conversation, in such a way that a human judge cannot reliably distinguish it from a real person.
Content retrieved from: https://www.indiatvnews.com/technology/news/new-research-indicates-this-ai-model-outperforms-humans-in-turing-test-evaluation-2025-04-04-983950.