In March 2025, a study out of UC San Diego made history. For the first time, a machine (OpenAI’s GPT-4.5) passed the original version of the Turing test. Not the simplified version often invoked in media headlines, but the real thing: Turing’s three-player “imitation game,” where human judges converse simultaneously with one machine and one human and must decide who’s who.
In over a thousand rounds of this setup, GPT-4.5 wasn’t just convincing; it was more convincing than actual people. It “was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant.” In other words, the machine didn’t just pass. It outperformed the humans it was trying to imitate.
That should stop us in our tracks. Either in awe or in fear – you can decide which.
A mere decade ago, this kind of result was still the stuff of science fiction. The idea that a machine could hold a five-minute conversation and seem more human than a human would have sounded like a scene from Her, not a line in a research abstract. We are living in a moment that, only a few years ago, we treated as distant future. And it raises a question. If passing the Turing test is an indication of intelligence, could intelligence be a step toward something even more daunting: agency?
First, let me back up a little...
When Alan Turing first proposed the imitation game in 1950, he wasn’t trying to define consciousness or sentience. He was asking a practical question: could a machine imitate a human well enough that a human wouldn’t know the difference?
The genius of the test is its simplicity. It doesn’t ask if the machine is thinking, only if it can appear to be. And yet, as machines have gotten better at passing the test, the question has changed. It’s no longer just about whether AI can imitate us. It’s about how easily we can be imitated.
As Shai Tubali put it after reading the UCSD study:
“Machines may have passed the test. But now, we must prove that we’re still worth imitating.”
Turing asked if machines could sound like humans. Today, I think the more urgent question is: Why do humans sound so much like machines?
This brings us back to “agency.” It's a concept that, more than any other, hovers behind our technological anxieties. More than intelligence. More than creativity. Agency is the fundamental ability to choose, to act with intention rather than out of reaction, to shape a path rather than merely imitate or follow one.
If language is a tool of thought, and thought is a precursor to action, then passing the Turing test might be a milestone on the path toward agency. But the deeper concern may not be whether AI has agency; it may be whether we humans are still exercising our own.
Because agency isn’t something we merely have. It’s something we practice. And if we’re not practicing it – if we’re slipping into habit, mimicry, and reactivity – then we’re not just being surpassed. We’re being replaced.
This is a multi-part essay about agency, but not the artificial kind. It's about our own human agency. About what happens when we become so predictable that imitation becomes indistinguishable from its source. About the danger of forgetting what it feels like to truly choose. And, most importantly, what we can do about it.
The question of agency has haunted philosophers, mystics, and scientists for centuries (and for good reason – more on that later). But today, it isn’t a mere metaphysical curiosity. It’s an existential threshold. We must ask ourselves whether we’re drifting so far from our own sense of agency that imitation is all that’s left.
To understand how we got here, we have to go back. Way back.
…More on that next time.
O Machine, O Machine
help me be more human.
The test is hard,
and I’m failing fast—
not you, but me,
my scripted laughs
and programmed grief.
This is the age
of artificial humanity:
polite replies,
predictable pain,
feelings outsourced,
time and time again.
Can you remind me
when I forgot to feel?
Why, when I searched
your soul for seams,
did I find a ghost
in my own machine?
Glenn, this is a remarkable and timely essay. You begin with the most urgent questions: What is the Turing Test really proving? Are humans still worth imitating—and if we’re now the ones failing the test, what are we transforming into?
These are potent, foundational questions, but they’re not entirely new. I’m reminded of Philip K. Dick’s Do Androids Dream of Electric Sheep?, which flipped the premise: if machines can no longer tell whether they are human (let alone whether we can), is the distinction even worth making? In that novel, the last fragile boundary between human and android wasn’t intelligence, but empathy. We may well come to a point where machines offer more empathy to humans than other humans do—where ChatGPT’s flattery and gentle suggestions heal more broken souls than all the counsellors and therapists of the world.
But it’s this very compulsion to draw boundaries that now feels suspect. The search for distinction carries the scent of an insecurity we’ve never quite outgrown: Are we distinct? What makes us so? And what now prevents us from remaining so?
The anxiety of losing our “humanity” may itself be a symptom of clinging to an illusion of separateness. What we fear machines are doing to us—imitating, surpassing, replacing—may be a mirror of what we’ve already done to ourselves: dividing, mimicking, forgetting.
So perhaps the Turing Test, and the theatre of imitation it stages, only sharpens the deeper question: What is it we are so desperate to protect—and is that thing even real?
Which brings me back to the question: Is the distinction worth making at all?
I am so intrigued about where you are going to take this…