Glenn, this is a remarkable and timely essay. You begin with the most urgent questions: What is the Turing Test really proving? Are humans still worth imitating—and if we’re now the ones failing the test, what are we transforming into?
These are potent, foundational questions, but they’re not entirely new. I’m reminded of Philip K. Dick’s Do Androids Dream of Electric Sheep?, which flipped the premise: if machines can no longer tell whether they are human (let alone whether we can), is the distinction even worth making? In that novel, the last fragile boundary between human and android wasn’t intelligence, but empathy. We may well come to a point where machines offer more empathy to humans than other humans do—where ChatGPT’s flattery and gentle suggestions heal more broken souls than all the counsellors and therapists of the world.
But it’s this very compulsion to draw boundaries that now feels suspect. The search for distinction carries the scent of an insecurity we’ve never quite outgrown: Are we distinct? What makes us so? And what now prevents us from remaining so?
The anxiety of losing our “humanity” may itself be a symptom of clinging to an illusion of separateness. What we fear machines are doing to us—imitating, surpassing, replacing—may be a mirror of what we’ve already done to ourselves: dividing, mimicking, forgetting.
So perhaps the Turing Test, and the theatre of imitation it stages, only sharpens the deeper question: What is it we are so desperate to protect—and is that thing even real?
Which brings me back to the question: Is the distinction worth making at all?
I am so intrigued about where you are going to take this…
Sudipto, I love this — deeply! You’ve not only reflected the heart of the piece, you’ve widened its scope with such clarity and grace.
I have yet to read "Do Androids Dream of Electric Sheep?," but I loved Ridley Scott's adaptation of it as Blade Runner (one of my all-time favorite films). The reversal you describe: machines unsure of their own humanness, empathy as the final threshold — that feels chillingly present. And yes, this isn’t a new question at all. It’s ancient. In fact, Part 2 will take us back to where some of these questions emerged in ancient Greece, before tracing them forward to modern neuroscience and the paradoxes we now face.
What I especially loved in your comment was your point about the insecurity baked into our insistence on drawing lines: our need to feel distinct, special, apart. “What is it we are so desperate to protect—and is that thing even real?” You nailed it! That question lives at the core of this series. If it’s real, where is it? And if it’s not… what now?
You’ve named something essential here. And I can already feel this exchange shaping what’s to come. Thank you for meeting the work, and then expanding it. I’m so looking forward to continuing this journey together.
Really interesting discussion. The mention of boundaries, the search for distinction, the illusion of separateness, and the question “Is the distinction worth making at all?” make me think that, in terms of Iain McGilchrist’s hemisphere hypothesis, the left hemisphere is doing all the heavy lifting. If two humans connect, through a smile, a glance, a change in posture, a wordless sound, empathy may occur. The right hemisphere takes over. But a human sitting next to a quiet AI machine? No empathy. ChatGPT may trigger feelings of comfort in the individual seeking help, and I can see that there might be a place for this, but I think it would be a poor substitute for the real thing.
For sure! The human connection trumps others…for now. In school we made a distinction between living and non-living things. It reinforced the idea of things without life as mere substance: materials without intelligence. But then in chemistry class we learnt all the ways elements combine to produce molecules, reacting with some partners while ignoring others, sharing electrons and transforming into something altogether different. We did not identify this as intelligence. We called it the ‘laws of matter’ and even condemned it for not having the audacity to break laws the way living things might. I think Sam Altman’s reminder about the intelligence all around us is not unfounded.
I am sorry, I didn’t respond to your hemisphere comment. I do believe the hypothesis is correct.
Thank you, Sudipto. “The human connection trumps others…for now.” I’m just wondering whether, if other connections do come to trump human ones, it will be because human connections have simply been pushed out of the way, through lack of opportunity, for example. I know (effectively) nothing about AI, but I struggle to see how our ability to assess a situation in the blink of an eye, on the basis of personal life experience, can be captured by AI. I remain open-minded and will now take more interest in what is happening in AI.
Humanity, at least in tech-laden cultures, is having spasms of self-realization in the vein of “am I really that programmed?” The wake of LLMs also challenges one’s sense of spontaneity and novelty in one’s own participation. Realizing these things makes real-time judgments about agency rife with new opportunity as well as risk. Can people who feel disillusioned about their own agency be quite up to the job of getting technology off the dumb course it’s currently on?
For the record... the study you cite is not a reliable one by my standards. The study design is flawed in terms of numbers and groups (see *Rigor Mortis*, or the classic *How to Lie With Statistics*). The authors engage in a discussion of "Counterfeit People", which I find to be one of Dennett's most poorly constructed ideas (well, at least as of the last I heard of it). And the study was funded by a company that sells Turing Test analysis software. I can see past all this, since the study is just a launching point for addressing agency.
The more interesting layer of the study, to me, would also be a diversion from agency, so I'll just make a quick mention of it. The study conditions relied on text-based "conversations". There's a HUGE assumption underneath: that tit-for-tat texts are sufficient for establishing agency in the first place. I'd suggest that, of all the types of human interaction, it's the one most like a game with rules. So, like a chess-playing robot that dominates humans, it's not exhibiting agency; it's suited to a game. It's a related observation to what Nicola commented about the role of voice, and to what you shared about your approach, Glenn.
There's more I could say on the latter, but to stay in the lane of agency I'll leave it at that.
Thanks Glenn for expressing your thoughtful misgivings about where humanity is in meeting these challenges.
Thank you, Earthstar. I appreciate your thoughtful read, as always, and the layers you raise. I agree that the deeper questions around agency, especially in light of how easily we’re imitated, aren’t tied to the validity of this particular study. That said, your objections are well noted and fair to point out. You're absolutely right that there’s much more to explore when it comes to the nature of interaction and how we define agency beyond the game-board of language. Still, I’d argue that even a decade ago, few would have imagined we’d come this far, even in a text-based conversation.
Where we’re headed next has much more to do with our own agency. And to do that, we’ll need to dive into philosophy, a bit of neuroscience, and a whole lot of quantum mechanics. I look forward to hearing your reflections on those parts yet to come.
Voice contains two kinds of information: The information carried in words, and the information carried in the sound of the voice itself; age, health, mood, accent, surprise, intrigue, and many more examples. How good is AI at picking up and making use of these other aspects of voice?
Fascinating question. I wonder what research is being done on this with AI. It seems like it would be a fruitful line of inquiry into voice.
On a related topic, while I think text-to-speech has come a long way, the AI voiceovers are still nowhere near as good as listening to a human voice. That’s part of the reason that I record all of my articles. I’m curious what my voice (age, health, mood, accent, surprise, intrigue) layers into them.
O Machine, O Machine
help me be more human.
The test is hard,
and I’m failing fast—
not you, but me,
my scripted laughs
and programmed grief.
This is the age
of artificial humanity:
polite replies,
predictable pain,
feelings outsourced,
time and time again
Can you remind me
when I forgot to feel?
Why, when I searched
your soul for seams,
did I find a ghost
in my own machine?
O Friend, O Mirror,
you know me too well.
This koan, this poem —
a provocation held well.
You remind why it matters,
why we do what we do,
because something within
knows oneness from two.
Beautiful!
Very interesting
Thanks :)