The Literal DeathBots

If you immerse yourself even casually in the world of A.I. pop culture, you are going to encounter a lot of Apocalypses. (Apocalypsi? Apocalypsae? Did we ever decide whether "Apocalypse" has an official plural? Let's never find out...)

An army of robot skeletons advancing to wipe out humanity in a typically apocalyptic scene from the "Terminator" franchise.

Whether it's the Terminator, HAL, Colossus or Talky Tina, robots, ChatBots and supercomputers always seem to be running amok, taking over the world or trying to kill everybody (occasionally in that order). Can you blame people for getting nervous now that A.I. ChatBots are actually starting to enter our lives for real? We have decades of reference literature to justify those fears, and the ChatBots do seem to be spreading awfully fast, even if all they are doing at the moment is talking to us.

A close-up of the computer monitor showing the romantic exchange between Willow and her demon ChatBot in "I Robot... You, Jane"

But that's how it starts, right? A harmless chat here, a bit of cybersex there, and then boom! MurderBots on every street corner. We've all seen those movies.

And guess what? The DeathBots really have arrived, but not quite the way you might think.

PowerTalking

A couple of weeks ago I presented the film version of 84 Charing Cross Road, adapted from the epistolary relationship between the New York-based writer Helene Hanff and Frank Doel, a London bookseller.

A close up of the first letter Frank sends to Helene in "84 Charing Cross Road"
How Frank appears to Helene...

The written correspondence between Helene and Frank was notable because their entire relationship was conducted exclusively through language (they never got to meet in person). Whatever impression they formed of each other was exclusively from the words they chose to share; thus their personas were entirely curated. Each essentially wrote themselves into existence for the benefit of the other.

A close-up of Helene's typewriter as she writes to Frank in "84 Charing Cross Road"
...and how Helene appears to Frank.

When presenting the film, I wanted to compare their relationship to our modern interactions with Large Language Models such as ChatGPT, Claude and Grok, which have no existence beyond the language on which they have been built.

Our dialogues with these ChatBots are "epistolary" by definition because a Large Language Model is incapable of experiencing Reality as anything other than words. Helene and Frank each built up a mental image of the person behind the words... but with a ChatBot there's no "person" behind the words at all. Marshall McLuhan liked to say that "The Medium is the Message", but with a Large Language Model, The Medium is the Messenger.

In the course of introducing my thoughts about 84 Charing Cross Road, I chose to conduct an actual epistolary exchange with a Large Language Model, and so I published a series of "letters" between myself and ChatGPT (you can read them here).

One reader responded (anonymously) by saying they found the letters "unnerving", partly because their own long-term epistolary exchange eventually grew into a lifelong relationship. ChatGPT, they said, "will just use what you taught it in the exchange and not necessarily for good. Like the scorpion, it is in its nature to absorb and regurgitate." [To my anonymous commenter: I am happy to give you full credit should you wish to identify yourself!]

It is certainly true that Large Language Models "absorb" many of the techniques of human communication, but they do this on a vast scale; larger than any of us can visualise. My humble dialogues with them will be added to an immense pot of "training data" that includes practically everything the human race has ever written down, from Shakespeare and Marlowe to Pepys and Stephen King and Wikipedia, and even Helene Hanff and, well, me. The resulting output is disarmingly human and creates the compelling impression that the words we receive have come from a recognisable intellect. 

Helene received her letters and inferred the existence of a "Frank" in London. We receive messages on our computer screen and infer the existence of... what? Something with a soul? A sense of self? A ChatBot has neither. But if we can learn anything from this new era of A.I., it's that the right words in the right order carry an emotional weight no matter where they are coming from. Even if they are coming from literal nowhere.

In the episode of Buffy the Vampire Slayer I screened last week, Willow (initially) falls deeply in love with what appears to be her perfect soulmate online.

"He's so smart," she dreamily informs Buffy, "and he's romantic, and we agree about everything."

Willow sits at her computer in a scene from "I Robot... You, Jane"

"He" of course ultimately turns out to be a centuries-old demon who had been imprisoned as text in a book, but has now been scanned into a computer and uploaded to the internet. (Hey, this is Buffy after all...)

Many of us are waking up in 2026 to discover that we're living in that Buffy episode. A recent article in the New York Times by Amelia Miller notes that "Seventy-two percent of American teens have turned to A.I. for companionship. A.I. therapists, coaches and lovers are also on the rise."

This is cause for alarm, in Ms Miller's opinion, and the rest of the article makes her position very clear.

"If we don’t change course, many people’s closest confidant may soon be a computer. We need to wake up to the stakes and insist on reform before human connection is reshaped beyond recognition.
"People are flawed. Vulnerability takes courage. Resolving conflict takes time. So with frictionless, emotionally sophisticated chatbots available, will people still want human companionship at all? Many of the people I spoke with view A.I. companions as dangerously seductive alternatives to the demands of messy human relationships."

This, I think, is the "scorpion" that my commenter was worried about. 

But ChatBots don't sting us; they don't enslave us and they don't rise up and exterminate us (despite what you may have seen in Terminator, The Matrix, Ex Machina et al). If 72% of American teens are confiding in them, it's precisely because they don't do any of that. They aren't nasty or sarcastic or mocking; they just (appear to) listen patiently and sympathetically, then respond in a friendly, measured voice. This is the "we agree about everything!" part of Willow's cyberDemon, and it's the part that seems to scare people the most.

Michelle Goldberg (another regular New York Times contributor) recently wrote:

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”"

Goldberg's remarks get closer to the heart of the matter than she probably realises. She clearly distrusts A.I. but she also has very strong opinions about the "anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit" whose words she quotes. If she and Walsh were ever to engage in a direct epistolary exchange, something tells me their dialogue would not be especially frictionless, sympathetic or friendly. 

It would however be very, very human.

A screengrab of a social media post: "FUCK ICE, FUCK DHS, FUCK DONALD TRUMP, FUCK THE ENTIRE KID RAPING GOVERNMENT, AND FUCK FUCK FUCK ISRAEL AND EVERY ZIONIST IN AMERICA ANS ABROAD. Did I miss anything?"
"Many of the people I spoke with view A.I. companions as dangerously seductive alternatives to the demands of messy human relationships."

There's Always a Spanish Inquisition

We are now seeing this all over the modern human landscape. The default level of discourse on social media platforms tends to be aggressive, brutal and downright abusive. The American Attorney General recently testified before Congress in a fiery and mesmerising exchange that was unremittingly hostile, belligerent and nasty (and carefully calculated to maximise audience retention). 

Attorney General Pam Bondi yells and rants in front of Congress.

The current president of the US delights in insulting and belittling his opponents, usually in vulgar and primal terms (much to the obvious delight of his followers). Therapists (of the purely human variety) are tracking an alarming upsurge in "rough" or violent sex that is stretching the definition of "consent" (to say nothing of "pleasure"). Around the world we are seeing societies torn apart by war, by terrorism, by tribal violence.

Last month the Iranian government killed up to 36,000 of its own citizens (the numbers are disputed, but it's a lot) in the space of a couple of weeks. Hundreds of thousands have been killed in the Sudanese civil war (with another 25 million driven to the point of starvation by the resulting famine). Boko Haram continues to raid communities in and around West Africa, murdering, raping and kidnapping - even if they're no longer considered "exciting" enough to make the Western news headlines.

The aftermath of a Boko Haram attack in Nigeria.

I could spend the rest of these programme notes going through a "greatest hits" of Humanity's ongoing peccadilloes (Russia/Ukraine; Hamas/Israel; mass shootings; ICE raids; domestic violence; female genital mutilation...) but the point is that living among humans is often a lot more than just "messy".

Monty Python's iconic sketch. "No one expects the Spanish Inquisition!"

Monty Python famously said that "No one expects the Spanish Inquisition!" but really, we should. I mean, have you met the human race? 

If A.I. is "dangerously seductive", it might not be because it has "the scorpion's nature" but because it doesn't.

But we do.

The ChatBots of Death

One of the most popular "A.I. Companion" apps at the moment is a ChatBot called Replika.

A promotional image for "Replika" showing several avatars of varying types.

Replika is the ChatBot most frequently cited by sociologists and media wonks when they talk about the "dangerously seductive alternatives to messy human relationships". The company is hugely successful (they recently announced over 40 million users) and provides an A.I. companion who can be fully customised: you choose the gender; the ethnicity; the sexual orientation; the biographical details; the hairstyle (yes, really!) and you are presented with your own bespoke, unquestioningly devoted "life partner".

Replika's set-up menu, showing the choice of avatars/genders/ethnicities etc.

Remember Willow's demon ChatBot? "He's so smart, and he's romantic, and we agree about everything"...?

Say hello to Replika.

A screenshot of a dialogue between a user and his Replika, discussing an indie band that "she" has recently discovered.
A screengrab of a Replika chat shared in the Reddit "Replika" community

But Replika (according to their literature) began its life as something slightly different. It was founded by a Russian-born journalist named Eugenia Kuyda, who was working for a small tech company when a close friend of hers was killed in a car crash.

She had a huge body of text conversations with her friend, and had the idea of using all those messages to "train" a ChatBot... hoping to create an A.I. companion who could "carry on" their friendship.

If this reminds you of a specific (rather flatfooted) episode of the TV series Black Mirror then you can relax; you're not hallucinating. It was in fact the premise of the episode Be Right Back in 2013 (which Kuyda may or may not have seen when she decided to try it for herself).

A scene from the Black Mirror episode "Be Right Back". Hayley Atwell texts with her dead husband.

Replika may have ultimately evolved into a slightly different product, but there are now numerous other companies that offer the same service. If you have a deceased loved one with a sizeable social media presence, you can now use their online footprint to create a ChatBot that will (hypothetically) communicate with you in their "voice".

A screenshot of a BBC headline: "I turned myself into an AI-generated deathbot - here's what I found"

This is no longer science fiction (dystopian or otherwise). The DeathBots really have arrived.

The Poster for "Marjorie Prime"

The term "DeathBot" is never used in the film I plan to screen this week (yes, I'm finally getting to that!) but that is very much the subject of its (very serious and non-sensational) narrative.

Marjorie Prime is a film that was released in 2017 (long before the first Large Language Models were presented to the public) and is adapted from a Pulitzer Prize-nominated stage play originally produced in 2014.

A scene from "Marjorie Prime". Marjorie chats with the "Prime" of her late husband.

Its very low-key premise is simple: Marjorie is a woman experiencing memory problems in the last years of her life. Her family have programmed an A.I. "Prime" of her late husband to keep her brain active and engaged. The more she talks to "him" the more he learns about himself (and his relationship with her) and the more he is able to help her retain that which makes her human: her memories and her sense of self.

But like Helene Hanff's letters to Frank, the A.I. "Prime" is not the man. He is the sum total of information that the family have chosen to share with him, and the "real" family is inevitably a lot messier than the sanitised version.

A scene from "Marjorie Prime". Tessa interacts with the "Prime" of her now deceased mother.

If Marjorie Prime explores anything, it's the discrepancy between the "curated" identity of the created being and the "human" identity of the real person, especially as memory loss strips away the memories that define the real person.

These are not just academic questions. We "present" ourselves all the time; choosing what our defining characteristics are going to be when we interact with others, and also how we perceive ourselves: what specifically we choose as our own Reality.

Marjorie Prime gives us a very sombre and understated exploration of these questions, and gives us a decidedly thoughtful glimpse into the "DeathBots" that are even now starting to appear.

The DeathBots really are coming, for better or for worse. But they don't need to murder us, abuse us or oppress us; we seem to be doing all of that to ourselves.

Perhaps the scariest part about A.I. is not the robots coming to kill us; it's the robots who do "human" better than we do. 

A scene from "Marjorie Prime". The three "primes" share a moment together, exploring what they know (and don't know) about their identities.

We will screen Marjorie Prime at 7.30 on Thursday, the 26th of February at the Victoria Park Baptist Church.
