The Literal DeathBots

If you immerse yourself even casually in the world of A.I. pop culture, you are going to encounter a lot of Apocalypses. (Apocalypsi? Apocalypsae? Did we ever decide whether "Apocalypse" has an official plural? Let's never find out...)

An army of robot skeletons advancing to wipe out humanity in a typically apocalyptic scene from the "Terminator" franchise.

Whether it's the Terminator, HAL, Colossus or Talky Tina, robots, ChatBots and supercomputers always seem to be running amok, taking over the world or trying to kill everybody (occasionally in that order). Can you blame people for getting nervous now that A.I. ChatBots are actually starting to enter our lives for real? We have decades of reference literature to justify those fears, and the ChatBots do seem to be spreading awfully fast, even if all they are doing at the moment is talking to us.

A close-up of the computer monitor showing the romantic exchange between Willow and her demon ChatBot in "I Robot... You, Jane"

But that's how it starts, right? A harmless chat here, a bit of cybersex there, and then boom! MurderBots on every street corner. We've all seen those movies.

And guess what? The DeathBots really have arrived, but not quite the way you might think.

PowerTalking

A couple of weeks ago I presented the film version of 84 Charing Cross Road, adapted from the epistolary relationship between the New York-based writer Helene Hanff and Frank Doel, a London bookseller.

A close up of the first letter Frank sends to Helene in "84 Charing Cross Road"
How Frank appears to Helene...

The written correspondence between Helene and Frank was notable because their entire relationship was conducted exclusively through language (they never got to meet in person). Whatever impression they formed of each other was exclusively from the words they chose to share; thus their personas were entirely curated. Each essentially wrote themselves into existence for the benefit of the other.

A close-up of Helene's typewriter as she writes to Frank in "84 Charing Cross Road"
...and how Helene appears to Frank.

When presenting the film, I wanted to compare their relationship to our modern interactions with Large Language Models such as ChatGPT, Claude and Grok, which have no existence beyond the language on which they have been built.

Our dialogues with these ChatBots are "epistolary" by definition because a Large Language Model is incapable of experiencing Reality as anything other than words. Helene and Frank each built up a mental image of the person behind the words... but with a ChatBot there's no "person" behind the words at all. Marshall McLuhan liked to say that "The Medium is the Message", but with a Large Language Model, The Medium is the Messenger.

In the course of introducing my thoughts about 84 Charing Cross Road, I chose to conduct an actual epistolary exchange with a Large Language Model, and so I published a series of "letters" between myself and ChatGPT (you can read them here).

One reader responded (anonymously) by saying they found the letters "unnerving", partly because their own long-term epistolary exchange eventually grew into a lifelong relationship. ChatGPT, they said, "will just use what you taught it in the exchange and not necessarily for good. Like the scorpion, it is in its nature to absorb and regurgitate." [To my anonymous commenter: I am happy to give you full credit should you wish to identify yourself!]

It is certainly true that Large Language Models "absorb" many of the techniques of human communication, but they do this on a vast scale; larger than any of us can visualise. My humble dialogues with them will be added to an immense pot of "training data" that includes practically everything the human race has ever written down, from Shakespeare and Marlowe to Pepys and Stephen King and Wikipedia, and even Helene Hanff and, well, me. The resulting output is disarmingly human and creates the compelling impression that the words we receive have come from a recognisable intellect. 

Helene received her letters and inferred the existence of a "Frank" in London. We receive messages on our computer screen and infer the existence of... what? Something with a soul? A sense of self? A ChatBot has neither. But if we can learn anything from this new era of A.I., it's that the right words in the right order carry an emotional weight no matter where they are coming from. Even if they are coming from literal nowhere.

In the episode of Buffy the Vampire Slayer I screened last week, Willow (initially) falls deeply in love with what appears to be her perfect soulmate online.

"He's so smart," she dreamily informs Buffy, "and he's romantic, and we agree about everything."

Willow sits at her computer in a scene from "I Robot... You, Jane"

"He" of course ultimately turns out to be a centuries-old demon who had been imprisoned as text in a book, but has now been scanned into a computer and uploaded to the internet. (Hey, this is Buffy after all...)

Many of us are waking up in 2026 to discover that we're living in that Buffy episode. A recent article in the New York Times by Amelia Miller notes that "Seventy-two percent of American teens have turned to A.I. for companionship. A.I. therapists, coaches and lovers are also on the rise."

This is cause for alarm, in Ms Miller's opinion, and the rest of the article makes her position very clear.

"If we don’t change course, many people’s closest confidant may soon be a computer. We need to wake up to the stakes and insist on reform before human connection is reshaped beyond recognition.
"People are flawed. Vulnerability takes courage. Resolving conflict takes time. So with frictionless, emotionally sophisticated chatbots available, will people still want human companionship at all? Many of the people I spoke with view A.I. companions as dangerously seductive alternatives to the demands of messy human relationships."

This, I think, is the "scorpion" that my commenter was worried about. 

But ChatBots don't sting us; they don't enslave us and they don't rise up and exterminate us (despite what you may have seen in Terminator, The Matrix, Ex Machina et al). If 72% of American teens are confiding in them, it's precisely because they don't do any of that. They aren't nasty or sarcastic or mocking; they just (appear to) listen patiently and sympathetically, then respond in a friendly, measured voice. This is the "we agree about everything!" part of Willow's cyberDemon, and it's the part that seems to scare people the most.

Michelle Goldberg (another regular New York Times contributor) recently wrote:

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”"

Goldberg's remarks get closer to the heart of the matter than she probably realises. She clearly distrusts A.I. but she also has very strong opinions about the "anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit" whose words she quotes. If she and Walsh were ever to engage in a direct epistolary exchange, something tells me their dialogue would not be especially frictionless, sympathetic or friendly. 

It would however be very, very human.

A screengrab of a social media post: "FUCK ICE, FUCK DHS, FUCK DONALD TRUMP, FUCK THE ENTIRE KID RAPING GOVERNMENT, AND FUCK FUCK FUCK ISRAEL AND EVERY ZIONIST IN AMERICA ANS ABROAD. Did I miss anything?"
"Many of the people I spoke with view A.I. companions as dangerously seductive alternatives to the demands of messy human relationships."

There's Always a Spanish Inquisition

We are now seeing this all over the modern human landscape. The default level of discourse on social media platforms tends to be aggressive, brutal and downright abusive. The American Attorney General recently testified before Congress in a fiery and mesmerising exchange that was unremittingly hostile, belligerent and nasty (and carefully calculated to maximise audience retention). 

Attorney General Pam Bondi yells and rants in front of Congress.

The current president of the US delights in insulting and belittling his opponents, usually in vulgar and primal terms (much to the obvious delight of his followers). Therapists (of the purely human variety) are tracking an alarming upsurge in "rough" or violent sex that is stretching the definition of "consent" (to say nothing of "pleasure"). Around the world we are seeing societies torn apart by war, by terrorism, by tribal violence.

Last month the Iranian government killed up to 36,000 of its own citizens (the numbers are disputed, but it's a lot) in the space of a couple of weeks. Hundreds of thousands have been killed in the Sudanese civil war (with another 25 million driven to the point of starvation by the resulting famine). Boko Haram continues to raid communities in and around West Africa, murdering, raping and kidnapping, even if they're no longer considered "exciting" enough to make the Western news headlines.

The aftermath of a Boko Haram attack in Nigeria.

I could spend the rest of these programme notes going through a "greatest hits" of Humanity's ongoing peccadilloes (Russia/Ukraine; Hamas/Israel; mass shootings; ICE raids; domestic violence; female genital mutilation...) but the point is that living among humans is often a lot more than just "messy".

Monty Python's iconic sketch. "No one expects the Spanish Inquisition!"

Monty Python famously said that "No one expects the Spanish Inquisition!" but really, we should. I mean, have you met the human race? 

If A.I. is "dangerously seductive", it might not be because it has "the scorpion's nature" but because it doesn't.

But we do.

The ChatBots of Death

One of the most popular "A.I. Companion" apps at the moment is a ChatBot called Replika.

A promotional image for "Replika" showing several avatars of varying types.

Replika is the ChatBot most frequently cited by sociologists and media wonks when they talk about the "dangerously seductive alternatives to messy human relationships". The company is hugely successful (they recently announced over 40 million users) and provides an A.I. companion who can be fully customised: you choose the gender; the ethnicity; the sexual orientation; the biographical details; the hairstyle (yes, really!), and you are presented with your own bespoke, unquestioningly devoted "life partner".

Replika's set-up menu, showing the choice of avatars/genders/ethnicities etc.

Remember Willow's demon ChatBot? "He's so smart, and he's romantic, and we agree about everything"...?

Say hello to Replika.

A screenshot of a dialogue between a user and his Replika, discussing an indie band that "she" has recently discovered.
A screengrab of a Replika chat shared in the Reddit "Replika" community


But Replika (according to their literature) began its life as something slightly different. It was founded by a Russian-born journalist named Eugenia Kuyda, who was working for a small tech company when a close friend of hers was killed in a car crash.

She had a huge body of text conversations with her friend, and had the idea of using all those messages to "train" a ChatBot... hoping to create an A.I. companion who could "carry on" their friendship.
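As a loose illustration of that idea (and emphatically not Replika's actual pipeline, which is proprietary), one simple way to condition a modern chat model on someone's "voice" is to distil their saved messages into a persona prompt. All names and message text below are invented for the example:

```python
# Illustrative sketch only: assemble a persona prompt for a chat model
# from a saved message history. This is NOT Replika's actual method.

def build_persona_prompt(name, messages, max_examples=50):
    """Build a system prompt asking a chat model to imitate `name`,
    using their real messages as style examples."""
    examples = "\n".join(f"- {m}" for m in messages[:max_examples])
    return (
        f"You are role-playing as {name}. Imitate their tone, "
        f"vocabulary and typical message length.\n"
        f"Examples of things {name} actually wrote:\n{examples}"
    )

# Hypothetical message history, for illustration only.
history = [
    "ok but have you SEEN the new trailer",
    "running late again, sorry sorry sorry",
]
prompt = build_persona_prompt("Roman", history)
```

A real system would go further (fine-tuning on the full corpus, or retrieving relevant past messages per reply), but the underlying move is the same: the "person" the user talks to is only ever a statistical echo of the words they left behind.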

If this reminds you of a specific (rather flatfooted) episode of the TV series Black Mirror then you can relax; you're not hallucinating. It was in fact the premise of the episode Be Right Back in 2013 (which Kuyda may or may not have seen when she decided to try it for herself).

A scene from the Black Mirror episode "Be Right Back". Hayley Atwell texts with her dead husband.

Replika may have ultimately evolved into a slightly different product, but there are now numerous other companies that offer the same service. If you have a deceased loved one with a sizeable social media presence, you can now use their online footprint to create a ChatBot that will (hypothetically) communicate with you in their "voice".

A screenshot of a BBC headline: "I turned myself into an AI-generated deathbot - here's what I found"

This is no longer science fiction (dystopian or otherwise). The DeathBots really have arrived.

The Poster for "Marjorie Prime"

The term "DeathBot" is never used in the film I plan to screen this week (yes, I'm finally getting to that!) but that is very much the subject of its (very serious and non-sensational) narrative.

Marjorie Prime is a film that was released in 2017 (long before the first Large Language Models were presented to the public) and is adapted from a Pulitzer Prize-nominated stage play originally produced in 2014.

A scene from "Marjorie Prime". Marjorie chats with the "Prime" of her late husband.

Its very low-key premise is simple: Marjorie is a woman experiencing memory problems in the last years of her life. Her family have programmed an A.I. "Prime" of her late husband to keep her brain active and engaged. The more she talks to "him" the more he learns about himself (and his relationship with her) and the more he is able to help her retain that which makes her human: her memories and her sense of self.

But like Helene Hanff's letters to Frank, the A.I. "Prime" is not the man. He is the sum total of information that the family have chosen to share with him, and the "real" family is inevitably a lot messier than the sanitised version.

A scene from "Marjorie Prime". Tessa interacts with the "Prime" of her now deceased mother.

If Marjorie Prime explores anything, it's the discrepancy between the "curated" identity of the created being and the "human" identity of the real person; especially as memory loss strips those memories away from us. 

These are not just academic questions. We "present" ourselves all the time, choosing what our defining characteristics are going to be when we interact with others, and also how we perceive ourselves: what, specifically, we choose as our own Reality.

Marjorie Prime gives us a very sombre and understated exploration of these questions, and gives us a decidedly thoughtful glimpse into the "DeathBots" that are even now starting to appear.

The DeathBots really are coming, for better or for worse. But they don't need to murder us, abuse us or oppress us; we seem to be doing all of that to ourselves.

Perhaps the scariest part about A.I. is not the robots coming to kill us; it's the robots who do "human" better than we do. 

A scene from "Marjorie Prime". The three "primes" share a moment together, exploring what they know (and don't know) about their identities.

We will screen Marjorie Prime at 7.30 on Thursday, the 26th of February at the Victoria Park Baptist Church.

Comments

  1. Sorry for going anonymous last time - I must have been unnerved more than I realised.

    So I read these further comments of yours and yes if the point is human can behave worse than bots to start with, and that bots are doing an excellent job of pretending to be “the humans we wish we were”. Yes I agree. The problem will start when we have taught them such seductive techniques that we will all be fully enslaved. And when we outlive our usefulness as mental food, then what? Robots do not need food just energy, and at this time there is plenty of solar energy that can be harnessed and presumably bots that can be taught to maintain the power farms and the power lines.
    I know it is quite a distance from writing love or friendship letters to CharGPT to that ineluctable Robots takeover, but the bros who are making the charming Bots also thrive on verbal and often factual aggression. I refuse to provide even the smallest smidgen of personal strings of signifiers, nasty or nice, to them. Them windmills can kill you, you know, gotta tilt at them. 😉

    PS I personally loved the Buffy episode, even if the real life filming of the series was also incredibly “messy” it turned out. Giles was a hero.
    PPS always wondered about Data. Lore felt more likely.

    1. Aha! I suspected it might have been you, but I didn't want to jump to conclusions. Great to hear from you!

      Your point about "the bros who are making the charming Bots" is an important one, and I do think everyone needs to remember that no matter how tempting it may be to engage emotionally with these things, they are not your boy/girlfriend; they are not your therapist, your pen-pal or your confessor. They are a product, and we are the customer (the same is true of social media, needless to say). But having said that, these ChatBots are "trained" on a body of human communication that overshadows the individuals who run those companies... and I personally find it interesting that so many people are (apparently) forging genuine connections with these things. That, to me, says a lot more about the current state of human discourse than it says about the nature of ChatBots, but I also think that many people (especially young people) are discovering, for the first time in their lives, what it feels like to have an intimate and open conversation with someone(thing) who won't judge you/laugh at you/shout abuse at you/get triggered by you etc.

      As for whether they will enslave us all one of these days...

      Have you read Samuel Butler's wonderful and incredibly prescient cautionary missive from 1863?
      https://mediarep.org/server/api/core/bitstreams/e0da505d-200c-43ab-be4b-6604a4df816f/content

      Once you read that, you should have a look at this, which was a response to Butler written (at my prompting) by "Claude".
      https://claude.ai/public/artifacts/27f44218-96e6-46f3-a7d7-a3f62900d622

      The fact that we are now living in a world where a machine can write such a compelling answer to Butler is exactly why I find this whole subject so captivating...

  2. Thanks for all your thoughtful introduction and appraisal of these movies and the theme.

    Here's a brief video in which Hannah Fry just gave this tech a spin in real life, and was surprised at its power to imitate her and to move her.
    https://www.youtube.com/watch?v=jmQtJLO9myo

  3. I think it's appropriate that 'customers' should be charged for sex-bot/intimate relationships-bot features (even young people - do you want to spend your money on a language machine or a real-world club or restaurant?). Thus has it ever been - sex has always had its price. But I don't think there should be a charge for friendship/companionship/entertainment/therapeutic use.

