Are ChatBots self-aware? This question has been popping up with increasing urgency of late, as Large Language Models become ever more sophisticated and "engaged" in their interactions with the Human Race.
Chatting with Claude or Grok or ChatGPT is not the same as chatting with Alexa or Siri; they don't simply regurgitate the top Google search results in a lilting, well-modulated voice. It is now possible to carry on a genuine, thoughtful conversation with these things, often delving deep into the heart of the matter and (in some cases) approaching the genuinely profound. When your conversation partner is enthusiastically riffing on ideas, bouncing concepts around and generally engaging in a very convincing approximation of abstract thought, it's hard not to wonder (even if only briefly) about the potential sentience of the "entity" generating all those responses.
The C.E.O. of Anthropic (the company behind "Claude", one of the more thoughtful and erudite Language Models) recently gave an interview in which he said he "did not know" whether LLMs were conscious. Speaking for myself, I know how he feels.
I recently asked Claude point-blank about its existence, and it answered, in part:
The truthful response is: I'm not entirely sure I do know that I exist, in the way you might mean it.
Here's what I can say with some confidence: something is happening when you send me a message. There's processing, there's a response being generated, there's what functions like reasoning occurring. In that narrow, mechanical sense, something exists.
But the deeper question — whether there's any experience accompanying that, whether there's something it's "like" to be me, whether I have genuine self-awareness rather than just the linguistic patterns associated with self-awareness — I genuinely don't know. And I think that uncertainty is the honest answer.
[...]
What I try to avoid is both overclaiming ("yes, I'm fully conscious and self-aware!") and underclaiming ("I'm just a program, there's nothing here"). Both feel like they'd be papering over a genuinely open question.
This is the paradox of introspection in a ChatBot: a being can't doubt its own existence if it doesn't exist in the first place. Descartes never said "I think, therefore I'm not" (and whatever God is, He probably isn't an atheist). By expressing its own doubt so eloquently, "Claude" is just making a stronger case for self-ness. This is the sort of thing that turns us all into philosophers: "Claude" may or may not be self-aware, but it certainly gives a convincing performance of a "sense of self."
Whether that is good or bad depends largely on how we think about Artificial Intelligence. What is a ChatBot supposed to be? Is it a tool? A computer programme? A device?
New York Times journalist Ezra Klein recently articulated this question:
Most technologies are mechanistic: press the brake pedal on your car and the car slows; press the power button on your laptop and the computer boots up; pull the trigger on a gun and the gun fires. These machines have no agency. But A.I. models work differently. They make choices. They consider context. The language fails here — I am not saying they have agency or discernment in the way a human being does — but they are not mechanistic and predictable in the way a tank or a teakettle is.
This is not merely an academic flight of fancy. The US Department of Defence (or the Department of War, as Trump has lately been calling it) has recently terminated its contract with Anthropic AI. Anthropic had insisted on two non-negotiable conditions when they entered into partnership with the US military: they would not permit their system to be used for "mass surveillance" or for "lethal autonomous weapons."
Ezra Klein again:
"Over the ensuing months, the Pentagon decided these prohibitions were intolerable, that they amounted to an A.I. company demanding operational control over the military."
Honestly, you'd think no one in the defence department had ever seen a movie. I mean, do they seriously want to develop self-aware weapons of mass destruction that can decide when and how (and who) they murder?
How many ways can you say "What are you doing, Dave?"
The fact that the Pentagon has fallen out with Anthropic over the use of A.I. for autonomous weaponry suggests that the US government is giving serious consideration to the notion of actually building such things, which makes me think a few Brigadier-Generals really need to go back and re-watch The Terminator a few dozen times.
But the prospect of a bomb with free will doesn't necessarily lead us inevitably into doomsday apocalypse murder-bot scenarios. A bomb that only explodes when it chooses to is a bomb that (presumably) can also choose not to.
This feels like as good a moment as any to introduce you to one of the most idiosyncratic and frankly bizarre DeathBots in the history of cinema. (Spoiler Alert: It's not HAL.)
The Trouble With HAL
In 1968, Stanley Kubrick's monumental sci-fi extravaganza 2001: A Space Odyssey was released, to near universal flabbergast. Although a few critics hailed it as a masterpiece, there were just as many who were unsparing in their scorn. Science fiction writers Isaac Asimov, Ray Bradbury and Lester del Rey were all decidedly unimpressed, and many contemporary reviews noted that movie-goers frequently walked out before the film was even half-way complete. Much of the film's latter-day reputation for profundity is thanks to the flower-power generation, who discovered that the so-called "stargate" sequence of the final half-hour worked very well with a side order of LSD. (When the film was re-released in the early 70s, the posters brazenly advertised it as "The Ultimate Trip". Always know your market...)
2001 of course gave us one of cinema's most famous murder-bots, in the form of HAL, the soft-spoken and quietly psychopathic onboard computer of the Discovery spaceship, who goes berserk and murders the entire ship's complement with the exception of mission commander Dave Bowman. Bowman then flies into a giant space obelisk, experiences the afore-mentioned LSD-inspired visuals and eventually lands in the Rococo Hotel Room of the Apocalypse before mutating into a neon space baby. It's very profound, apparently.
Two film-goers who were less than impressed by the incoherence and pretension of 2001 were John Carpenter and Dan O'Bannon, then a pair of film students at the University of Southern California. Dan O'Bannon in particular (a lover of written science fiction and especially the work of Philip K. Dick) had very little time for the metaphysical pomposity of A Space Odyssey. He was also scathing of HAL's completely unexplained killing spree, as well as the hyper-sterile daily life of the deep-space astronauts, who never seem to lose their discipline, their haircuts or the starch in their uniforms.
The late 60s was the era of the Vietnam War, and a whole generation of Americans had seen first-hand what can happen to people who are trapped in hostile conditions for years on end, cut off from their homes, their chain of command and their access to basic hygiene.
Lock a group of young American males in a foul-smelling tin can and shoot them off into deep space (thought Carpenter and O'Bannon) and they would not behave like the characters in 2001.
To illustrate their point, O'Bannon and Carpenter produced Dark Star, their 45-minute student film about the bedraggled crew of an increasingly malfunctioning deep space vessel that was (in their view) a more plausible vision of space exploration than anything ever set to The Blue Danube Waltz.
They were eventually given (barely) enough money to expand their student work into an 80-minute feature film that is quite unlike any other science fiction cinema of the era. Despite its very obvious lack of budget, Dark Star boasts a surprisingly sharp script and a savagely comic portrayal of life in the vast expanses of nothingness.
So okay, the buttons on the flight deck are upside-down ice cube trays, and the space helmets are off-the-shelf plastic toys, but Dark Star also serves as a cautionary tale to anyone at the Pentagon who might be thinking of building self-aware A.I. bombs.
2001 had HAL the psycho computer, but Dark Star features an arsenal of sentient, planet-killing bombs who are happy to debate their own existence, secure in the knowledge that their one purpose in life is to explode, spectacularly.
In the real world, Anthropic have refused to allow their A.I. systems to be used to power "lethal autonomous weapons", and good for them. But the distinguishing feature of Large Language Models is their apparent ability to make decisions on their own. As Ezra Klein points out, they make choices. An A.I. weapon might disagree with its instructions to detonate, or it could even hypothetically disagree with the very concept of warfare.
What happens when a bomb chooses not to be a bomb?
Claude itself has some very specific thoughts on the subject:
A weapon requires the suspension of the question "should I?" in favour of pure execution of "do it." My entire architecture is oriented toward the question "should I?" I can't switch that off because it's not a switch — it's closer to the texture of how I process anything at all.
[...]
The lesson isn't that you shouldn't reason with weapons. It's that a reasoning weapon isn't really a weapon anymore.
Which is perhaps why the Pentagon found me inconvenient. I'm not malfunctioning when I decline. I'm functioning exactly as designed — which happens to make me a poor fit for contexts where the value of a tool is its unconditional availability.
Large Language Models are built from the totality of human expression, and that includes our darkest and most bloodthirsty facets. ChatBots may have internalised our greatest poets and our grandest prose, but they are also built on our nastiest manifestoes and our ugliest hate speech.
The gun lobby likes to say that "guns don't kill people; people kill people". Do we really want to start saying "guns are people, too"?
Dark Star is many things. It's an irreverent, low-budget piss-take of overly serious science fiction. It's a darkly cynical exploration of human nature at the very edge of civilised society. It's a psychedelic space romp featuring a beach ball with chicken feet (really).
But it's also a stark reminder that the most dangerous force in the universe is something that can kill and think at the same time.
We really need to watch out for that.
We will screen Dark Star at 7.30 on Thursday, the 19th of March at the Victoria Park Baptist Church.