The First Hallucination
When I was doing my music A-Level (about 150 years ago, give or take) I had one particular professor who was, shall we say, flexible with her facts.
Sitting through her lectures became something of a "spot-the-mistake" game, which isn't really meant to be the purpose of a music A-Level. We were told, for example, that Mozart was born in 1754 (no) and that Bach's Brandenburg #4 featured trumpets (not so much). On and on it went; class after class, mistake after mistake. It was my own fault, I suppose; I should really have spotted the red flag the first day I met her, when she spoke to me about Benjamin Britten's Serenade for Tenor, Horn and Strings, then asked me if I could tell her more about the "tenor horn". I found myself explaining to my future lecturer that the "tenor" was a singer, and the "horn" was a horn. And the comma between them was the vital clue that she had apparently missed. (Also the minor detail that the whole piece is actually a song cycle.)
| How to recognise different types of musicians from quite a long way away. |
I remember sitting through her lectures and being acutely aware of two equally uncomfortable realities. First, I had no reason to trust anything she was telling us, because I was only catching the errors I happened to recognise. What about all the stuff I didn't already know? I was just an insufferable sixteen-year-old kid with an inflated ego at the time; I didn't know everything (yet).
Second, I used to look around the classroom at all the other students who were assiduously taking notes, dutifully writing down all the misinformation she was feeding us, and (presumably) internalising all of it as part of their education. Why would they question any of it? She was speaking to us with the gravitas of an established authority figure who was paid to know stuff. Of course they trusted her to be accurate; that woman was a professor.
In the (many, many) decades since my wacky adventures with factually-challenged tutors, society has witnessed the advent of a global, computerised, information-sharing network that has made it possible to access breathtaking levels of misinformation, only now much faster and with more confidence (and don't just take my word for that; you're welcome to look it up online).
One of the problems with inaccurate information is that it looks and sounds an awful lot like accurate information, especially when it's delivered in a sonorous, well-modulated voice. And now, thanks to the (mis)information highway, something false can be repeated and reinforced on a massive scale with impressive speed.
When was Mozart born? Are Shakespeare's plays really by Shakespeare? Who won the 2020 American presidential election? Sometimes "truth" depends very much on how you choose to perceive Reality.
If you put garbage in a computer nothing comes out but garbage. But this garbage, having passed through a very expensive machine, is somehow ennobled and none dare criticize it.
-Traditional programming axiom
If you have been following the development of modern Artificial Intelligence over the last few years, you will almost certainly have read about the persistent problem of "A.I. hallucinations".
From their inception, Large Language Models have had a bad habit of making stuff up, sometimes on a spectacular scale. It's not just that they get dates a bit wrong, or forget which instruments are featured in which Brandenburg Concerto; LLMs will enthusiastically tell you about specific books that were never written and specific people who never lived, and they will do so with the confidence and panache of a veteran snake oil salesman who can convince you of anything.
Language (as I have discussed elsewhere) is humanity's most powerful tool. It can be used to construct Reality (or at least the signifiers of Reality), but when consumed as a whole, our Language is a mess: full of nonsense, contradictions, anger, violence and downright fantasy.
| Can a Large Language Model believe in Dog? Or is it all just Dogma? |
Once it's encoded as language, everything looks the same: fact, fiction, reality, fantasy, truth, falsehood. Abraham Lincoln was a real person; Hamlet is a fictional character... but a Large Language Model experiences both as sets of Signifiers. Once they are converted into language, they appear exactly as real as each other.
| It's hard to distinguish fact from fiction when it's the same twenty-six letters of the alphabet. |
Language is Reality to a Large Language Model: it lives in a world composed entirely of words (actually it doesn't live at all, but that's another story). But words can be used to create their own Reality (something we humans have become very good at), so it should come as no surprise when LLMs struggle with what we laughingly like to call "truth".
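If you want to see what "everything becomes Signifiers" means in practice, here's a minimal toy sketch in Python. It is emphatically not any real model's tokenizer (real ones work on sub-word pieces, and the vocabulary here is invented on the spot), but the underlying indignity is the same: a fact and a fiction both reduce to lists of arbitrary integers, and no truth flag survives the encoding.

```python
# Toy illustration only: an invented word-to-ID encoding,
# not a real LLM tokenizer.

def build_vocab(sentences):
    """Assign an arbitrary integer ID to every distinct word."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

fact = "Abraham Lincoln was a real person"
fiction = "Hamlet was a fictional person"

vocab = build_vocab([fact, fiction])

def encode(sentence):
    return [vocab[word] for word in sentence.lower().split()]

print(encode(fact))     # [0, 1, 2, 3, 4, 5]
print(encode(fiction))  # [6, 2, 3, 7, 5]
# Two lists of integers. "real" and "fictional" are just token IDs
# like any other words; nothing marks one sentence as true.
```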
The Most Important Computer Apocalypse Movie You Will Ever See
When I screened Colossus: The Forbin Project back in October, I described it as "The second most important Computer-Apocalypse film you will ever see". I know you were all much too polite to ask the obvious question at the time, but you've doubtless been lying awake for the past three months wondering about it. I'm happy to say that I am now in a position to put you out of your misery.
WarGames gets my vote as the most important "A.I. (almost) destroys the world" movie, not because it's a well-made, engaging thriller with an original storyline and a compelling cast of characters (although it is all of those things), and not because it prefigures the problem of A.I. hallucinations four decades before anyone had ever heard the term "Large Language Model" (although it does that as well). It gets my vote because of what the film did to the real world.
When it was released in 1983, WarGames was seen by an old family friend of its co-writer, Lawrence Lasker: the former Hollywood movie actor Ronald Reagan, who by that time was enjoying some success in a slightly different career.
As the story goes, then-President Reagan screened WarGames at Camp David the day after it was released in cinemas. Later that week, he was chairing a meeting about upcoming Soviet nuclear-arms negotiations with his senior national security advisors and several members of Congress. At one point during the meeting (goes the story) he interrupted the proceedings to ask if anyone had seen the new movie. No one had, so the President then proceeded to describe the plot in detail to everyone present (spoilers... but he was the Commander-in-Chief, so get over it. Sitting presidents have done a lot worse than spoil a movie plot...). He explained that the film depicted a precocious teenager named David (a very young Matthew Broderick) who "hacks" into the NORAD computer system and nearly starts a nuclear war while trying to play a computer game.
Reagan was very interested to know just how plausible this movie premise actually was. Could someone really gain access to the national defence computer network by just picking up a phone and dialling a number?
The answer, it turned out, was yes, and thus WarGames set the direction for US cyber-defence strategy over the next several decades.
The idea of a computer as a consumer item available to individuals was something quite new in those days, and cyber-security as a concept was still in its infancy. There was no internet (as we now understand the term) but many "computer centers" could be reached over a phone line with minimal difficulty. You just needed to know the phone number.
The Reagan Administration (and many subsequent governments throughout the home computing era) was fixated on the plot elements in WarGames that related to hacking, cyber-crime and nuclear war, all of which felt alarmingly plausible at the time. The plot thread that was mostly dismissed as a science-fiction flight of fancy was the film's depiction of the "sentient" A.I. computer system, "Joshua".
Matthew Broderick and Ally Sheedy are able to hack into the defence department computer network using an open phone line, but the nuclear crisis they precipitate is caused by a self-aware A.I. system that doesn't understand the difference between "simulation" and "reality".
Like the Large Language Models of today, "Joshua" lives in a world composed entirely of data points. Everything it knows about the outside world is "encoded" into electrical impulses, and there is no functional difference between the electrical impulses of an attack simulation and the electrical impulses of an actual nuclear strike. "Death" is a purely abstract concept to an entity that only experiences the world as a series of Signifiers. Modern A.I. systems don't distinguish between the word "DOG" and an actual dog. Joshua can't distinguish between the word "WAR" and actual war.
When I introduced Colossus a few months ago, I compared its plot to a 2025 research paper (A.I. 2027) which predicted that Artificial Intelligence would exterminate humanity within the next five years. I argued at the time that A.I. researchers were internalising generations of dystopian fiction about the Inevitable Robot Apocalypse (tm). Computer systems were destined to wipe us out because that's what computer systems always do, in every story ever written on the subject. The developers of A.I. grew up on those stories, so that's how they visualise their field. Fiction creates its own Reality.