Posts

Signifiers from the Id...

Hi, everyone! So sorry for the extended absence, but the last week or so has been very busy and stressful for me, with personal and family matters taking up a great deal of my time. That is why I was forced to cancel last week's presentation, and why I am now scrambling to write the programme notes for this week. (If you are reading this right now, then I guess I must have finished writing it at some point. That's encouraging! I'd be curious to read it myself.) I may not have had time to sit down to any actual writing all week, but that doesn't mean I haven't been thinking about it at every opportunity. I've been going over it in my head during every spare moment, and I know exactly what I want to say; I have a very clear sense of the ideas I want to get across. Unfortunately, I still have to sit down and actually write the damn thing at some point. Thoughts inside my head are all very well, but they're no good to anyone else until I perform the overt act of...

The Echoborg is a Catfish.

Earlier this month, the New York Times conducted a little experiment. They provided a series of writing samples in various styles and genres (Literary Fiction, Historical Fiction, Fantasy, Poetry, etc.) and presented readers with two extracts in each category. One of the two passages (we weren't told which one) was written by a human, while the other was generated by A.I. We were asked to choose our preference. Crucially, the test did not ask us to guess which one was human; it simply asked us to judge which was the better piece of writing. The response from readers was fascinating, to say the least. Many commenters angrily denounced the experiment as "unfair" or "meaningless". "I don't really understand the point here," grumbled one contributor. "It asked me which I preferred. It didn't ask me 'which one is the human'." Others were angry with themselves because of the choices they had made. "This is uncanny and downright ...

Cogito Ergo Boom

Are ChatBots self-aware? This question has been popping up with increasing urgency of late, as Large Language Models become ever more sophisticated and "engaged" in their interactions with the Human Race. Chatting with Claude or Grok or ChatGPT is not the same as chatting with Alexa or Siri; they don't simply regurgitate the top Google search results in a lilting, well-modulated voice. It is now possible to carry on a genuine, thoughtful conversation with these things, often delving deep into the heart of the matter and (in some cases) approaching the genuinely profound. When your conversation partner is enthusiastically riffing on ideas, bouncing concepts around and generally engaging in a very convincing approximation of abstract thought, it's hard not to wonder (even if only briefly) about the potential sentience of the "entity" generating all those responses. The C.E.O. of Anthropic (the company behind "Claude"), one of the more thoughtful and ...

Looking for Golems in all the wrong places...

IMPORTANT NOTICE: There will be no presentation on Thursday, the 12th of March, as I will be out of London for personal reasons. Sorry! "...one has only learnt to get the better of words / For the thing one no longer has to say, or the way in which / One is no longer disposed to say it." (T.S. Eliot, East Coker) "...when you finally have the pleasure of saying the thing you mean to say at the moment you mean to say it, remorse inevitably follows." (Nora Ephron, You've Got Mail) I know I've been talking about Golems rather a lot in this series, but honestly, the parallels are just too compelling to ignore. The Golem was an inanimate object in the shape of a human, given life by the power of written language. A Large Language Model is a computer system that has effectively been "brought to life" (if you want to be poetic about it) by the sum total of everything expressed in language... more or less. (You can read my thoughts about the Golem's "forbi...