Please note, this post is based entirely on my own understanding of “AI” and the modern controversies surrounding these tools. It’s one of the topics I was given when I solicited topics for my daily blog posts.
AI in general
I am not a fan of what has been popularly called AI, as perpetrated on us by companies developing tools like ChatGPT or Midjourney or DALL-E or any of their equivalents. For one thing, the term “AI” — or “generative AI” — is a misnomer. There is nothing intelligent about these tools. They are not self-aware. They do not create anything. They do not feel anything. They are good at generating text or images based on large — I mean, HUGE — amounts of input. But intelligent? No. These tools are properly called “large language models” (strictly speaking, that term covers the text generators, not the image tools), but even that term is not precisely correct. It’s correct enough for my purposes, though.
One term that I have seen bandied about for such tools is “stochastic parrot”1, a term which basically means what I have just said above: they take large amounts of data and churn out predictive text. Another term I’ve seen online (usually in Facebook memes) is “plagiarism machines”. That’s certainly an appropriate term, because these tools are trained on, basically, the entirety of the internet, which includes a great number of copyrighted texts, and while OpenAI, the company that created ChatGPT, may have restrained its own tools from regurgitating those copyrighted texts, other corporations or open-source creators may not. Indeed, the question of copyright for these tools has already prompted a number of lawsuits and legal troubles. How will these issues be resolved? I don’t know. I can tell you, though, that the Japanese government has taken the position that copyrighted texts published there are fair game for LLM training.
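To make the “stochastic parrot” idea concrete, here is a toy sketch of prediction-without-understanding: a tiny bigram model that generates text purely by sampling which word tends to follow which in its training data. To be clear, this is not how real LLMs work internally (they use enormous neural networks over tokens, not word-pair counts), and the training text and function names here are my own invention; it only illustrates the underlying point that statistical next-word prediction involves no comprehension at all.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": record, for every word in a training text,
# the list of words that have followed it. Generation is then just
# repeated sampling from those lists -- statistics, not understanding.

training_text = (
    "the cat sat on the mat and the cat slept on the mat "
    "because the mat was warm and the cat was tired"
)

words = training_text.split()
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(parrot("the"))
```

Every sentence this produces is locally plausible, because each word pair genuinely occurred in the training text — and that is exactly why the output can sound fluent while meaning nothing.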
Then there’s the issue of AI “hallucinations”, which is an entirely wrong term. AIs don’t hallucinate, any more than my cup of coffee does. They simply generate bad information. When I asked ChatGPT to tell me about myself, Richard S. Crawford the writer, it told me at first that I had written a number of stories and listed a couple of my publications; but then it also listed a number of awards I’d supposedly earned, and claimed that I lived in the Bay Area with my wife and children and a dog. All of this is, of course, false. I haven’t won any awards, I certainly do not live in the Bay Area, and while I do have a wife, we have no children, and there are definitely no dogs. This leads me to wonder what the point of ChatGPT even is, if I have to fact-check every statement it makes; and when people encounter “facts” that ChatGPT hallucinates into being, how many of them are going to bother fact-checking at all? If you thought the internet was bad at spreading dis/mis-information now, just wait until Google’s top search results are AI-generated articles with no human intervention or fact-checking.
There are other problematic aspects of LLMs and the companies that create them, from the environmental resources required to keep them up and running (and you thought cryptocurrencies were bad), to the hordes of Kenyan workers paid at sub-subsistence levels to keep the tools from becoming Nazi parrots. But I’ll let other people address those issues. And I’ll only mention in passing the way AI grifters are using ChatGPT to scam Amazon’s Kindle Unlimited program for money.
AI and Science Fiction
This is a topic I’m less familiar with, mostly because I don’t read a whole lot of science fiction. I know that there have been plenty of movies that feature AIs or self-aware computers — 2001: A Space Odyssey comes to mind — and most of the time, these machines are portrayed as dangerous and, well, not necessarily fans of the human race. HAL, in 2001, killed off most of the crew and attempted to do in Dave Bowman, who figured out how to shut it down by removing its memory cores. And who can forget Skynet, the AI in the Terminator franchise, which started a nuclear war between Russia and the US? And finally, let’s not leave out the intelligent machines in that vastly silly Matrix series of films2.
On the other hand, the Pixar film WALL-E features an intelligent, presumably self-aware robot that basically saves the human race from itself. Or something. To be honest, it’s been at least a decade since I saw that movie.
I’m even less familiar with AIs in novels, but I do remember that in Becky Chambers’s truly outstanding Wayfarers series of novels, there are plenty of AI characters who run the gamut of ethical sensibilities, from benevolent to less so. The second book of the series, A Closed and Common Orbit, one of the best science fiction novels published in recent years, features an AI on a quest to find out what its own personhood means.
In short, AIs in media are more likely to be true artificial intelligences, sapient, self-aware, capable of emotion, all of that. LLMs are not that. They may seem it, but they are not. HAL is sapient, as is Skynet… LLMs, not so much.
Will the science fiction view of AI ever come into being? This question has been hotly debated by philosophers and technologists for decades. I remember reading an essay by John Searle in which he suggested that a thermostat might count as intelligent, simply because it “knows” what to do when the temperature changes3; that was in 1991. A consensus has never been reached.
Personally, I don’t think so; at least, I don’t think we’ll ever see a human-equivalent AI. This is based on arguments I recall from college philosophy and psychology classes that human intelligence is not just a brain phenomenon, but a whole-body one; in other words, our sense of self-awareness is based not just in the brain, but takes input from all over our body: all our senses, all our organs, even the microbiome that populates our guts4. Unless we can build a human body from scratch and imbue THAT with an artificial intelligence, I don’t think we’ll see anything truly human-like.
I feel like I’ve drifted from the topic. What was it? Oh yes, AI and Science Fiction.
Artificial intelligences were a part of the worldbuilding in Dune, as I recall (though it’s been decades since I read that book). However, in that universe’s backstory there was a “Butlerian Jihad”, which destroyed the thinking machines and made them illegal after they tried, as I recall, to take over and kill the humans.
In summary, I believe a Butlerian jihad may be just what we need right now.
I’ve decided that I’m going to recommend books I’ve read as part of this series of blog posts, and for this one I’m going to recommend the Hugo Award-winning Wayfarers series by Becky Chambers. Start with The Long Way to a Small, Angry Planet and go from there. You won’t regret it, I promise. And as I mentioned, while all these books are excellent, in my opinion the second one, A Closed and Common Orbit, is the best.
- Vastly silly in many ways, and ethically dubious — though when seen as a metaphor for the transgender experience, less so.
- To be fair, this may have been Searle poking fun at the problematic definitions of AI, just as Schrödinger’s thought experiment poked fun at what he believed were silly notions of quantum mechanics.
- Yeah, there are implications for the concept of the soul and the afterlife, but I won’t get into those here, for obvious reasons.