Time for another new experiment here at RetroPhaseShift. I’m going to take a whack at disassembling some of the overly common tropes in sci-fi (or dare I say it–cliches). For our first entry, we’re going to go with the crazy robot/AI trope, looking at where it came from, how it has affected the genre in the time since, and possibly even its effects on the real world. Let’s get into the teardown, shall we?
The idea behind the “crazy AI” trope is a pretty simple one to explain. Man creates an intelligent machine, and either through negligence on Man’s part or flawed logic on the machine’s, it arrives at a solution to a problem that should never have been considered a possibility. In TV and movies, that almost always means killing humans. Be it a Skynet-style attempt at genocide or simply killing its creator, wherever a crazy AI goes, death is sure to follow.
By far the most famous crazy AI in fiction is HAL 9000 from 2001: A Space Odyssey. Due to conflicting orders and goals placed in his programming, he ends up slowly murdering the crew of Discovery until Dave Bowman is forced to shut him down for good. HAL is so famous that he’s become the frequent subject of parody, in everything from sci-fi comedies like Futurama to more mundane sitcoms where his presence hardly even makes sense. But HAL also became the prototype for all the crazy AI and robots to follow. This idea of conflicting orders leading to unexpected results is often the explanation for why our AI buddy has mysteriously gone evil out of nowhere. In HAL’s case, he’s been told about the Monolith and instructed to keep that a secret from the crew, but he’s also supposed to care for the crew and report information accurately, concealing nothing from anyone. When the crew starts to realize something weird is going on, these two orders run headlong into each other, and HAL’s only solution is to kill the crew and try to carry on the mission alone. In shallower works than 2001, this kind of explanation is often missing, leaving a huge leap in logic from helping the crew to killing them. It’s such a common trope that one of the big “twists” in Moon is that it doesn’t happen. The trope is, of course, older than HAL; Captain Kirk is famous for taking down AI, but even the first play to use the word “robot” (R.U.R.–Rossum’s Universal Robots) has a too-clever robot taking leadership of the others and sending them off to kill humanity.
So where does this idea come from? Partially, it seems to be rooted in a religious idea: by making AI, man is playing god, but without that god-like omniscience, it will inevitably end badly. The crazy AI is, in that sense, divine vengeance for committing an affront to nature. It dates back even beyond the beginnings of science fiction, to things like Frankenstein’s Monster and the Jewish myths of the Golem. The fact of the matter is that man trying to create life has always been looked upon negatively. Needless to say, this is a very heavily anti-science idea, and in our modern world an extremely problematic one. Unlike aliens, AI is almost certainly a concept that will be made a reality in the near future. With people’s ideas of AI and robots colored by this trope, I can absolutely see a scenario where humanity’s fear of its creation is ultimately what leads to there being a problem with AI in the first place.
Another aspect of it stems from a kind of inherent mistrust of ourselves. We humans are the most advanced life on Earth, and, as far as we know, in the universe. And yet, even with our reasoning and ability to see the big picture, we insist on doing horrible things, even self-destructive things, on a regular basis. If we extend that to an intelligence that vastly eclipses ours… how much worse could it get? Perhaps the AI sees no point in putting up with us and our thoughtless actions. Maybe it takes a Darwinian approach and feels it must destroy us before we inevitably try to destroy it. In other words, the AI becomes seemingly evil and crazy specifically because it’s too much like us. This is essentially the explanation behind Lore from Star Trek TNG, for example, or the Asurans from Stargate Atlantis. Most of the time, AI written with this idea in mind will be characters who, if they were human, would be diagnosed with actual mental illnesses, making them truly “crazy” in a way.
Trying to figure out something new to do with this trope is difficult, mostly because the crazy AI is so played out. We’ve seen examples that range from well-meaning but misguided, to acting purely in self-defense, to tragically flawed from the start. I think if I were going to try to play with this trope, it might be more interesting to look at the aftermath of such a machine’s schemes. What does Skynet do after it wins? Anything? It’s never had a purpose aside from destroying humans. Harlan Ellison’s I Have No Mouth, and I Must Scream is sort of in this vein, but even that case is… well, let’s not get too into that one. Coming up with fictional AI that are adequately programmed to avoid the go-crazy solution has also been in vogue as of late, as seen with GERTY in Moon or TARS and CASE in Interstellar. Something else that might make for an interesting take is creating a situation where the AI’s developing human-esque capacities lead it to a greater level of compassion or understanding rather than hatred or vengeance. You occasionally see this with good AI, but I think as… well, not exactly a twist, but as an unusual resolution to an otherwise cliched plot point, it might have some value.
So, what did you think of this new column? Any thoughts on AI, evil or otherwise, that you think would contradict or support my points? Let me know in the comments, or on Twitter @RetroPhaseShift. Unlike some of the other recurring columns, this will likely run more on a “when I feel like it” basis than on any kind of schedule.