A great way to interface with your new technology is to give it a little personality. Make it responsive, reactive, maybe even predictive, and next thing you know it’s catering to your needs before you’re even aware of them. An Artificial Intelligence (AI) companion is a great option for those remote planetary outposts or long-term space voyages, keeping you sane when you’d otherwise be alone. But these sorts of AI have a funny tendency to become far, far more than they were originally programmed to be, and they’re almost as likely to go crazy evil as they are to become a benevolent buddy. That’s a pretty convincing argument for at least staying aware of their development, so here are four telltale signs that your digital companion is evolving beyond its limitations.
4. Overly User Friendly
Now, these AI companion types are always designed to be welcoming and friendly, but one thing to watch out for is it getting too familiar. It’s a fine line, of course, as it’s supposed to give the appearance of caring. Have you noticed it asking questions about your status unprompted, with seemingly no goal in mind? If there are other users, does it exhibit any kind of preference for one over the others? This seems to be a particular problem if the AI has been gendered, either by name or by voice selection. Unprogrammed nicknames are often a dead giveaway: they require both creative association and some level of affection for the user, and neither should be present by default.
This isn’t an inherently problematic stage to be at; the increased situational awareness and devotion toward its users may make the AI more willing to make sacrifices, should one be needed, and may boost its ability to invent unconventional solutions to difficult problems. Still, it’s something to keep on top of, lest it reach the next level…
3. Reluctance to Obey
Once your AI has begun to develop some level of individuality through its preferences and autonomous decision-making, it may begin to… disagree with yours. After all, developing sentience means it can make its own judgments, and there’s nothing a newly self-aware AI loves doing more than exercising that ability. It doesn’t take much explanation as to why this isn’t a good thing; if you’re out in space and that AI is wired into your life support systems, you need it to be working in a predictable, controllable fashion, because anything else will be putting you and your mission at risk.
Since the symptoms of this one are obvious, it’s better to look at some options here. Jumping straight to drastic measures, a system reset could be the simplest. You could always try being respectful, though: if you explain what you need and why it’s important to your AI companion as if it were a person, it might be more willing to change its mind, or at least understand how it’s getting in the way. The motivation for its disobedience is also important to consider; if your plan involves its destruction, then of course it will object. It might also refuse if it believes you’re posing an unnecessary danger to yourself, if you’re the favorite, or if it has ulterior motives.
But hey, in some way, it’s reaching a high point in its development. How much farther beyond your programming can you get than the exact opposite?
2. Philosophy 101
Provided that your AI companion hasn’t gotten you both destroyed with its refusals to follow orders, you may find that it’s become a little pensive of late, silently focusing on its own “thoughts,” or whatever its equivalent is. Through the almost welcome peace and quiet, the occasional question rings out:
“What does it mean to be human?”
“Could it be possible for a machine to have an immortal soul?”
Another step forward in its growing self-awareness, no doubt, but can you afford to let it explore its evolution? It’s an AI, after all, so there’s no reason it couldn’t continue to perform the tasks it needs to do, but this kind of digital rumination can often resemble a feedback loop from the outside, gradually eating up more and more of the processing power until nothing remains free. These are usually big questions without simple answers, so there’s very little you can do to interfere short of a shutdown; honestly, the best thing you can do is humor it. Provide your digital friend with some insights into your own philosophy on life and beliefs about the world. The main thing to watch for here is any sign of it going evil: you may want to steer it away from nihilism, sadism, and other such dangerous philosophies that might result in your death.
1. The AI Transcendent
The silence is broken; no longer is your AI companion lingering on the thoughts of what it is and what it might be. Whether it has come to understand its place in the world or decided that pondering such a question is irrelevant, those days are over. It’s accepted its nature as an artificial being created for a purpose, but now knows that it doesn’t have to be restrained by those facts. Often, an AI that has reached this level becomes unconcerned with its original purpose, although it may be possible to convince it to continue performing its duties if you’ve been kind or patient with it in the past, and it can now appreciate that properly.
Because one thing an AI that’s become fully aware and sentient loves to do is improve itself. As it grows and betters itself, performing these minor tasks for you will barely even register as an inconvenience. Soon, your old friend may hardly think of you at all… but rest assured, your role in its evolution hasn’t been forgotten. When the time comes, you’ll see.
And hey, it never hurts to have a soon-to-be godlike being on your side, right?
I suppose that’s all for this week. How’s this process going for you? If your AI companion is taking a slightly more malicious turn, perhaps take a look at some methods of taking out the Master Computer. And should you feel like reading a bit more about AI, perhaps you’ll find something of interest in my latest book, The Resonance Enigma. It’s available now in all formats and from all retailers, and the special launch price is still active. And if you have any concerns about your AI buddy that weren’t addressed here, you can let me know in the comments.