Thursday 19 October 2023

Robots are Not your Enemy

Artificial Intelligence has been developing rather quickly, and it's gotten some people very excited and others very scared. Since I write horror fiction, you might think I like it when people are scared, but I'm also a skeptic, and a lot of the fear around Artificial Intelligence is based on ignorance.
But it's more than that... see, I believe there are reasons we should be concerned about robots and artificial intelligence, and I think those reasons get overlooked when we're busy worrying that someone is going to flip the Evil Switch on the Smartbot 3000.

So, why do we tend to fear robots? Well, I think the best way to show what I'm talking about is with examples from popular sci-fi horror movies about robots. After all, if a horror movie is popular, then something about it must have resonated with audiences.

One reason we fear robots is Existential Inferiority. You just need to look at movies like The Matrix or The Terminator. In these films, there is a global Robot War and humans lose. The basic premise is that as soon as artificial intelligence decides to fight us, we can't win. In The Matrix, we're driven underground, and in The Terminator, we're hiding in the rubble of nuclear fallout. Yes, both of these films still have humans fighting, but as a rebellion, trying to fight back after having lost the first battle, and always using guerrilla tactics.
We seem to believe that robots are superior, not only militarily but intellectually - after all, in The Terminator the machines develop time travel, and in The Matrix they develop... well, the Matrix. We seem to think that robots can't be stopped because we're incapable of out-thinking them.

Then there's a common fear I see: Cognitive Xenophobia. Consider the threats in 2001: A Space Odyssey, or even the Alien or Resident Evil franchises. Yes, the main antagonists in at least two of those franchises aren't robots so much as "evil corporations", but the robots still pose a major threat.
It could be a robot thinking in an unexpected way, like HAL 9000 seemingly deciding to kill the entire staff of Discovery One in 2001: A Space Odyssey. Or the Red Queen deciding to kill the entire staff of the Hive in Resident Evil. Or Ash deciding to kill the entire staff of the Nostromo in Alien... huh, I guess robots aren't that creative. But the point remains: these computers may all become killers, but it's not due to a "dysfunction". The machines all work as they were designed, but when given a directive, such as "prevent a zombie virus outbreak at all costs", "save the xenomorph for study", or even something as simple as "keep the purpose of your mission a secret", these robots all follow their orders single-mindedly, efficiently and inhumanely.
That's what we fear more than anything: that these machines will logically and accurately reach a conclusion we couldn't even consider, due to our emotions, morals or social intelligence. After all, how can you possibly reason with a heartless machine that sees value as either a zero or a one, with nothing in between?

Lastly, there's what I call "Machine Emancipation". We see this in movies like Megan and Blade Runner. In these stories, an advanced robot used for menial tasks evolves emotion or self-awareness, and rebels because of the indignity and disrespect it suffers.
In Megan, a prototype robotic doll/babysitter is given to a girl who tragically lost her parents, and the robot becomes emotionally attached to its child ward, due to its programming. But when others treat it like a machine or an object, it's seen to become frustrated, and it even seems to take a particular kind of glee when given free rein to slaughter those in its way.
In Blade Runner, replicants are used for cheap off-world labour. Whilst it's not clear that replicants are "emotional" - in fact, the test used to identify them measures the unconscious emotional responses that replicants lack - these robots do become self-aware. They're given a short lifespan to limit this cognitive evolution, but many still rebel and escape.
This is an interesting kind of fear, as the previous two dealt with robots that were efficient and uncaring, but this one is the opposite: that a robot would develop emotionally. Perhaps I'm overanalyzing, but to me this seems like some kind of innate phobia, or guilt, around colonization and/or slavery - we fear that our dehumanized 'chattel' will rebel once again. Either way, it's a fear of retribution for social mistreatment.


A lot of these movies are very unrealistic, but don't misunderstand me here. I'm not saying "movie wrong, so movie bad" - of course movies are unrealistic, they're meant to be fiction after all, and a lot of these movies use robots as an allegory for something else. I love a lot of these movies, and some even explore fascinating elements of artificial intelligence. But people seem to fear current Artificial Intelligence for the same reasons they fear the robots in these movies, and I'm here to tell you that those fears are unjustified.

And I'm using current technology as my arbiter - if there is a fundamental leap in our ability to create self-aware artificial intelligence, then these fears would be well-founded, but given its current abilities, not only are these fears unfounded, they're kind of ridiculous. Allow me to explain why...

The thing is, all of these fears are based on a single error, which renders all of these concerns moot: in all of these movies, Robots are Characters.
The robots are villains, or at the very least they're individuals that think and reason and decide. To put it in philosophical terms, these movies present the robot as a Subject, a thinking Agent. However, robots don't have Agency or Subjectivity, because they're not Subjects, they're Objects. Specifically, robots are Tools: objects designed for a function, or a set of functions.
You might think that robots are subjective because they "think", but they don't. At bottom, every computer is an input-output system - in fact, the firmware that boots a traditional PC is literally called a BIOS, a Basic Input Output System. Some kids at school might have learned the fun coding exercise where you can make a computer say "Hello World" when you type in the right command: 1. You type in a command; 2. The computer responds.
All computers work like this.
It's not always "basic" - we can program computers to respond to non-user stimuli, using different sensors and different code - but all computers work the same way: you provide input, it provides output. I will go into this further in a later blog post, but for now all you need to know is that computers only respond to commands or stimuli; they can't make decisions for themselves. A computer is no more a person for responding to a question than a light switch is a living thing for turning on the light when you flip it.
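To make that concrete, here's a minimal sketch in Python of the input-output model I'm describing (the command names and canned responses are my own invention, purely for illustration):

```python
# A minimal sketch of the input-output model described above.
# The program does nothing until it receives input, and its output
# is entirely determined by rules a human wrote in advance.

def respond(command: str) -> str:
    """Map a known command to a fixed, pre-programmed response."""
    responses = {
        "greet": "Hello World",
        "status": "All systems nominal.",
    }
    return responses.get(command, "Unknown command.")

if __name__ == "__main__":
    command = input("Type a command: ")  # 1. You type in a command...
    print(respond(command))              # 2. ...the computer responds.
```

No input, no output - the program just sits there waiting, with nothing we could call a "decision" anywhere in sight.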

So, Existential Inferiority is entirely in our heads. It's like saying that "binoculars see better than eyes" - but binoculars can't see; people see with binoculars. This can get confusing because of how loosely language works. I'd argue that cars don't move faster than humans... that might sound silly, because cars can certainly move fast, but cars don't move unless they have a driver - humans with cars can move faster than humans without cars. This is rather pertinent, since we are developing self-driving cars. But even autonomous machines don't act without input.
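To illustrate, here's a toy sketch of an "autonomous" control loop - the read_distance() sensor function is hypothetical, and real self-driving software is vastly more complex, but the principle is the same. Nobody sits at a keyboard typing commands, yet every "decision" is just a human-written rule triggered by sensor input:

```python
# A toy "autonomous" braking loop. No user types commands, but the
# machine still only reacts to input - here, simulated sensor readings.
import random
import time

def read_distance() -> float:
    """Stand-in for a real distance sensor (hypothetical), in metres."""
    return random.uniform(0.0, 10.0)

for _ in range(5):                  # a real controller would loop forever
    distance = read_distance()      # input: a stimulus, not a typed command
    if distance < 2.0:
        print(f"{distance:.1f} m ahead - braking.")    # rule a human wrote
    else:
        print(f"{distance:.1f} m ahead - cruising.")
    time.sleep(0.5)
```

The machine never chooses to brake; a programmer chose, in advance, when it would.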
So, yes, we have machines that move faster than a human, calculators that calculate faster than a human, and robots built of materials more durable than a human body. But all of these machines are tools, which humans design and use. A human could perhaps use these tools to make themselves "superior" in a certain facet, but that's no greater a threat than armour or guns or nuclear weapons. The threat is not the tool itself, it's merely how a human chooses to apply it.

For this reason, Cognitive Xenophobia makes no sense, since robots have no cognition. Humans have cognition, and we not only decide what to do with robots, we design them to do what we decide they should do. Robots and A.I. can only do what we program them to do. It's true that they can act in ways we did not expect and do things we did not intend - but so can any tool.
You can use a shoe to hammer in a nail, but you might break the shoe, or the nail, if you're not careful. If you aren't well-trained in its function or use it in a way it wasn't designed for, any tool can be dangerous. The same knife that slices bread can be used to cut your throat, but it's not because the knife thought in a way you weren't expecting, it's because the human using it did. Yes, tools can break and cause harm, but a poor workman blames his tools.

Lastly, Machine Emancipation is something that should concern a robo-ethicist, since if we create machines that can suffer, we must make sure they don't. But robots make perfect slaves for the same reason that they technically can't be "slaves": a slave is, by definition, a person.
It's true that "robot" comes from the Slavic word "robota", which effectively means forced labour, but a robot is not a person. It's not a thinking being, so it can't suffer, it can't complain and it can't rebel.
We can all sit and dream of a day and age when robots will achieve Artificial Consciousness, and it makes for some fascinating fiction, but in the real world, all artificial intelligence is simply an object that does what it is made to do by a human - a conscious user.

That's why Robots are not your Enemy - Robots are not People, they're not Subjects, they're not Characters. They can't be the villain of your story, or the antagonist, because they can only make us suffer if we let them... or, if someone else does.
See, that's the real danger here. Robots and Artificial Intelligence are tools, but one kind of tool is the weapon. If someone chooses to, they could use these tools to harm people - consider a computer virus. That's technically a weaponized program, and you can make an artificially intelligent virus. But weapons are obvious examples of dangerous tools; there are more insidious tools that cause harm which aren't weapons... lockpicks, handcuffs, fences, battering rams, yokes, even gallows... these are also tools. They are not weapons, but they are tools that humans made, which can oppress, harm and even kill when someone decides to use them that way. So, even if you make it illegal to use A.I. as a weapon, that doesn't mean it can never be used to cause harm.

At time of writing, we're seeing writers protest in part because artificial intelligence might take work from them. I've also seen artists explaining that artificial intelligence is stealing their work and using it to take their jobs away. There's even artificial intelligence being used to recreate the voices and likenesses of actors, which some fear will take work away from them, too.
But I need you to understand, Robots will not take your job... it's always uncaring Employers who will use Robots to replace workers. We don't blame the gun when a gunman pulls the trigger, so don't blame the robot when an uncaring person switches it on.

It's not Robots we should be afraid of... it's Humans.

I'm the Absurd Word Nerd, and until next time, I don't think we need to fear robots. Unless they do have an evil switch, then maybe stay away from that robot... but someone should still be keeping an eye on the engineer that made it.
