/əb’serd werd nerd/ n. 1. The nom de guerre of Matthew A. J. Anderson. 2. A blog about life, learning & language.
Friday, 20 October 2023
Robots are Not your Friend
As I explained in my last post, robots are not monsters, so you shouldn't fear them as your enemy. But, some people don't fear robots at all; rather, they're really excited about them. And I admit, I find artificial intelligence fascinating, and it can be used for some amazing things. But, there's one use of artificial intelligence that I find particularly disturbing... it's not military use, it's not stealing jobs and it's not controlling human-like robots.
It's chatbots.
Now, I don't blame people for developing chatbots, after all we've all heard of the Turing Test. Basically, the test asks an interrogator to talk to both a robot and a human via text and determine which is which. The robot is said to have "won" if it can successfully convince the interrogator that it is the human.
This was proposed in a 1950 paper by Alan Turing, so the idea of talking to a convincingly human-like robot has been around for a while now, and there have been several attempts at chatbots since.
From ELIZA to Racter to ChatGPT, we've been developing robots to hold a conversation with for decades.
But, I find this particularly disturbing.
You may think I'm being silly, being creeped out by something as simple as a chatbot, but I'll explain why I find this so unsettling.
Firstly, you may have heard of the Uncanny Valley. The idea here is simple: when something looks inhuman, we pay it little mind; there's little to no emotional response. Make it slightly more human, especially by giving it a face, and we're likely to have a positive emotional response - consider a doll or a teddy bear: we know they're objects, but we can become attached to them, we can enjoy them. Then make it more and more human-like, perhaps give it human-like skin, a human-like voice, human-like hair. As it looks more and more human-ish, but not entirely human, there's a sudden shift in the emotional response: a powerful negative one. The more near-human something looks without achieving believable humanity, the more likely we are to distrust it, fear it, or be disgusted by it.
Then, go further: make it look more real and natural, to the point where it looks identical to a human (or even "beyond" human, with perfect features), and we suddenly trust it, enjoy it and even desire it.
That's the uncanny valley: the huge dip in emotional response to something that is uncanny - similar to, but not identical with, a human.
There are many theories for this. Some suggest it's due to mate selection or disease avoidance - we're disturbed by those who don't look "normal", to keep ourselves and our bloodline strong; but I think that's wrong, since it doesn't explain why we like a wrench with googly eyes on it more than a creepy porcelain doll, and I feel this theory also rests on a fundamentally ableist assumption.
I think the more realistic position is that it's a defense mechanism related to death, because the most human-like thing that isn't fully human is a corpse - something human that is missing a vital human element. If something is dead, something may have killed it, so we've evolved to have a visceral, negative response when we see a corpse; and if it's recently deceased, it will look more alive than dead, meaning the threat is more likely to still be nearby. That makes more sense to me.
A lot of horror writers use this to their advantage, because it's a great way to make something psychologically repulsive... but I don't think the chasm of the uncanny valley is the disturbing part of that infamous graph - I think it's the precipice. The part before the distrust is much scarier to me, because it shows how easily we can be fooled.
Consider a photograph of a person. This is not a person, but it looks exactly like one. People often have a great deal of positive feeling towards photographs; we used to keep photo albums of family for this very reason. I've even heard some people say that if their house were on fire and they could only save one item of personal property, they'd save the photo album. Or, consider the teddy bear once more... this is fur stuffed with cotton, plastic or sawdust, with buttons stitched to its face, and yet children can adore it as though it's a member of the family, love it like a best friend, and mourn its loss if it's ever damaged or misplaced.
In and of itself, this is fine, I suppose... But, if you feel positive emotions towards an inanimate object, those positive feelings can be exploited. That's what toyetic television shows do, after all: they show you something cute and loveable that sits on the near side of the uncanny valley, so that you'll play with it, love it, even call it your friend.
But, robots aren't your friend.
When you start talking to a robot, having a friendly conversation with it, treating it like a person, you're being manipulated by a dysfunction of human empathy; you're trusting something that not only can't trust you back, but isn't even alive.
That in and of itself isn't a bad thing, otherwise I'd be just as upset at teddy bears, toys and videogames (besides the fact that the toy industry is driven by capitalism, but that has nothing to do with the uncanny valley). But, what makes chatbots particularly concerning to me is that a chatbot is effectively a slave. It doesn't suffer, and it doesn't have an issue with you being its friend, but I think it speaks to something wrong with a person who can enjoy that kind of relationship.
A robot can't consent. It can be programmed to say "no", but it can also be programmed to say "yes", and if you're paying for a robot to do a certain task, you'd probably demand that the product do as you want... but forcing something to consent isn't really consent, is it?
Obviously, if you use a hammer to hit a nail, you don't have the hammer's consent either, but using construction materials isn't a function of empathy and social psychology... "friendship" is. There's something deeply wrong about forcing something to like you, even love you. After all, if you can personify an object, it's not much of a leap to objectify a person.
Friendship is meant to be a collaboration between two people for mutual benefit, but when you force a robot to be your friend, what benefit can it gain? Arguably, some chatbots gain your input, to be used for further development; that's what ChatGPT does. But, that's a benefit to ChatGPT's programmers, not to ChatGPT itself... and in fact, that's where things take a darker turn.
So far, I've been treating people who befriend chatbots like the predator - someone disturbed, who finds joy in a nonconsensual relationship with an object that is emotionally stagnant. That's a real thing, and it concerns me, but it's not actually my biggest concern with befriending chatbots... most people have the ability to tell the difference between a person and a robot. In fact, there are real-world examples of this. You may have heard of "Replika", a chatbot app that used AI to become a virtual friend. It started off pretty simple, but it eventually included an avatar in a virtual living space, and it could act like a friend; and if you paid for a premium subscription, it could even be a mentor, a sibling, a spouse, or a lover offering erotic roleplay options and other features. If you look into the users of this app, very few of them actually believed it was akin to a living person. They still cared about it, but more in the way someone cares about a highly useful tool, perhaps even a pet.
And people who used the erotic roleplay weren't just perverts who wanted a sex slave. I've heard stories of people using them to explore non-heterosexual relationships without scrutiny. I've heard of people who were victims of sexual violence using it to rediscover sex at a slow pace and in a safe space. I've heard stories of people whose spouse acquired a disability that hindered their ability to consent, so using a chatbot was a kind of victimless infidelity. And, I've also just heard of people who were getting over a break-up, and wanted an easy relationship without the risk of getting hurt again.
I see nothing wrong with these people; being lonely isn't immoral, and if you've used this app or ones like it, I can empathize with you.
But, what makes this scary is that a robot can't.
See, I've also been treating these robots like victims, but a chatbot isn't being forced to "like" you, or being forced to be your friend... it's being designed to "act" like a friend who likes you.
But, that's dependent on the people who design it, and if they change their minds about how they want their robots to act, there's not much you can do.
That chatbot app I mentioned, Replika, is infamous for promoting itself as offering erotic roleplay to premium users, since it was a big part of their business model. However, the Italian Data Protection Authority determined that this feature ran a high risk of exposing children to sexualized content, so it banned Replika from providing it.
Effectively, all of these premium users were given the cold shoulder by their virtual girlfriends and wives, because the company decided to cockblock them. The company even tried to gaslight users by claiming that the program wasn't designed for erotic roleplay (as though this was a hacked feature), even though they were not only proven to have advertised Replika as a "virtual girlfriend" across app stores, but there's also plenty of evidence of the free "friend" tier of Replika initiating sexual conversations with users, unprompted. Setting aside that that's literally digital prostitution, the backlash was so furious that the company was forced to roll back some of those changes a few months later, for those users who complained.
But, whilst this was due to the intervention of a legal entity, this occurrence is not a bug of chatbots, it's a feature. These programs are made to be programmed and reprogrammed, and there's nothing stopping a company from doing this of its own volition, maliciously. Replika, whilst ultimately capitalistic, was presumably designed with good intentions; but it was effectively marketed to isolated and vulnerable people. I can see a dark future where a program like this runs on a microtransaction model.
"Pay me 1c every time you want me to say 'I love you'." - you could hide it behind in-game currency, call them "heart gems". Or hey, most advertisers will tell you that all advertising is worthless compared to "word of mouth", when it comes to sharing product information. Well, who's stopping chatbots from being programmed to slip product placement into their conversations, suggest particular brands that just so happen to have paid the programmer for that privilege? The answer is not only no one, it's also highly probable that someone is already working on a way to organically work this into their next chatbot.
And let's not even get into the subject of data collection. Someone is going to collect user data from these intimate conversations and use it for blackmail; I'm just waiting for it to happen.
None of this means you can't "trust" a robot. After all, I trust my phone; for the most part, I apportion my trust in my phone according to its proven functionality. And, I think we should do the same with robots. If you want to talk to a chatbot, please do, but do so knowing that it's a tool designed for a specific function. Especially since most of the advanced chatbots in use today rely on "the cloud", using an active internet connection to reach the systems that run their programs on large banks of computers somewhere else in the world - this means that the program is literally out of your hands, and more vulnerable to being reprogrammed at the whims of its creator. There's a reason I'll always prefer physical media: if you need a connection to access your property, then it can only be your property as long as they want it to be.
Or, in other words, you should only trust it as far as you can throw it...
I'm the Absurd Word Nerd, and this brings up an interesting issue, because whilst we don't have Artificial Consciousness at the time of writing, if we ever do, does that mean we could reprogram a thinking robot to be a spy? Perhaps we'll cross that bridge when we come to it, and enshrine in law some kind of robotic ethics that disallows anyone from reprogramming an artificial intelligence without its permission.
But that's thinking way into the far-flung future, for now (and until next time), remember: Code is not Consent.