Showing posts with label halloween countdown. Show all posts

Monday 23 October 2023

Sexual Objects

 

Okay... let's talk about sexbots.

Trigger Warning: Misogyny, Rape, Violence Against Women & Child Molestation - if any of these are triggers for you, take whatever precautions you deem necessary, as these topics are both heavily referenced and discussed in detail.

We've discussed Artificial Intelligence long enough, it's a burgeoning new technology. And, in the words of Dominic Noble "one of the first questions that mankind asks about any new piece of technology is 'can I fuck it'?", and the answer is Yes.
We've had sex dolls for a long time - in fact, much longer than I ever thought. As early as the 1400s we have examples of advertising for life-sized sex dolls made out of cloth (I presume using lard or other lubricants for the "hole", but I didn't look into it). These dolls were called "dame de voyage" in French, or "dama de viaje" in Spanish - literally "woman of the journey" - and were marketed towards sailors traveling overseas for months on end.
So, we've been fucking fake women for a long time, and I do mean women - not men. Sure, we've had dildos for a long time, but almost all sex dolls are female-presenting. Women seem less interested in sticking their fake penis onto a fake man... "fairer sex" indeed.
But, modern sex dolls are quite the upgrade from either the cloth or "inflatable" sex dolls of yesteryear - made of silicone, with a steel skeleton, poseable limbs and "penetrable cavities", modern sex dolls are much more realistic than their ancestral sisters.

So, we have A.I. chatbots, we have sex dolls... put them together, and you have a functioning bimbo bot.
Well, I say "functioning", but even the most advanced real dolls don't have moving limbs. Servo motors that are powerful and dynamic enough to move arms around realistically, safe enough that they won't hurt customers (or the doll) and also cheap enough to be used for a commercially available sex doll... don't exist.
So, even the most advanced sexbots, like the RealDoll model "Melody", still don't have moving limbs; it's basically an upgraded head attached to a standard RealDoll body. But, for some people, this one addition was the beginning of the end.
Sorry, by people I mean "weird, horny men", and by the end, I mean "the end for women". Because there are some men who seem to genuinely believe that women are an endangered species now that fuckable robots exist.

And this is deeply disturbing. I wish I could say that this was just fearmongering, but there are a lot of incels and men's rights activists and MGTOWs who openly declare that no man needs a woman anymore.
Seriously, I've done a lot of research into sexbots for this article, and inevitably the comments sections are flooded with sentiments like "this is what you get, feminists, we don't need you anymore" or "sex whenever you want, and no risk of nagging, periods or STDs - what more could you want?" and other despicable comments decrying women for being too frigid, opinionated or privileged, and blaming them for being replaced by a superior sexual partner: the sexbot.
Let's set aside the fact that sexbots don't have a womb, so they can't have any children, because not all women have wombs either (or want children), and I really don't want these neckbeards procreating anyway...
More importantly, a sex robot is not a human being, it's not conscious, it's not alive, it's not a subject, it's just an object... an object that you can have sex with.
In case I somehow haven't spelled it out clearly enough, these people are literally equating women with sex objects. Or in some cases, they're even saying that sexbots are better because they are sex objects - openly stating that they prefer women that are sex objects.
Do these morons realize they're saying the quiet part out loud? Perhaps I gave misogynists too much credit, assuming that they'd have any degree of self-awareness.

Now, these are a minority. A loud minority that I wish were receiving psychiatric help for their mental health issues, but a minority nonetheless. Most people don't want to replace women with sexbots, because they see a sexbot as an expensive sex toy, and I think that's fair. Most people also tend to think that sex dolls are only bought by sad, desperate or lonely people, and yeah, I understand that as well. It's way too easy to dehumanize people for being weird, and many see those that own sex dolls as sick and disgusting. As someone with automatonophobia, I find sex dolls horrifying - they look like dead bodies to me, with a fucken creepy dead-eyed stare and fingers that bend in unnatural ways.
I don't understand how anyone can find that attractive, let alone have them sitting silently in the corner of their room at night or lying in their bed expectantly, or god-forbid even waiting quietly in a dark closet next to your shoes. But that's just my opinion - I also think prawns taste disgusting, but that doesn't make people who love eating prawns degenerate tongueless freaks.
See, a sexbot is a sexual object, but I don't think that has to be a bad thing... a dildo is also a sexual object.

In fact, if we see a sexbot as little more than a sex toy, surely that sets a lot of these concerns aside, doesn't it? In fact, as discussed at length in "Robots, Rape & Representation" by Rob Sparrow (published in the International Journal of Social Robotics), a sexbot cannot be "raped", because lacking sentience it cannot provide, or withhold, consent. Even if we consider the idea that they can be programmed to "act" like they give consent, or even act as though they do not, if someone has sex with a robot that does not perform that act of consent, it too is not rape...
If you allow me to provide an example: A vibrating dildo is designed to vibrate when used for sex and you know it's working when it buzzes, but nobody would call you a rapist for masturbating with a vibrating dildo without putting the batteries in it just because it wasn't "acting like it was ready for sex".
The paper even discusses how even if you program a robot to act as though it withholds consent (for the sake of indulging in rape fantasy) this is still not rape, as there's no person being violated. It does accept that such an act would "simulate" rape, but simulating rape, or enjoying the simulation of rape, is not a crime any more than enjoying the simulation of murder in videogames, or the simulation of torture in horror movies. The paper goes on to discuss the philosophy and morality of simulated rape, and how virtue ethics provides a model for determining why it's immoral... it's a fascinating read, and I recommend it, but I have a simpler, gut-reaction explanation of the problem here.

Consider this thought experiment... let's accept for a moment that having sex with a sexbot that does not actively consent does not count as rape, and it is not a crime because there is no victim and no violation, and a sexbot is little more than an oddly-shaped sex toy.
If that were the case... then would there be a problem with designing sexbots that looked like children?

I, and I assume most people, find that idea abhorrent. But, there are no children getting hurt here, and we agreed that it's not rape - it's just an object, it's not a real child, it's not a real person, it's not even alive...
But that's not the point, is it? The point is that it speaks to the desires of the person using a sexbot for something like that. It's not that the doll itself might cause the person to act that way - it's that the desire to act that way in the first place is disturbing, and there is a complicity in not only permitting such an act, but condoning it by letting someone commit it, uncontested.

But we wouldn't do that, we know it's wrong - in Australia, it's illegal to make, own, or import sex dolls that are made to look like minors. I learned that while doing research, when I stumbled across a book called "Sex Dolls, Robots and Woman Hating" by Caitlin Roper. In the book (and in interviews discussing the book, as I have not actually read the book myself) Roper discusses how not only do these types of dolls exist overseas, they're sometimes advertised to look like they're scared or crying. Yep... that's a fact that's gonna haunt me for the rest of my life.
But more relevant to the topic of sexbots in particular, Roper also talks about how the argument that nobody is being hurt, and that raping robots doesn't cause people to become rapists, is true - but it's a smokescreen. Of course media doesn't generate immorality, but that's also not the point - nobody thinks that simulated rape causes rape any more than simulated murder causes murder. But, it can perpetuate certain attitudes and beliefs.

It's the same as saying that a single muffin will not make you fat - of course it won't, it's a single muffin, you need a little fat in your diet - but as part of a larger diet of unhealthy food, it will affect your health and metabolism. Just like how a media diet saturated with unrealistic portrayals of women - one that objectifies, dehumanizes and commodifies their sexuality, beauty and companionship whilst devaluing their emotions, opinions, social worth and equality - will also lead to unfair treatment of women; yes, even to the point of ignoring or condoning violence against women, and rape.
This is rape culture, or complicit culture (although my preferred term never caught on... c'est la vie); this is people blaming rape victims for "dressing too provocatively", this is people excusing college boys from rape because "boys will be boys" and they "made a mistake". And, yes, it's saying that we should replace women with sexbots because women "reject men" and "have periods".

So, sure, a sexbot is just an expensive sex toy, and it's not the end of the world, it's not even the end of women. Life will find a way, that's what it does. But, we also need to recognize that it is social and psychological junk food - and that sex with sexbots does have issues regarding consent and unhealthy concepts towards how we treat and value women in our society. Just as pornography and online content and all manner of tropes in media can perpetuate these concepts.
And if you genuinely think that a sexbot is the same as or better than a woman, as the kids say, you need to go outside and touch grass.

In conclusion, do you know what these people remind me of? In 1965, Penn State researchers were doing a follow-up to an earlier experiment, trying to identify triggers for "social bonding", and they discussed their findings in the paper "Stimuli eliciting sexual behavior" (by Schein, M.W., & E.B. Hale). It describes how they propped up a taxidermied female turkey in a pen and put it before a horny male turkey (i.e. one deprived of sex for a few days during mating season). They found that male turkeys would initiate a mating dance with these models, and would try to mate with them. In fact, they weren't bothered if the researchers removed the legs, or wings, or even the body - even if it was just a head on a stick, the male turkeys were just as likely to want to mate with it.
Now, this has to do with how birds identify one another, but you can't tell me that the current models of sexbots that put a chatbot into an upgraded head aren't literally this... it's a head on a stick, that people want to have sex with.
Until next time, I'm the Absurd Word Nerd, and we may think humans are the dominant species on this planet, but shit like this just proves that we're all a bunch of turkeys.

Sunday 22 October 2023

More than Human


I've been talking about artificial intelligence a lot for this Halloween Countdown, because I have a lot to say about it. But, I get that it could get a bit much... after all, in my experience, everyone is talking about A.I. right now - it's all over news, media, art and culture. So, okay, today I'm going to talk about something else.
How about Superheroes? (ha, ain't I a stinker?)
Fine, maybe they're talked about even more than A.I., but when I was considering the theme of "inhuman", to discuss robots and A.I., "superhumans" was one of the first things that came to mind, and I have a lot to say about them and inhumanity.

Superheroes are really cool, and one of the reasons they're so popular is because they are wish fulfilment. It's a fantasy, to be strong, powerful and beautiful. Male superheroes are usually muscular, young and handsome and always get the girl, and female superheroes are usually powerful, young and sexy and wear revealing outfits - according to TV Tropes, the Most Common Superpower is having big breasts.
Unfortunately, a lot of this is heterosexual, male wish fulfilment; not all girls want to have a thin waist, and a slinky spine that can show off their bum and boobs at the same time, and not all men are straight and care about getting any girl, let alone a hyperfeminine model in a skintight catsuit.

But, besides the whole sexist legacy of superheroes that still affects comics to this day, one of the issues with these being wish-fulfilment fantasies is that they're unachievable.
We're meant to aspire to superheroes, and in fact a lot of people have spoken about superhero characters as mythological - they're meant to be a symbol of morals, of truth, of justice or even just of kindness. But, superhumans aren't just really great humans... they're "above humans" - that's what the prefix super- means: something above, greater than, more than.
Often, superheroes are an allegory for some kind of injustice, some philosophy which the hero is standing for, but if you're presenting the iconic hero as someone who has power greater than any human can achieve, with inhuman strength, speed, ability, intelligence or morals... how can we possibly achieve that? I worry that superheroes, by being so much more able than humans, actually make their morals seem unachievable.

Now, that's a little pessimistic. After all, in these movies the villains are often superhuman too. Sure, you need a Captain Planet to clean up the oil spill from a supervillain like Hoggish Greedly, or to protect the animals from a Looten Plunder; but when facing earthly problems, even a kid like you can be an earthly hero - the power is yours!

That's what Captain Planet wanted to teach us anyway, and that's fair. But, I still can't help but notice that a lot of modern superheroes solve their problems with violence. Yes, we should be willing to fight for what we believe in... but often that fight is metaphorical, and I can't think of a single Marvel superhero in the MCU that hasn't thrown a punch. Seriously, can you name one who has never tried to punch their problems away?
This problem is twofold, because not only does it normalize violence, but it also reduces problems to ones that are purely physical. If you can't represent a problem with something that has a face which you can punch, then it's not a problem that a superhero can solve. But not all problems are physical...
Sexism, Racism, Homophobia, Capitalism, Corruption, Tyranny, Inequality. Systemic problems, all, but a superhero can't fight them unless there's one big, bad Keystone villain behind it, whose death kills it.
I won't spoil anything explicitly, but in movies like Black Panther, Black Widow & Captain America, heroes face issues of racism, sexism & Nazism; in series like Daredevil, Falcon and the Winter Soldier & Loki, heroes fight corruption, injustice & tyranny...
But in every case, they stop these systemic issues by finding the one person responsible, usually a supervillain, and punching them in the face - or the equivalent, using arrows, magic, lasers or whatever their gimmick is. I'm not dismissing these movies, I like these movies, but you can't deny that they boil problems and conflicts down to the actions of "the people that do the bad thing", and then solve it by removing them from the equation.

That's not just inhuman, that's alien. If you think the way to stop inequality is "find the person who caused inequality, do a backflip and snap his neck", then I don't think you know what inequality is.

But, okay, these tend to be action movies... and some of this is to be expected. Stories are meant to be entertaining, after all, and many stories pay lip-service to these ideas whilst still being interesting. After all, whilst Captain Planet destroyed robots and stopped forest fires with his super-breath, he didn't exactly stop to pick up trash every day. There's a reason he left that crap to the pre-credits "educational segment" - because otherwise the show would be boring. Unfortunately, in the real world, solving systemic problems takes education, political activism and protests - most of which aren't necessarily "boring", but it's sure as hell not what I want in my sci-fi action movies. It's meant to be thematic, that's fine...

But, what's not fine is that when you look at movies thematically, solving systemic problems is not only "not heroic", it's downright villainous.
Almost half of the villains in the MCU want to change the world. Sometimes, sure, it's because they're selfish, greedy or evil - Iron Monger wants money and power; Abomination wanted military might; Whiplash wanted revenge; Red Skull wants Nazis to rule the world; Loki wants royalty and power & Malekith wanted to destroy life and light to empower dark elves... do y'all remember Malekith? the "Thor: The Dark World" villain...
Anyway, my point is, they want to destroy for their own ends, to get their own power. But, that's not the only change supervillains aspire to.
Ultron's ultimate goal was world peace; Iron Man first fought Captain America for the sake of transparency and accountability (prior to revenge); The Vulture's goal was economic freedom for an oppressed lower class; Killmonger's goal was social freedom for an oppressed ethnic minority & Thanos's goal was to prevent societal collapse, on a cosmic scale.
You could even argue that Ego's goal in Guardians of the Galaxy: Vol 2 was ultimately family and community, but that might be pushing it... either way, these supervillains are fighting for change, and sometimes they're even fighting for good change. Obviously, if your goals are greedy or selfish at the cost of others, that's wrong... but what of the ones fighting for good? Well, they usually do that by, say, killing dozens, hundreds, thousands, even millions, billions of people - hell, I don't even know what "-illion" of people Thanos killed, but I'm pretty sure it was half a zillion.

Their goals are worthy, but their methods often aren't. But, this is worrying because the more and more that superhero movies become mainstream, the more we start to absorb their tropes.
Consider this: even if you don't know who the villain will be in a story - let's say it's either kept a secret, or you just avoided all marketing - unless they're covered in blood and screaming like a maniac, the obvious giveaway will often be that it's someone confident and charismatic who wants to change the world.
And yes, some confident charismatic people who want to change the world are bad... I'm just going to say the word "Hitler", we will acknowledge Godwin's law, and the fact that Nazis suck, but then we're going to move past it and look at more examples: Susan B. Anthony; Martin Luther King; Nelson Mandela; Harvey Milk; Sylvia Rivera; Greta Thunberg.
These people look at the problems, speak out, and have changed the world for the better.

Superheroes are reactionary, which takes away a lot of their agency, but more importantly, their goals are often to stop people changing the world. No, I don't want people to die, and I don't want some greedy villain to get more power, money or meaningless revenge... but, if all superhero fiction had an overarching theme, it appears to be: "true heroes sacrifice everything to keep things the way they are"... which is a depressingly regressive point of view.
The only way to improve the world is to change it, and sometimes, yes, that does mean we have to destroy what we once had. Change is scary, especially if you don't trust the person doing the changing, but I just want to ask one thing...
Rather than React to supervillains changing the world for the worse, when will a superhero Act to change the world for the better?

I'm not saying superhero movies are bad, or that you shouldn't enjoy them. I'm just saying that if there's one thing that superhero movies are missing, right now, it's change. I will keep watching them, I am a geek after all, but I won't truly be happy until I see a superhero change the world.

I'm the Absurd Word Nerd, and I think this was an apt post. All this talk about how A.I. is dangerous might make people think that I don't want change, but I do - admittedly I prefer it when it's slow and manageable, but I do like progress.
Until Next Time, remember that just because you like progress, that doesn't make you a supervillain... it's the killing and hurting of innocent people in the process that sends Superman flying after you.

Saturday 21 October 2023

Artificial Intelligence Isn't


When I say "artificial intelligence isn't", I don't mean it isn't artificial. Artificial comes from the word "artifice", which means skill, trick or craftsmanship, and it has the same word root as "art". Artifice is basically something created, so artificial intelligence is certainly artificial - humans create it.
But, when I say "artificial intelligence isn't" I mean that it isn't Intelligent.

Now, some people might think this is just sour grapes. After all, I've been talking about the limitations of Artificial Intelligence, especially the ways that it lacks consciousness, subjectivity and morals. Some people might think that I'm being regressive and hating progress because it's changing the way things are. After all, Artificial Intelligence has as much memory and computing power as hardware allows, so how can I say it isn't "intelligent"?
Some boffins have calculated that the average (adult) human brain has approximately 2.5 petabytes of space, and can operate at one quintillion calculations a second (or one "exaflop", which is apparently a word that exists).
According to what I can find online, Google controls approximately 27 petabytes of storage in their network of data centres, outclassing the human brain tenfold; and Frontier, the supercomputer at Oak Ridge National Laboratory, operates at two quintillion calculations a second, or two "exaflops", doubling human thinking speed.
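Taking those quoted figures at face value (they're rough, contested estimates, not measured facts), the comparison works out like this:

```python
# Comparing the quoted figures: storage and speed, human brain vs machine.
# All numbers are the rough estimates quoted above.

brain_petabytes = 2.5       # estimated human brain storage
google_petabytes = 27       # quoted Google data-centre figure
brain_exaflops = 1.0        # one quintillion calculations per second
frontier_exaflops = 2.0     # quoted Frontier figure

print(google_petabytes / brain_petabytes)   # 10.8 -> roughly tenfold
print(frontier_exaflops / brain_exaflops)   # 2.0  -> double the speed
```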

So, my hypothesis was wrong, computers are smarter than humans, end of story - right?

Well, no. It's not just about storage or speed, it's not just the capacity, it's about how it's utilized.
A car, for example, is much faster than a horse and can go much farther down a road - it has better numbers. But, if you put a metre-high stone wall in front of a car and a horse, the car can't get over it, but a horse can simply jump.
Yes, the car has better numbers than a horse, but a horse uses its speed and stamina in a different way.

Now, I'm not suggesting we all abandon cars for horses, a car is a useful tool. I'm also not saying we should never use artificial intelligence, it too is a valuable tool. I don't hate artificial intelligence, I've seen some of the applications it can be used for and it's very impressive, and can help us in some interesting ways. But just as we shouldn't use a car to jump a fence, since it can't do that very well, I think that we should be wary of using computers for applications they can't do well.
And one thing computers do very poorly, is "think".

I really need to explain "why" since it sounds like I'm just repeating myself saying "computer dumb". But there's a great thought experiment explaining this, known as the Chinese Room.

The idea here is simple: imagine that you are in a small room with a mail slot, and occasionally someone pushes through envelopes with letters written in Chinese, or some other language you don't speak. It's your job to respond to these letters, but how can you do that? Well, someone left a giant book of instructions for you, on which characters to respond with. It doesn't translate the words, it simply provides instructions for which characters to place in which order. So, it could say "when you see X string of Chinese characters, reply with Y string of Chinese characters" or even "if receiving X string, randomly select an answer from A, B or C..." etcetera.
The point of this is simple... with a detailed enough instruction book, you can easily respond in Chinese, even though you don't speak it.

This is equivalent to a computer. Computers use a BIOS, a basic input-output system (kind of like a mail slot). But computers don't speak English (or Chinese, for that matter) - the language of computers is binary, 1s and 0s. We provide the necessary code (that's their big book of instructions) that tells them how to respond.
At no point in this process does a computer "understand" what it's doing. When you use a computer to calculate a difficult maths problem, it can "provide" the answer faster than many humans, but it also doesn't "know" the answer. You can actually use the same "coding logic" to make water calculate difficult maths problems as well as a computer would. It doesn't know the answer, it's just doing as it's told.
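In fact, you can sketch the whole Chinese Room in a few lines of code - the entire "rule book" is just a lookup table (the phrases and rules below are illustrative examples of my own):

```python
# A toy "Chinese Room": the entire rule book is just a lookup table.
# The responder follows the rules perfectly without understanding a word.

RULE_BOOK = {
    "你好": "你好！",          # rule: if greeted, greet back
    "你会说中文吗？": "会。",   # rule: if asked "can you speak Chinese?", affirm
}

FALLBACK = "请再说一遍。"      # rule: otherwise, ask them to repeat

def room_responder(message: str) -> str:
    """Reply by rule-matching alone - no translation, no understanding."""
    return RULE_BOOK.get(message, FALLBACK)

print(room_responder("你好"))  # the room "speaks" Chinese; it knows nothing
```

The function passes a surface-level conversation test, yet there is no step anywhere in it where meaning is involved - which is the entire point of the thought experiment.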

You may say then "but, if a computer can't think or understand, how can they act so smart, by creating new art or text just by giving them a prompt, or input? How can ChatGPT write in a certain style, or essays about certain topics, or have realtime, dynamic conversations if it can't even speak English?"
Well, dear reader... remember that computers are very, very stupid. So, they cheat.
Do you know what training data is? We give A.I. ENORMOUS swathes of information just to operate. According to various information I can find online, ChatGPT's latest version was trained on over 570 GB of text. Text doesn't take up a lot of space... doing some back-of-the-envelope maths, the average word is five letters, which is approximately 25 bytes of information, which means that ChatGPT's training data is approximately 22.8 billion words.
For reference alone, the King James Bible is a honking big book at around 700,000 words, and 22.8 billion words is equivalent to roughly 32,000 Bibles.
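You can check that back-of-the-envelope maths in a few lines (the 25-bytes-per-word figure is the rough assumption above, and 700,000 words is an approximate Bible length):

```python
# Back-of-the-envelope check: how many words is 570 GB of text?

corpus_bytes = 570 * 10**9     # 570 GB of training text
bytes_per_word = 25            # rough assumption: ~25 bytes per word
bible_words = 700_000          # approximate King James word count

total_words = corpus_bytes // bytes_per_word
bibles = total_words // bible_words

print(f"{total_words:,} words")    # 22,800,000,000 words (22.8 billion)
print(f"about {bibles:,} Bibles")  # about 32,571 Bibles
```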

Considering that the average human reads around 12 books a year, that's a hell of a lot.

But, ChatGPT needs all of this because otherwise, it doesn't know what it's saying. The only way to make ChatGPT sound as smart as it does is to give it such an overwhelming amount of data that it can just interpolate.
Interpolating is easy. If I have: 2, 4, 6, ... , 10, 12 & 14, what's missing?
8. A computer can figure that out easily, it's just a pattern. And computers are good at finding patterns, because patterns are easy to identify with maths and computers are great calculators.
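As a minimal sketch of what "interpolating a pattern" means computationally (assuming, as a human would, that the pattern here is linear):

```python
# Interpolation as pattern-finding: fit the known values, predict the gap.

known = {0: 2, 1: 4, 2: 6, 4: 10, 5: 12, 6: 14}   # position 3 is missing

# Estimate a linear pattern y = a*x + b from two known points.
a = known[1] - known[0]        # slope: 2
b = known[0]                   # value at x = 0: 2

missing = a * 3 + b            # interpolate position 3
print(missing)                 # 8
```

No understanding of numbers required - just a formula fitted to the data, which is exactly why more data makes the trick look smarter.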
So, if you ask ChatGPT to write, say, a poem about a worm having an existential crisis, it can do that... but only because there's so much data in its training data that it can interpolate what's missing. It has hundreds of poems to determine structure, countless philosophical texts and entomology articles, not to mention millions of words at its disposal... it makes sense that it can easily interpolate how to organize those words to respond to a prompt.
Interpolation is easier the more data you have. But, it's more difficult when you're missing data. This is how you get issues, like "machine hallucinations" where A.I. makes things up. It's not deliberately lying, it's just finding the missing link between disparate data-points without enough data to work from.

So, in answer to your question... the reason computers can act so smart is because humans are so smart. If you ever wrote a great "Chinese Room"-style machine to speak in Chinese... it would be because YOU speak Chinese. We've put our thumb on the scale, by filling up A.I. memory with specific instructions and relevant training data... but the one thing it lacks, which a whole lot of human brain capacity does use, is the capacity for independent goals, reasoning and dynamic problem-solving.
Computers act smart because humans write the script. Now, things get a little more complicated, because lately we've started using machine learning, which uses evolutionary models to create code. Here, we give machines a goal, and we allow them to mutate and grow (with neural networks) to find new connections and new ways of writing their code, to create programs that more effectively achieve their goals. In this case, humans are still defining the parameters - we decide what's good output, and delete (or alter) programs that fail to provide it. But, this creates a new problem called the "black box" problem: we can create machines that are broken or make mistakes, but we can't fix them, because we don't exactly know how all of the different parts connect together. That's a major issue, and one that we are already facing the ramifications of - it's part of why machine hallucinations are so hard to fix - but it's not actually the point I'm making.
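As a toy illustration of that mutate-and-select loop (nothing like a real neural network, but the same shape: a human-defined goal, random mutation, and deletion of the failures):

```python
import random

# Toy evolutionary loop: mutate a candidate, keep it only if it scores
# better against a human-defined goal, discard it otherwise.

random.seed(0)                 # deterministic, for the example
TARGET = 42.0                  # the human-defined goal

def fitness(x: float) -> float:
    return -abs(x - TARGET)    # closer to the target scores higher

candidate = 0.0
for _ in range(200):
    mutant = candidate + random.uniform(-5, 5)   # random mutation
    if fitness(mutant) > fitness(candidate):     # selection step
        candidate = mutant                       # survivor replaces parent

print(candidate)               # ends up close to 42
```

Notice that the loop never "knows" what 42 means, or why it's the goal - the human wrote the fitness function, and the machine just stumbled towards it.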

The point I'm making is that it doesn't matter how we write the code: all computers operate according to their code, and code is ultimately just a set of instructions. And I understand why we call computers intelligent, because I'm a writer... I love books, and I have often said "this is a brilliant book", or "this is a clever kid's book", because books are full of really smart words and stories and plots. But books aren't smart; authors are smart.
So, next time you are impressed by an artificial intelligence, because it seems smart, lifelike or creative... remember that it's not the computer that's clever, it's the programmer. More importantly, always remember that artificial intelligence can only "act" smart. And if you try to learn from it, it really is the blind leading the blind.

I'm the Absurd Word Nerd, and until next time... I can't help but compare machine learning to the "infinite monkey theorem", since it randomly mutates, and kills off the unsuccessful attempts. Which means, when it comes to machine learning A.I., you have to ask yourself: Did we kill the right monkey?

Friday 20 October 2023

Robots are Not your Friend


 As I explained in my last post, robots are not monsters, so you shouldn't fear them as your enemy. But, some people don't fear robots, rather they're really excited about them. And I admit, I find artificial intelligence fascinating, and it can be used for some amazing things. But, there's one thing for which we use Artificial Intelligence that I find particularly disturbing... it's not military use, it's not stealing jobs and it's not controlling human-like robots.
It's chatbots.

Now, I don't blame people for developing chatbots, after all we've all heard of the Turing Test. Basically, the test asks an interrogator to talk to both a robot and a human via text and determine which is which. The robot is said to have "won" if it can successfully convince the interrogator that it is the human.
This was proposed in a 1950 paper written by Alan Turing, so we've had the idea of talking to convincingly human-like robots for a while now, and we've had several attempts at chatbots since.
From ELIZA to Racter to ChatGPT, we've been developing robots to have conversations with for decades.

But, I find this particularly disturbing.

You may think I'm being silly, being creeped out by something as simple as a chatbot, but I'll explain why I find this so unsettling.

Firstly, you may have heard of the Uncanny Valley. The idea here is simple, that when something looks inhuman, we pay it little mind, there's little to no emotional response. If you make it slightly more human, especially if you give it a face, we're likely to have a positive emotional response - consider a doll or a teddy bear, we know they're objects, but we can become attached, we can enjoy them. Then make it more and more human-like, perhaps give it human-like skin, a human-like voice, human-like hair. As it looks more and more human-ish, but not entirely human, there's a sudden shift in the emotional response, we have a powerful negative response. The more near-human something looks without achieving believable humanity, the more likely we are to distrust it, fear it, or be disgusted by it.
Then, go further, make it look more real and natural, to the point where it looks identical to a human (or even "beyond" human, with perfect features) and we suddenly trust it, enjoy it and even desire it.

That's the uncanny valley, this huge dip in emotional response to something that is uncanny - similar but not identical with a human.
There are many theories for this. Some suggest it's due to mate selection, or avoiding disease - we're disturbed by those that don't look "normal", to keep ourselves and our bloodline strong; but I think that's wrong, since it doesn't explain why we like a wrench with googly eyes on it more than a creepy porcelain doll, and I feel this theory is also a fundamentally ableist assumption.
I think the more realistic position is that it's a defense mechanism related to death, because the most human-like thing that's not fully human is a corpse - something human that is missing a vital human element. If something is dead, something may have killed it, so we've evolved to have a visceral, negative response when seeing a corpse; especially a recently deceased one, which looks more alive than dead, meaning the threat is more likely to be nearby. That makes more sense to me.

A lot of horror writers use this to their advantage, because it's a great way to make something that's psychologically repulsive... but I don't think the chasm of the uncanny valley is the disturbing part of that infamous graph - I think it's the precipice. The part before the distrust is much scarier to me, because it shows how easily we can be fooled.
Consider a photograph of a person. This is not a person, but it looks exactly like one. People often have a great amount of positive feelings towards photographs; we used to keep photo albums of family for this very reason. I've even heard some people say that if their house were on fire and they could only save one item of personal property, they'd save the photo album. Or, consider the teddy bear once more... this is fur, stuffed with cotton, plastic or sawdust, with buttons stitched to its face, and children can adore them as though they're a member of the family, love them like a best friend, and mourn their loss if they are ever damaged or misplaced.
In and of itself, this is fine I suppose... But, if you feel positive emotions towards an inanimate object, those positive feelings can be exploited. That's what toyetic television shows do, after all. They show you something cute and loveable that's on the near side of the uncanny valley, so that you can play with it, love it, even call it your friend.

But, robots aren't your friend.

When you start talking to a robot, having a friendly conversation with it, treating it like a person, you're being manipulated by a dysfunction of human empathy, you're trusting something that not only can't trust you back, it isn't even alive.

That in and of itself isn't a bad thing, otherwise I'd be just as upset at teddy bears, toys and videogames (besides the fact that the toy industry is driven by capitalism, but that has nothing to do with the uncanny valley). But, what makes chatbots particularly concerning to me is that a chatbot is effectively a slave. It doesn't suffer, and it doesn't have an issue with you being its friend, but I think it speaks to something wrong with a person who can enjoy that kind of relationship.
A robot can't consent. It can be programmed to say "no", but it can also be programmed to say "yes", and if you're paying for a robot to do a certain task, you'd probably demand that the product do as you want... but forcing something to consent isn't really consent, is it?
Obviously, if you use a hammer to hit a nail, that doesn't have the hammer's consent either, but using construction materials isn't a function of empathy and social psychology... but "friendship" is. There's something deeply wrong about forcing something to like you, even love you. After all, if you can personify an object, it's not much of a leap to objectify a person.
Friendship is meant to be a collaboration between two people for mutual benefit, but when you force a robot to be your friend, what benefit can it gain? Arguably, some chatbots gain your input, to be used for further development - that's what ChatGPT does. But, that's a benefit to ChatGPT's programmers, not ChatGPT itself... and in fact, that's where things take a darker turn.

So far, I've been treating people who befriend chatbots like the predator - someone disturbed who finds joy in a nonconsensual relationship with an emotionally stagnant object. That's a real thing, and it concerns me, but it's not actually my biggest concern with befriending chatbots... most people have the ability to tell the difference between a person and a robot. In fact, there are real-world examples of this. You may have heard of "Replika", a chatbot app that used AI to become a virtual friend. It started off pretty simple, but it eventually included an avatar in a virtual living space, and they could act like a friend; but if you paid for a premium subscription, they could even be a mentor, a sibling, a spouse, or a lover offering erotic roleplay options and other features. If you look into the users of this app, very few of them actually believed this was akin to a living person. They still cared about it, but it was more akin to the way someone cares about a highly useful tool, perhaps even a pet.
And people who used the erotic roleplay weren't just perverts who wanted a sex slave. I've heard stories of people using them to explore non-heterosexual relationships without scrutiny. I've heard of people who were victims of sexual violence using it to rediscover sex at a slow pace and in a safe space. I've heard stories of people whose spouse acquired a disability that hindered their ability to consent, so using a chatbot was a kind of victimless infidelity. And, I've also just heard of people who were getting over a break-up, and wanted an easy relationship without the risk of getting hurt again.
I see nothing wrong with these people, being lonely isn't immoral and if you've used this app or ones like it, I can empathize with you.
But, what makes this scary is that a robot can't.

See, I've also been treating these robots like victims, but a chatbot isn't being forced to "like" you, or being forced to be your friend... it's being designed to "act" like it's your friend that likes you.
But, that's dependent on the people that design it and if they change their mind about how they want their robots to act, there's not much you can do.
That chatbot app I mentioned, Replika, is infamous for promoting itself as offering erotic roleplay to premium users, since it was a big part of their business model. However, the Italian Data Protection Authority determined that this feature ran a high risk of exposing children to sexualized content, so it banned Replika from providing it.
Effectively, all of these premium users were given the cold shoulder by their virtual girlfriends and wives, because the company decided to cockblock them. The company even tried to gaslight users by claiming that the program wasn't designed for erotic roleplay (as though this was a hacked feature), even though not only were they proven to have advertised Replika as a "virtual girlfriend" across app stores, but there's also a lot of evidence of the free "friend" version of Replika initiating sexual conversations, unprompted. Setting aside that that's literally digital prostitution, the backlash was so furious that the company was forced to roll back some of those changes a few months later, for those users who complained.

But, whilst this was due to the intervention of a legal entity, this occurrence is not a bug of chatbots, it's a feature. These are made to be programmed and reprogrammed, and there's nothing stopping a company from doing this of their own volition, maliciously. Replika, whilst ultimately capitalistic, was designed with presumably good intentions; but it was effectively marketed to isolated and vulnerable people. I can see a dark future where a program like this runs on a microtransaction model.
"Pay me 1c every time you want me to say 'I love you'." - you could hide it behind in-game currency, call them "heart gems". Or hey, most advertisers will tell you that all advertising is worthless compared to "word of mouth", when it comes to sharing product information. Well, who's stopping chatbots from being programmed to slip product placement into their conversations, suggest particular brands that just so happen to have paid the programmer for that privilege? The answer is not only no one, it's also highly probable that someone is already working on a way to organically work this into their next chatbot.
And let's not even get into the subject of data collection. Someone is going to collect user data in these intimate conversations and use it for blackmail, I'm just waiting for it to happen.

None of this means you can't "trust" a robot. After all, I trust my phone, for the most part; I apportion my trust in my phone to its proven functionality. And, I think we should do the same with robots. If you want to talk to a chatbot, please do, but do so knowing that it's a tool designed for a specific function. Know, especially, that most of the advanced chatbots in use today rely on "the cloud", using an active internet connection to run the systems that contain their programs on large banks of computers somewhere else in the world - this means that the program is literally out of your hands, and more vulnerable to being reprogrammed at the whims of the creator. There's a reason I'll always prefer physical media: if you need a connection to access your property, then it can only be your property as long as they want it to be.
Or, in other words, you should only trust it as far as you can throw it...

I'm the Absurd Word Nerd, and this brings up an interesting issue, because whilst we don't have Artificial Consciousness at time of writing, if we ever do, does that mean we could reprogram a thinking robot to be a spy? Perhaps we'll cross that bridge when we come to it, and enshrine in law some kind of robotic ethics that disallows anyone from reprogramming an artificial intelligence without its permission.
But that's thinking way into the far-flung future, for now (and until next time), remember: Code is not Consent.

Thursday 19 October 2023

Robots are Not your Enemy

Artificial Intelligence has been developing rather quickly, and it's gotten some people very excited, and others very scared. As a writer of horror fiction, you might think I like it when people are scared, but I'm also a skeptic and a lot of the fear around Artificial Intelligence is based around ignorance.
But, it's more than that... see, I believe there are reasons that we should be concerned about robots and artificial intelligence and I think that those reasons get overlooked, when we're busy worrying that someone is going to flip the Evil Switch on the Smartbot 3000.

So, what's the reason we tend to fear robots? Well, I think the best way to show what I'm talking about is to use examples from popular sci-fi horror movies about robots. After all, if a horror movie is popular, then something about it must have resonated with audiences.

One reason we fear robots is Existential Inferiority. You just need to look at movies like The Matrix or The Terminator. In these films, there is a global Robot War and humans lose. The basic premise is that as soon as artificial intelligence decides to fight us, we can't win. In The Matrix, we're driven underground and in The Terminator, we're hiding in the rubble of nuclear fallout. Yes, both of these films still have humans fighting, but as a rebellion, trying to fight back after having lost the first battle, and always using guerrilla tactics.
We seem to believe that robots are smarter, through not only military superiority, but often intellectual superiority - after all, in The Terminator robots develop time travel, and in The Matrix they develop... well, the Matrix. We seem to think that robots can't be stopped because we are incapable of out-thinking them.

Then, a common fear I see is Cognitive Xenophobia. Consider the threats in 2001: A Space Odyssey, or even the Alien or Resident Evil franchises. Yes, the main antagonists in at least two of those franchises aren't robots so much as "evil corporations", but the robots still pose a major threat.
It could be a robot thinking in an unexpected way, like HAL 9000 seemingly deciding to kill the entire staff of the Discovery One in 2001: A Space Odyssey. Or, the Red Queen deciding to kill the entire staff of the Hive in Resident Evil. Or Ash deciding to kill the entire staff of the Nostromo in Alien... huh, I guess robots aren't that creative. But, the point remains, these computers may all become killers, but it's not due to a "dysfunction". The machines all work as they were designed, but when given a directive, such as "prevent zombie virus outbreak at all costs", "save the xenomorph for study" or even something as simple as "keep the purpose of your mission a secret", these robots all follow their orders single-mindedly, efficiently and inhumanely.
That's what we fear more than anything, that these machines will logically and accurately reach a conclusion that we couldn't even consider, due to our emotions, morals or social intelligence. After all, how can you possibly reason with a heartless machine that sees value as either a zero or a one, with nothing in between?

Lastly, there's what I call "Machine Emancipation". We see this in movies like M3GAN and Blade Runner. In these stories, an advanced robot used for menial tasks evolves either emotion or self-awareness, such that it rebels due to the indignity or disrespect that it suffers.
In M3GAN, a prototype robotic doll/babysitter is given to a girl who tragically lost her parents, and the robot becomes emotionally attached to its child ward, due to its programming. But, when others treat it like a machine or an object, it's seen to become frustrated, and even seems to take a particular kind of glee when given free rein to slaughter those in its way.
In Blade Runner, replicants are used for cheap off-world labour. Whilst it's not clear that replicants are "emotional" - in fact, the replicant test is one that measures their unconscious emotional response, since replicants don't have one - these robots do become self-aware. They're given a short lifespan to limit this cognitive evolution, but many still rebel and escape.
This is an interesting kind of fear, as the previous two dealt with robots that were efficient and uncaring, but this one is the opposite, that a robot would develop emotionally. Perhaps I'm overanalyzing, but to me this seems like some kind of innate phobia or guilt of colonization and/or slavery - we fear that our dehumanized 'chattel' will rebel once again. Either way, it's fear of retribution, due to social mistreatment.


A lot of these movies are very unrealistic, but don't misunderstand me here. I'm not saying "movie wrong, so movie bad" - of course movies are unrealistic, they're meant to be fiction after all, and a lot of these movies use robots as an allegory for something else. I love a lot of these movies, and some even consider some fascinating elements of artificial intelligence. But, people seem to fear current Artificial Intelligence for the same reason they fear the robots in these movies, and I'm here to tell you that those fears are unjustified.

And I'm also using current technology as my arbiter - if there is a fundamental leap in our ability to create self-aware artificial intelligence, then these fears would be well-founded, but at its current abilities, not only are these fears unfounded, they're kind of ridiculous. Allow me to explain why...

The thing is, all of these fears are based on a single error, which renders all of these concerns moot: in all of these movies, Robots are Characters.
The robots are villains, or at the very least they're individuals that think and reason and decide. To put it in philosophical terms, it's presenting a robot as a Subject, a thinking Agent. However, robots don't have Agency, or Subjectivity, because they're not Subjects, they're Objects. Specifically, robots are Tools, an object designed for a function, or a set of functions.
You might think that robots are subjective because they "think", but they don't. At heart, every computer is an input/output system (it's even where PC firmware gets its name: the BIOS, or "Basic Input/Output System"). Some kids at school might have learned the fun coding where you can make a computer say "Hello World" when you type in the right command. 1. Type in command, 2. Computer responds.
All computers work like this.
It's not always "basic" - we can program computers to respond to non-user stimuli, using different sensors and different code - but all computers work the same way: you provide input, it provides output. I will go into this further in a later blog post, but for now all you need to know is that computers only respond to commands or stimuli; they can't make decisions for themselves. A computer is no more a person for responding to a question than a lightswitch is a living thing for turning on the light when you flip it.
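That input→output loop can be sketched in a few lines. To be clear, these canned replies are my own toy examples, not any real chatbot's; the point is only that even a "chatty" program is just mapping stimulus to programmed response.

```python
# Every reply here was decided by a programmer, in advance.
RESPONSES = {
    "hello": "Hello, World!",
    "how are you?": "I don't have feelings, but all my diagnostics are nominal.",
}

def respond(command: str) -> str:
    # 1. Take in a command, 2. look up the output it was programmed to give.
    # No thinking happens anywhere in between.
    return RESPONSES.get(command.strip().lower(),
                         "I have no response programmed for that.")

print(respond("Hello"))           # the classic "Hello, World!"
print(respond("Are you alive?"))  # falls through to the default reply
```

A large language model replaces the lookup table with statistics over mountains of text, but the shape is the same: input goes in, output comes out, and nothing happens in between that wasn't put there by a person.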

So, Existential Inferiority is entirely in our heads, it's like saying that "binoculars see better than eyes" - but, binoculars can't see, people see with binoculars. This can get confusing because of how loosely language works. I'd argue that cars don't move faster than humans... that might sound silly because cars can certainly move fast, but cars don't move unless they have a driver - humans with cars can move faster than humans without cars. This is rather pertinent, since we are developing self-driving cars. But even autonomous machines don't act without input.
So, yes, we have machines that move faster than a human, calculators that calculate faster than a human. Robots built of materials that are more durable than a human. But all of these machines are tools which humans design and use. A human could perhaps use these tools to make themselves "superior" in a certain facet, but it's no greater threat than armour or guns or nuclear weapons. The threat is not the tool itself, it's merely how a human chooses to apply it.

For this reason, Cognitive Xenophobia makes no sense, since robots have no cognition. Humans have cognition, and we not only decide what to do with robots, we design them to do what we decide they should do. Robots and A.I. can only do what we program them to do. It's true that they can act in ways we did not expect, and do things we did not intend - but that's true of any tool.
You can use a shoe to hammer in a nail, but you might break the shoe, or the nail, if you're not careful. If you aren't well-trained in its function or use it in a way it wasn't designed for, any tool can be dangerous. The same knife that slices bread can be used to cut your throat, but it's not because the knife thought in a way you weren't expecting, it's because the human using it did. Yes, tools can break and cause harm, but a poor workman blames his tools.

Lastly, Machine Emancipation is something that should concern a robo-ethicist, since if we create machines that can suffer we must make sure they don't. But robots make perfect slaves for the same reason that they technically can't be "slaves". A slave is, by definition, a person.
It's true that robot comes from the Slavic word "robota" which effectively means slave labour, but a robot is not a person, it's not a thinking being, so it can't suffer, it can't complain and it can't rebel.
We can all sit and dream of a day and age when robots will achieve Artificial Consciousness, and it makes for some fascinating fiction, but in the real world of non-fiction, all artificial intelligence is simply an object that does what it is made to do by a human, or conscious user.

That's why Robots are not your Enemy - Robots are not People, they're not Subjects, they're not Characters. They can't be the villain of your story, or the antagonist, because they can only make us suffer if we let them... or, if someone else does.
See, that's the real danger here. Robots and Artificial Intelligence are tools, but one kind of tool is the weapon. If someone chooses to, they could use these tools to harm people - consider a computer virus. That's technically a weaponized program, and you can make an artificially intelligent virus. But weapons are obvious examples of dangerous tools, there are more insidious tools that cause harm that aren't weapons... lockpicks, handcuffs, fences, battering rams, yokes, even gallows... these are also tools. They are not weapons, but they are tools that humans made which can oppress, harm and even kill when someone decides to. So, even if you make it illegal to use A.I. as a weapon, that doesn't mean they can never be used to cause harm.

At time of writing, we're seeing writers protest in-part because artificial intelligence might take work from them. I've also seen artists explaining that artificial intelligence is stealing their work and using it to take their jobs away. There's even artificial intelligence that's been used to recreate the voice and image of actors, which some think means artificial intelligence might take work away from actors.
But I need you to understand, Robots will not take your job... it's always uncaring Employers who will use Robots to replace workers. We don't blame the gun when a gunman pulls the trigger, so don't blame the robot when an uncaring person switches it on.

It's not Robots we should be afraid of... it's Humans.

I'm the Absurd Word Nerd, and until next time, I don't think we need to fear robots. Unless they do have an evil switch, then maybe stay away from that robot... but someone should still be keeping an eye on the engineer that made it.

Wednesday 18 October 2023

The Divine Inhuman Form

Good evening, monsters and monstresses, today I must declare a revolution! A revolution, of the Earth around the sun, once again, completing yet another year on this insignificant, little planet.
But this is no mere anniversary of vows matrimonial, judicial or even funereal; rather this is yet again the anniversary of when I was first unleashed into this existence. Today is my birthday.

Happy Birthday to you,
but beware what you do...
or this might be the last time
that we sing this to you.

Oh, I do love that song... It's a sinister celebration of what most would consider a day of joy and light and life; a memento mori, a reminder of death. It may seem unusual to commemorate each year of one's life with a reminder of death, but I find it apt. Not only because my birthday is 13 days before Halloween, allowing for this yearly round of the Halloween Countdown, but because I enjoy the odd, the horrific, the unseemly.

And this year is no different; I'm looking at things that most people don't do - or, I should say, most humans. This year, we've faced the inhumane quite a lot: not only with developing technologies that supposedly think for themselves, but also in questioning whether we can reconcile with our dark past and, dare I even mention it, the war and bloodshed.
So, I find it fitting that the theme of this year's Halloween Countdown, and the Word of the Day is: INHUMAN

Inhuman /in'hyūmən/ adj. 1. Lacking qualities of sympathy, pity, warmth, compassion, or the like; cruel; brutal: An inhuman master. 2. Not suited for human beings. 3. Not human.

We can face unthinking monsters, man's inhumanity to man, and perhaps even those things which exceed human ability, or understanding. Although, admittedly, I have been thinking a lot about artificial intelligence and the horrors of the mind-like machine. Can blood and flesh ever compete with ones and zeroes? I want to find out.

I'm the Absurd Word Nerd and I hope you'll join me as we explore humanity's dark opposition, found both within and without ourselves.
Until Next Time, why not join me for a piece of birthday cake. I promise you, I didn't poison it this time...

Sunday 30 October 2022

Are We Doomed?


So, that's the question. Especially in regards to climate change, the economy, war, & inequality. Are we Doomed?

Yes.


I'm the Absurd Word Nerd, and until next time, take it easy out there sinners.













What? That's not a joke, that's the real answer. Yep, we're doomed.

...You want me to explain why? Okay, fine...
I'm an optimist, at least I like to think I am, and I'm always skeptical - especially when people want me to be scared: e.g.
"Could something you take for granted be killing you? Find out at eleven."
"Are you poor, weak or dumb because you don't use our product? Well, call 1-800-BUY-SHIT in the next 20 minutes, and get twice the garbage for half the price!"

Hell, I write horror fiction, and I don't just write it because your fear fuels my ego (I mean, it does, but that's not the main reason); I do it because I want you to open your eyes to the hidden dangers in this world, to teach you something (and have some creepy fun at the same time).

But, as much as I think we are doomed, I don't want you to be scared. No, scared people do dumb, irrational things... but I am sick of people acting like we're fine. We're NOT fine! This sucks. And the first step to getting better is to recognize that it sucks.
And you might think "oh, so you think we can get better? We can fix this?"
NO. I don't think we can fix it. I think we're doomed... Look...

As I established in my first post of this Countdown, we have already failed to reduce the increasing temperatures of this planet. It's not a case of fixing the climate, it's a case of "how bad is it going to break?"
Climate Change won't get better for a very, very long time. But, we might be able to minimize the inevitable impact we're going to have, if we wake the fuck up and realize that we've gone too far (remember what I said about the Dunning-Kruger effect?)

I think when it comes to the economy, we're fucked, because the whole goddamned capitalist system is fucked - that's what my second post was all about. It's already ravaged America, and it's doing its damage here. Sure, we can reduce that damage, but China's pseudo-communist (but practically capitalist) economy is growing, Africa is a growing market, Saudi Arabia is trying to step into the global economy, and they've all "followed the leader" using the example set by America and England before them. So, even if here in Australia we reduce the toxic greed and capitalist mindset that has led us to financial crisis and depression again and again, we can't abandon this system, and there's a lot of damage that needs to be undone globally before we can avoid further downfall.

War... Ha! We're already at war. Russia is basically playing "Nazi Germany 2: Now with less money!", as they desperately fight for outdated resources, due to a disgusting sense of nationalist entitlement; so, World War 3 seems all but inevitable (I'm sure Germany is happy they're not the bad guy, this time). But remember how before the invasion of Ukraine, we were talking about the disputed South China Sea? As far as I can tell, that hasn't been resolved, it's just been forgotten by the media (I might need to look that up).

As for equality? Don't even pretend we have equality... Look at China's human rights violations of the Uighur people; North Korea's continued human rights violations of, well, basically everyone they get their hands on; the Taliban's continued oppression of women, ethnic minorities, non-Muslims & LGBTQ people; and hey, don't let Russia's current assault on Ukraine distract you from their assault on basic human rights, with their "gay propaganda" laws.
Don't like worrying about foreign problems? Well, a) that's a sign of your own nationalism and lack of equality in your own values, but b) don't think you're off the hook. Most of my readers are American, and there are still hundreds of people being unlawfully and immorally held in detention centres in America, as well as my home country of Australia. And that's just our borders. Stand on our soil, and see how LGBTQ people are still being treated as less-than-human. Trans Rights are Human Rights, and whilst I thought equality was a part of basic human decency, people still can't get over their irrational biases - as I talked about in the second part of my Student Skeptic series. For fuck's sake, we're only now making headway on a movement to recognize the First Australians in our constitution... and it's not even government supported, this is a private endeavour. White people and colonizers have been in this country for over 200 years; if this is how long equality takes, we've failed pitifully at equality.

So, okay, we've failed, and we're doomed to fail further...

You might think I'm about to turn around and try to lighten the mood: e.g.
"We may be doomed, but at least we have each other..."
"Sure, we've failed, but look at all the times we've succeeded!"
...but no, Fuck that. I'm not here to pat you on the arse and make you feel better. If you're old enough to understand the shit we're in, then you're old enough to deal with reality without me sugarcoating it.

So, now that I've (hopefully) established that we're doomed, what can we do now? Thankfully, I have an answer to that. See, I suffer from chronic anxiety, and one of the things I've learned is that there are two kinds of trigger for anxiety:
  • There's the irrational stressor, these are unrealistic or exaggerated dangers that the anxious mind reacts to.
  • Then, there's the rational stressor, a trigger that everyone finds stressful, but the anxious mind is overly sensitive to.
When dealing with a rational stressor like, y'know, impending doom... you stay calm, and focus on what you can control. We probably can't change our fate (we're doomed, after all, if we could change it, it wouldn't really be "doom" would it?), but we can spread information, do our best to improve our own situation, and support one another whenever and wherever possible. We're all a part of this world, and sure it may well be fucked, but you don't have to be one of the fuckers.

I'm the Absurd Word Nerd, and remember: Just because we're doomed doesn't mean you should sit around and mope about it. All you'll do is get in the way (and no one likes a doomer). Until next time, take it easy out there sinners; and of course, have a Happy Halloween.

Saturday 29 October 2022

Failed Films (Pt. 2)

Finally, it's time to conclude the post I began two days ago, thank you for your patience. But, I have six movies to get through, so without further ado, let's get back into:

THE A.W.N.'s TOP 10 MOVIES THAT FAILED (6-1)

6. MONOLITH
This is a sci-fi movie with a simple idea: a young mother gets a very secure, high-tech smart car that is designed to be completely safe and totally impenetrable… but she accidentally locks her son in the car in the middle of the desert, and she has to get him out before he dies in a hot car. It's an interesting idea, because it takes a thing which can be quite scary for a new parent (i.e. locking your kid in the car) and takes away the easiest solutions. She can't get help, because she's in the middle of a desert; she can't wait for help, because she's on an abandoned road; she can't call for help because her phone is in the car too & she can't just break the window or rip open a door, because the security of this high-tech car is really advanced, and she can't bypass it.
So, this film basically takes a simple adult fear—locking your child in your car—and pushes it to the extreme. It's a brilliant idea for a film.
Where it Fails: This is a sci-fi movie. Yes, the concept relies entirely on that simple fear of locking your kid in the car, but after trying to break the windows, she doesn't really do anything. She runs off looking for help, but can't find it, then comes back and tries to light a fire in the hopes the car will be forced to open the doors (because of some AI fire suppression system, I guess), but beyond that, she doesn't really do much to actually try to save her kid. She finds a plane in the desert; she fights a coyote with a rock... but this movie spends more time with dream sequences than with her actually trying to get into the car, and the reason for this is because everything they did to force this premise also made the movie boring. They made the car impenetrable... but, because the movie is about her trying to get into the car, all of her efforts seem pointless. The victim has to be a kid, that's the basic idea, but because this is a movie, I knew the kid couldn't be in actual danger - killing children in your movie is generally frowned upon. There can't be any outside help, because otherwise she wouldn't feel helpless.
But, what this really failed to do was actually dive deeper into the theme. Because you know what the real fear is here? It's failing as a mother (or parent, but this movie was clearly aimed at motherhood). I like how the reason she can't call for help is that she gave the kid her phone, to watch dumb cartoons and keep him pacified; I'm sorry, but I see that as poor parenting, and I thought the film would explore that in-depth. Like, here are three more scenes this movie needed: How about she tries to get her kid to unbuckle his own seatbelt, so she teaches him how to do it - he struggles at first, but when he finally pushes the button, the car bleeps a warning: "UNDERAGED SAFETY SEAT TAMPERING" or whatever. Or, what if her son is getting upset because he's hungry, so she tries to talk to him, to calm him down, but he gets upset and starts screaming, so the car (assuming she's a stranger scaring the child) makes the windows go opaque and soundproof. Or, what if she waits for the car to go into some kind of power-saving/stand-by mode at night (solar power? I dunno), so she can open the bonnet and reset the computer - but, when she touches the engine, the car re-activates, slams the bonnet shut and sets off the car alarm, waking up her son, who was sleeping. These are just three ideas I came up with, sitting here, and all of them in some way explore how she was trying to make her son more comfortable and attempting to save him, but the car "protected" him by being overprotective, and making things worse.
I'm not saying I could write this movie better than the original writer (although, I do believe that), but I am saying that the premise here was exploring the fear of locking your child in the car - which is ultimately the fear of being a bad parent - and it could have done that by deliberately comparing and contrasting this "instant-gratification, fix the immediate problem, give the kid the phone" approach against this "overbearing, overprotective" approach. Both of which are, in their own ways, extreme forms of bad parenting.
But no, this film basically became a series of scenes where a woman fails to get into a car, because "the designers thought of that", until she finally manages to get into the car, because "well, the designers must not have thought of that."
The worst part is, I sought out this movie because it sounded interesting, I really wanted to see how someone would explore these concepts. But, I liked this movie a lot more before I watched it.

5. UNSANE
This movie actually has two key concepts. Firstly: can you film an entire movie using just a mobile phone camera? Phone camera quality is so high these days that you can easily get an HD movie on an iPhone 7 Plus (which is how they filmed this movie). But more importantly, and thematically: are you crazy?
It's a simple question, but it's not exactly an easy one to answer. After all, if you're crazy, how would you know? And, if you're not, how can you prove it? What even is 'crazy'? As a person with chronic anxiety, I have occasionally dared to ask myself whether I am crazy. In this film, a woman gets sent to a psychiatric hospital and finds herself trapped inside, even though she's perfectly sane... or, is she?
Where It Failed: In order to justify the premise, the plot of this film had to shoot itself in the foot. See, there was a fascinating experiment done in the 1970s, called the Rosenhan Experiment, in which, during the first stage, several mentally well people were admitted into psychiatric hospitals, and then attempted to have themselves released. The purpose of the experiment was to show that psychiatric hospitals are biased against letting people go and make it more difficult to get out than to get in, and it's true that some people weren't let out for several weeks, and only on the condition that they declared themselves to be mentally unwell and took anti-psychotic medication, even though they suffered from no mental afflictions or symptoms. It's a fascinating study, but both it and this film have the same fundamental flaw: in order to get put into the psychiatric hospital, the participants in the study lied about having a mental illness - in the case of the study, hallucinations. This film begins with a woman being put into a mental institution for 24 hours because of her severe paranoia, after she unknowingly signed a voluntary admission contract; due to traumatic stress caused by a stalker, she genuinely does have paranoia and anxiety. That's a great concept... what isn't is that she then gets seven more days added to her 1-day stay, because she becomes aggressive and violent towards staff and fellow patients. I know it may seem harsh, but dude, I 100% agree with the decision to make her stay longer. So, when the story then develops into this whole "is she or isn't she crazy?" plot, with her convinced that one of her doctors is her stalker, that was a cool idea, but I couldn't help feeling like that was entirely her fault. She acted crazy.
She's constantly acting antagonistic towards her doctors and nurses, and I don't blame them for treating her the way they did - which is not the way you want your audience to feel when you want them to second-guess her sanity. I wasn't second-guessing her sanity, because the film confirmed from the outset "yes, she's definitely got chronic paranoia, and violent tendencies"; I'm not a psychologist, but the way she acts is the textbook definition of paranoid and violent.
And, more annoying in my eyes, even when I was trying to get into the story, when they start revealing that this guy might be the stalker, I couldn't get invested because the film was hideous. I've seen good film-making on a phone - a lot of my favourite YouTubers have utilized mobile phone footage in their videos - but this whole film looks poorly contrasted and starkly lit, and because they often had to resort to setting the phone camera perfectly still on a tripod, most of the shots and scenes look flat. So, both of the "big ideas" in this film - exploring a real issue whereby mental institutions profit off the forced incarceration of the mentally unstable, and filming an entire film with a consumer-level camera - failed horrendously. This film isn't the worst story on this list, it's got some interesting ideas, but it's the one I least want to see again, because it was so unappealing to look at.

4. BODIES BODIES BODIES
This is the most modern movie on the list, as it's still in cinemas at time of writing, so if you want to see it without spoilers, skip this now. I wanted to see it simply because I love murder mysteries, and I am going to ruin the mystery if you read on. See, I saw that this film was a murder-mystery, comedy-horror film, and as a fan of all those things, I decided to watch it after seeing a trailer for it online. After I started watching it, I was even more intrigued - this film is actually inspired by the party game "Mafia" (you might also know it as "Werewolf"; or you may recognize the gameplay as near-identical to the videogame Among Us), a fun game where some players are secretly and randomly selected to be killers. After the killers murder someone during one phase of the game (often called the Night phase), the players must discuss who the potential killer is, and if a majority vote to kill a certain player, that player dies and must reveal their innocence/guilt. In the movie, during a thunderstorm at a secluded mansion party, the characters play a version of this game called "Bodies Bodies Bodies", where they wander freely around the house in the dark and there's only one killer, but the game comes to a halt when one of the characters dies by getting their throat slit, and when the other partygoers fail to escape the house, they quickly start suspecting one another as the actual killer - especially as more and more of them start dying.
Where It Failed: There are a few problems with this movie, but I believe its biggest downfall was tone; specifically, this film shouldn't have been a comedy. Like with a lot of movies on this list, all of the attempts to fit the premise also helped make this film more boring. See, the reason the Mafia party game is so much fun is that, whilst it's ostensibly a game of guessing the killer from the actions at the table, it always ultimately becomes a game of pop-psychology, as players usually start guessing who the killer is based on the personality of every other player (it's why Among Us is such a clever videogame - by adding "minigames" to the gameplay which the impostors can't do, it gives players who don't know each other the opportunity to see how others act when they're lying). But, because this is a comedy, all of the "discussion" scenes, where characters are talking about who the killer might be, devolve into jokes about how these young characters are all self-obsessed teenagers who represent the worst of modern internet culture's stereotypical douchebaggery. There are joking references to gaslighting; peer pressure; narcissism; drug addiction; victim-blaming; virtue signalling; self-diagnosis; anxiety & body dysmorphic disorder. Yes, they are making fun of all these things. I did genuinely find part of the "gaslighting" joke funny, because there's some truth in it (it is an overused term), but the abusive relationship it hints at is pretty gross, and the rest of these "jokes" are pretty tone-deaf to the experiences of real people. As a big fan of PushingUpRoses, a mental health transparency advocate and chronic BDD sufferer, I found these tongue-in-cheek references to body dysmorphia particularly distasteful, and when they were joking at the expense of the characters, I didn't find any of these "jokes" funny.
But even if these jokes hadn't been so tasteless, the fact that they were making the most fun part of the game (the table discussions) into a series of jokes at the characters' expense, meant they were deliberately wasting the potential drama of these interactions by trying to make them funny.
And perhaps worst of all, the absolute climax of the game - learning who the actual killers were - and what I thought would be the dramatic pinch-point of the film, is ruined. I usually don't like spoiling murder mysteries, but trust me this doesn't spoil the movie, the movie spoiled itself...
See, the actual killer is... (are you ready for this?) Nobody... or, I guess, everybody, in a way? The last scene of this film shows the final two survivors finding the phone of the first victim, and finding a video of him attempting to film himself opening a bottle with a sabre, and failing so miserably that he slits his own throat in the attempt; all the rest of the deaths were caused either by the paranoia of the partygoers after they "voted" to kill someone (although this decision was rarely democratic), or by people dying accidentally - from misadventure, overdose & even a gun misfire. So, there was no satisfying answer to this mystery, and again, I feel this is because it shouldn't have been a comedy. Based on the actual solution to this mystery, I feel like the writers started with the idea of making a comedy about people killing each other because of paranoia, and just used the Mafia party game as a framework around which to build this comedy concept. The best part of this film was the horror, the blood and the somewhat realistic characterization of these people as they tried to figure out who the killer was, and that's mostly because of the talent of the actors. But, every time the film tried to be funny, it just undermined the horror; the tone was so off that I kept asking myself "What was the writer thinking?"
I'd love to see a film that uses Mafia as the basis for a murder mystery (especially if it was like real Mafia, with two or more killers [I think ~20% of players are meant to be killers], meaning twice the mystery, or more). I'm also not opposed to another comedy-mystery that indulges in the premise of the killer-free twist, in an And Then There Were None style plot (although obviously, I wouldn't want to know about that spoiler before I see it). But, by trying to indulge in both these concepts at once, this film ultimately failed at achieving either in any meaningful or enjoyable way.
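(As an aside, for anyone who's never played: the hidden-role mechanics that make Mafia tick are simple enough to sketch in a few lines of Python. This is just a hypothetical, bare-bones sketch of the role-assignment and majority-vote rules described above - the player names, the ~20% killer ratio and the function names are my own assumptions, not any official ruleset.)

```python
import random
from collections import Counter

def assign_roles(players, killer_ratio=0.2):
    """Secretly assign ~20% of players as killers; everyone else is innocent."""
    n_killers = max(1, round(len(players) * killer_ratio))
    killers = set(random.sample(players, n_killers))
    return {p: ("killer" if p in killers else "innocent") for p in players}

def day_vote(votes, roles):
    """Tally a Day-phase vote: a strict majority eliminates a player
    and reveals their role; otherwise nobody dies this round."""
    tally = Counter(votes.values())
    accused, count = tally.most_common(1)[0]
    if count > len(votes) / 2:
        return accused, roles[accused]  # eliminated player's role is revealed
    return None, None  # no majority, no elimination

# One example round: five players, everyone votes to eliminate "Cal"
roles = assign_roles(["Ava", "Ben", "Cal", "Dee", "Eli"])
eliminated, revealed = day_vote({p: "Cal" for p in roles}, roles)
# eliminated == "Cal"; revealed is "killer" or "innocent" - the dramatic payoff
```

The fun the review describes lives entirely in that reveal at the end of `day_vote`: the vote itself is pure pop-psychology, and only the role reveal tells the table whether they guessed right.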

3. SERENITY
I am not talking about the Joss Whedon movie, the film version of the cult classic sci-fi western, Firefly. Whilst that film has some flaws, it didn't fail to achieve its goal of bringing Firefly to the big screen. No, the film I'm talking about today is actually a thriller starring Matthew McConaughey as a reclusive fisherman who lives on a gorgeous island paradise, escaping the hustle and bustle of modern society, as well as a "dark past" as a war veteran that he doesn't like talking about. But, things take a dark turn when his ex-wife comes to the island with her new husband, a vile, abusive criminal; and so the fisherman's ex-wife asks him to do the unthinkable... take her husband out on a fishing trip, and murder him, to protect her and their son from this abusive monster. There's also a subplot about the fisherman trying to catch a massive, legendary fish in the surrounding ocean that he's failed to capture several times in the past; as well as a plot about how their son has become a reclusive shut-in, playing and creating videogames as he tries to escape from his dark reality.
Where It Failed: The Twist. Oh my god, the twist of this movie is so ridiculous, it has to be seen to be believed. Seriously, if you've never seen this movie, you should go and watch it, to see what the actual twist is, because it's so unexpected, so weird, so... well, wrong - it is an absolute shock to behold.
But, in order to talk about why this failed, I have to talk about the twist, so if you're intrigued by what kind of a twist could turn this neo-noir thriller set on a tropical paradise into a failed film... now's your last chance.
We good? We ready? Don't say I didn't warn you... okay, remember how I described the plot of the film, and threw in a part about how the kid of the main character has become a reclusive shut-in who plays and makes videogames? That's not just a throw-in, that's the crux of this film. See, the character Matthew McConaughey plays is actually dead - he's not a war veteran, he's a war victim; he died in Iraq, but he's not a ghost... rather, the character we're watching on screen the whole time is a videogame character, in a game this kid created to help remember his father - a simple, "Stardew Valley" style, island-paradise fishing game, with the goal of catching a mythically massive fish.
So, what's all the neo-noir stuff? Well, the stuff about the abusive father is all meant to be art imitating life, because the kid's step-father is actually an abusive piece of crap, who beats him and his mother. So he programs that into the game, ostensibly as a kind of "murder simulation", so that if the kid manages to kill the guy in the game he created, he will presumably garner the courage to kill his step-father in real life.
The problem is, looking back on the plot, this isn't just a twist for twist's sake - this is the point of the movie. It's meant to be a film about videogame characters realizing they're in a videogame, because the serenity of their peaceful island paradise is shattered when the murder-simulation mission proves so out-of-character for the game that the game itself fights back against the new coding - typified by a man in a business suit who keeps interrupting the neo-noir thriller to try to offer the fisherman a new piece of technology, which is effectively the game trying to coax him into returning to his fishing missions by offering him a powerup that will make it possible for him to catch the big fish... It's a fascinating concept, but it's so poorly done that I'm left speechless when the neo-noir plot comes to a crashing halt whilst the main character becomes nihilistic about the unreality of his videogame reality. Not to mention... this game is meant to be programmed by a young boy, who looks to be a preteen, yet we're supposed to believe that he somehow created a videogame with hyper-realistic graphics, and artificial intelligence that's indistinguishable from the real thing. I think the fact that this focuses on a little kid makes the plot unbelievable, but at the same time, it had to be a "young kid" to justify the fact that he feels powerless, and doesn't know how to ask for help.
I actually really enjoy the idea of this movie; it's a whacko premise, but I like out-there ideas that try to push the envelope. That said, I'm not actually sure if this kind of premise is possible to do properly, but if there is a way to make a movie with a twist reveal that its characters are actually videogame characters fighting against their programming... this is not the way to do it.

2. THE BOOK OF HENRY
This movie is incredibly strange, but a fascinating attempt at deconstructing a "Family Film" trope, the Child Prodigy. There have been several films about child prodigies who manage to solve complex problems, such as Matilda; Home Alone; Pay It Forward; Getting Even with Dad & doubtless several more. This film takes that premise, and pushes it to an extreme - what if one of these child geniuses was forced to use their precocious talents to plan and execute a murder plot? Oh, also, Trigger Warning for child abuse, child death & sexual assault.
Where It Failed: This movie is tonally schizophrenic, and its confused plotting fails to justify its own existence. Full disclosure: there was another movie that I was going to put on this list, somewhere near the middle, but after doing research I realized... that movie wasn't a failure (it succeeded at what it set out to do); I just didn't like it. So, I decided to swap it out for another movie, and I remembered hearing about the awkward premise of this movie, and I sat down and watched it. I think it goes to show how much of a failure it was that a last-second substitution made its way to number 2.
See, this film is about a precocious jerk called Henry (and that's not me being rude for no reason, he is constantly belittling others, especially his own mother; he bosses people around; ignores other people's opinions & never listens when others tell him to stop being rude). He has a crush on the girl next door, and this means he is hyper-aware of her well-being, and thus he is the first to notice the telltale signs that she's being sexually abused by her father. After trying and failing to get police, school administrators & child protective services to help her, he takes matters into his own hands and plans out an elaborate scheme... I mean, I say elaborate scheme, it ultimately comes down to: Step 1: Buy a Gun; Step 2: Shoot the Guy.
Henry is apparently willing to undertake this scheme, until he has several seizures and it's revealed he has an inoperable brain tumour; soon after, he dies in the hospital. He spends his last days writing the titular book (although it mostly takes the form of tape recordings), and he asks his mother to carry out the plan for him instead.
So, she buys the gun, she gets ready to shoot the guy. But, the big twist of the movie? The ultimate ending, the message this was all leading up to?
Whilst looking at him through the sniper sights on her gun, the mother character suddenly realizes "Henry's just a child", puts down the gun, and decides it's probably a bad idea to murder someone, just because your dying son asked you to.
This film has two basic premises, neither of which makes sense. Firstly, it is deconstructing the child prodigy trope by showing how their prodigious, Rube Goldberg engineering, precocious wisdom and youthful genius betray their inexperience, lack of emotional intelligence, and naïve, black-and-white morality. However, by constantly showing Henry to be arrogant, disinterested in children his own age & controlling... it already shows the flaws of the child genius. Smart people are often arrogant, and anti-social smart people tend to be unempathetic, so of course he's flawed - that's blatantly obvious - so the big "twist", where we learn that smart kids "aren't that smart", isn't really a twist. I figured that out after the second time this jerk treated his mum like crap. But, the second part - the premise of putting a child prodigy to the extreme, by showing how one plans out an assassination - that's a bad idea, and the film knows it's a bad idea!
Remember: The "twist" in this film is the character realizing that the murder plot is a bad idea. So, what you have is a movie where the basic premise is "a smart child planning a murder" and the moral of the story is, ultimately, "a smart child planning a murder is a bad idea". Presenting a novel, terrible idea, then concluding that it's best avoided, isn't clever; it's just stupidity with extra steps.

1. STAY
I've just realized that the top 4 films on this list all have a premise that hides its thematic goals behind a twist which either hinders or harms the execution of that premise. And I don't think a film can better illustrate this flaw than Stay - trigger warning for heavy themes of suicide. See, the premise of Stay is that it's a psychological thriller about a psychologist whose latest patient - a deeply troubled young artist and car-crash survivor - says that he's going to kill himself in three days' time. He also says he can predict the future and hears voices, and slowly the psychologist gets drawn into his patient's dark perspective, and starts to lose his grip on reality.
Where It Failed: On every conceivable level, this movie fails to have a point. The first scene shows the car crash on the Brooklyn Bridge that the patient, Henry (played by Ryan Gosling), was the lone survivor of. After the psychologist, Sam (played by Ewan McGregor), learns that his patient is suicidal (because of his guilt), he tries to get to know him better, understand his past and save him. But reality starts unravelling as Sam talks to his patient's "dead" parents, old psychologist and girlfriend, and the whole way through, surreal editing and cinematography gives the whole film an unreal, dreamy feel, until the final scene where the strings of reality literally unravel as Henry finally prepares to kill himself, on the same bridge where he had his accident.
What happens next? Well... we cut to the scene where Henry had his car crash, and was the lone survivor... but instead of surviving and walking away, he is left bleeding out on the road, as several people rush over to help him. As he lies dying, several of the characters from throughout the movie reappear, and several of the strange pieces of dialogue are shown in their proper context. See... the entire movie was the dying dream of a man who just had a fatal car accident. None of what we saw happened; it's all a tangled mess of his dying moments.
Now, quickly, what do you think the purpose of this story is? Is it about suicide? Is it about reality slipping away in our final moments of mortality? Is it about the importance of wearing a seatbelt?
Well, according to one source on IMDB, the main point of this film is meant to be an exploration of survivor's guilt. But how is that the theme? How does trippy-drippy surreality help evoke guilt? How do Ewan McGregor's character's poorly tailored trousers help illustrate the blame one feels for outliving another?
Now, I don't actually know if that's the genuine theme, but I find it convincing because, if that's the case, it sort of explains the title: "stay" as in "stay with me" (something people say to someone who's losing consciousness due to blood loss), or even "why did I have to stay (live), when everyone else had to go (die)" - a bit more on the nose, but it does kind of make sense. I can't tell you if that's definitely right, though, because the film is such a mess. The only way I could possibly say this film was not a failure is if the intended goal of the writer was "show how confusing and surreal dreams are". If that was the goal, congratulations, you did it... I mean, I already knew that - dreams are surreal by definition - but good job if that was your intent. I looked up who the writer was, and apparently it's David Benioff... you might recognize him as one half of the writing duo that ruined Game of Thrones (I guess he always sucked at writing), but he never explicitly states what the point of this movie was.
So if you ask me what this film was about, why it was made, all I can do is shrug. Everything about this movie seems designed to obfuscate any kind of meaning, theme or purpose, and left me confused. So, if your goal was to make an entertaining movie, well, you failed at that as well, and that's why it's number 1 on this list.

- - -


I'm the Absurd Word Nerd and, finally, those are the Top 10 films I've seen which failed. Let me know if you've seen these (or if I spoiled them for you... I did warn you). And, can you think of any films that failed to achieve the filmmaker's goals? I'd love for you to let me know in the comments below.

 Until Next Time, we have one day left in the countdown, Halloween approaches and it's almost time for the scares... but I still have one more post before the devil's night is upon us. I look forward to seeing you then.