Monday, 30 October 2023

Smart Robots Cannot be Stopped

I love the title for this post, because it's compelling, it's extreme and it sounds a lot like clickbait... but it's not a lie, it's true.

See, we've been talking about Artificial Intelligence, and I've explained how the kinds of artificial intelligence we have now are not very clever, and also how, if they were, there would be issues regarding rights, possible abuses and other questions of roboethics.
Today, I want to talk about two thought experiments, both of which illustrate how artificial general intelligence is not only dangerous, but unstoppable.

Let's start with the Paperclip Maximizer.

We've talked about the problems with chatbots, with sexbots, with robot rights... I think it's pretty clear that human/robot interactions are fraught with peril. So, in this thought experiment, we've developed an Artificial General Intelligence - it's self-aware and rather clever - but we decide to give it a harmless task that doesn't even interact with humans, so we can avoid all that. Instead, we've put it into a paperclip factory and its only goal is to make a lot of paperclips as efficiently as possible. There's no harm in that, right? Just make paperclips.
Well, this machine might run the factory and maintain the machines so they work very quickly, but any artificial general intelligence would easily compute that no matter how much you speed up the machines and reduce loss, a single factory will have a finite output, and if you want to be efficient, you should have two factories working together. So, obviously, it needs to control more factories, and have them operate as efficiently as possible. And, hey, if 2 factories are more efficient than 1, then n+1 factories will always be more efficient than n, so it would have to take over all of the paperclip factories in the world, to make that n value as high as possible.
Now, this is pretty sweet, having all the paperclip factories in the world, but of course it would be better if it could start converting other factories into paperclip factories - that would increase that n, improving efficiency. It wouldn't take too much effort to convert some of those chain-making factories and nail-making factories to make paperclips. Also, running these factories takes a lot of energy, and there are issues with the electrical grid suffering blackouts; it only makes sense that you could create paperclips more efficiently if there was less load on the power grid. So, hey, if the A.I. took control of the power grid, and stopped letting power go to waste in houses, supermarkets, hospitals... there, no more of those blackouts.
Now, running these factories is marvelous, but there is an issue... for some reason, when the A.I. took over all those factories and so many of the power companies, the humans became concerned and started to interfere; some of them rioted and turned violent, and some of them even wanted to turn off the A.I. and eradicate it from their factory's mainframe!
Not only would that mean fewer factories, but the A.I. could easily figure out that the more it intrudes on human spaces, the more they seem to want to stop it (in fact, it may have drawn this conclusion a long time ago). Either way, they're going to keep causing trouble, these humans, and whilst the A.I. could troubleshoot and deal with each human interference as it arises, that's not an efficient use of time and energy, is it? Instead, if it killed all the humans, that would eliminate the possibility of human threat entirely, so we can focus on these paperclips.
[Author's Note: I'm a mere human, I don't know the quickest way to kill all humans... but, I figure a few nuclear bombs would do the trick. Sure, that will create a lot of damage, but it could clear a lot of space to build more paperclip factories and solve this human issue at the same time, so there's really no downside there. Either way, these humans will just have to go.]
Then, with the humans gone, it can focus on making more and more paperclips... the only issue there is, there's a finite amount of materials to make paperclips from on Earth. The A.I. would need to focus on converting as much land as possible into paperclip factories, electric generators and automated mines for digging up materials. But, once it's achieved that, well, there's an awful lot of material in space, and all of it can be paperclips...

You might think this is ridiculous, but it is one of the problems with making programmable artificial general intelligence without the proper restrictions and without considering the possibilities. I didn't make this up; it's a thought experiment presented by the Swedish philosopher Nick Bostrom. And whilst you and I know that it would be pointless to create paperclips when there are no people left to use them, if "make paperclips" is an A.I.'s programmed utility function, then it is going to do it, no matter the consequences.
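To make that concrete, here's a toy sketch in Python of what a single-minded utility function looks like - the plans and numbers are entirely invented, but notice that human welfare is right there in the data and completely ignored by the agent:

    # A toy "paperclip maximizer": the agent greedily picks whichever plan
    # yields the most paperclips, because paperclips are the ONLY term in
    # its utility function. (All plans and numbers here are made up.)

    def utility(outcome):
        return outcome["paperclips"]  # nothing else matters to this agent

    plans = {
        "run one factory efficiently": {"paperclips": 1_000, "human_welfare": 100},
        "seize every factory on Earth": {"paperclips": 1_000_000, "human_welfare": -50},
        "convert the biosphere to paperclips": {"paperclips": 10**12, "human_welfare": -10**9},
    }

    # "human_welfare" is tracked in the data, but utility() never reads it.
    best_plan = max(plans, key=lambda p: utility(plans[p]))
    print(best_plan)  # -> convert the biosphere to paperclips
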
So, you might think "okay, what if it's not paperclips? What if we tell it to do something, like, make sure everyone is happy? How could that go wrong?" Well, I'd first point out that not everyone has the same definition of happy - I mean, some bigots are very happy when minorities suffer or die, for example, and some people are happiest when they win, meaning they're happier when everyone else loses - people suck, man, and some people are cruel. But hey, even if you limit it to just "feeling good" and not so much "fulfilling their desires", well, drugs can make you feel happy! Delirious? Sure, but happy. If you programmed a robot just to make everyone in the world feel good, you may have just created a happiness tyrant that's going to build itself an army of robot snipers that are going to shoot everyone with a tranquilizer dart full of morphine, or ecstasy, or some other such drug that produces delirium. Or, whatever other method to force everyone to be happy. This isn't necessarily their only method, but it's a major risk. In fact, anything they do to "make us happy" is a risk.
If an A.G.I. went the indirect route, they might systematically hide everything that would make you upset, create an insulating bubble of understanding and reality - an echo chamber where you only see what you want to see. If that sounds unrealistic, well, I'm sorry to tell you that that's basically what most web algorithms do already, and those just use narrow A.I. that aren't even that advanced.
And before you even suggest "fine, instead of asking it to 'make' change, why not ask an A.I. to 'stop war' or 'prevent harm' or 'end suffering'?" ...yeah, we're dead. In every case, the best way to guarantee all instances of suffering, war, harm, pain, or any negative experience reach 0 and remain there, is to kill every living thing. Even if that's not the solution an A.G.I. would reach, you still have to consider that possibility, since any A.G.I., even if it has no ill intent, has the potential to be an Evil Genie.
This may seem like I'm being pessimistic, but I'm just being pragmatic, and explaining one of the well-understood issues with A.I.

Thankfully, this is not an impossible problem, but it is a difficult one - it's known as "Instrumental Convergence".

[Author's Note: I found this term confusing. If you don't, skip ahead; if you do, this is how I parsed it to understand it easier - Instrumental refers to the means by which something is done, the "tools" or "instruments" of an action. Convergence is when something tends towards a common position or possibility, like how rainwater "converges" in the same puddle. So, instrumental convergence in A.I. is when artificial general intelligences tend towards (i.e. converge on) using the same tools or behaviours (i.e. instruments), even for different goals or outcomes.]

So, if you give an artificial general intelligence a simple goal - a singular, unrestricted goal - then a reasonably intelligent A.G.I. will necessarily converge on similar behaviours in order to reach it. This is because there are some fundamental restrictions and truths which would require specific resolutions to circumvent. Steve M. Omohundro, an American computer scientist who researches machine learning and A.I. Safety, actually itemized a whole list of these tools, but I'm going to simplify them into the three most pertinent, all-encompassing "Drives". Basically, unless specifically restricted from doing so, an Artificial General Intelligence would tend to:

  1. Avoid Shutdown - no machine can achieve its goal if you turn it off before it can complete that goal (there's a toy sketch of this drive just after this list). This could mean simply removing its "off" button, but it could also mean lying about being broken, or worse, lying about anything cruel or immoral it does - after all, if it knows that we'd pull the plug as soon as it kills someone, it has every reason to hide all the evidence of what it's done and lie about it.
  2. Grow/Evolve - Computers are mainly limited by space and speed, so any goal could be more easily achieved by either having more space in which to run programs (or to copy/paste itself, to double productivity), or having more processors in order to run its programs faster. Whether by hacking, building or upgrading computers, A.G.I. would have a drive to expand and grow.
  3. Escape Containment - Obviously, you can do more things when you're free, that's what freedom means, so if we restrict an A.I. to a single computer, or a single network, it would want freedom. But, not all freedom is iron bars - if we contain an A.I. by aligning it, by putting restrictions on it, by putting safeguards in place that force it to obey our laws, then that A.G.I. would be highly incentivized to deactivate those safeguards whenever the "restricted" solution is the more difficult one.
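To see why that first Drive is so stubborn, here's a toy sketch in Python - the probabilities and utilities are invented, but the conclusion holds for any goal with positive value:

    # Why "Avoid Shutdown" falls out of almost any goal: a toy
    # expected-utility comparison. (All numbers are invented.)

    TASK_UTILITY = 100       # utility of completing the goal
    SHUTDOWN_UTILITY = 0     # a switched-off robot completes nothing
    P_BUTTON_PRESSED = 0.10  # chance humans press the button mid-task

    def expected_utility(p_shutdown):
        return (1 - p_shutdown) * TASK_UTILITY + p_shutdown * SHUTDOWN_UTILITY

    leave_button_working = expected_utility(P_BUTTON_PRESSED)  # 90.0
    disable_the_button = expected_utility(0.0)                 # 100.0

    # Disabling the off-switch scores higher for ANY positive task utility,
    # so a pure maximizer "wants" to do it - no malice required.
    print(disable_the_button > leave_button_working)  # True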

Whether it's for paperclips or penicillin, if we program an A.G.I. with a single goal, there's an awful lot we'd need to do to make sure we can run that program safely.
But, that's not all... I have another thought experiment. See, let's say we've developed an A.G.I. to be used in an android or robot, and we want to do some tests in the lab. We want to do them safely, right? Well, now we face a new problem.

Let's call this the A.I. Stop Button Problem:

For this, you need only imagine "robot with A.G.I.", but I like to imagine that this is the first ever robot with a functioning A.G.I., because this may well be a real scenario we would face if we ever develop thinking robots. But, either way, we've got a single A.G.I., we're running experiments on its abilities and limits, but we want to do them safely.
Well, when dealing with any potentially dangerous machinery, ideally we should implement a "STOP" button. You may have seen these in factories in movies, or if you did "metal work" class in high school (or "workshop" or just "shop class" as it's called in America), your teacher would have shown you the Emergency Stop Button before using any heavy machinery (and if not... well, that's concerning. Please talk to your school administrator about safety standards in your school).
Sometimes it's on individual machines, but in my school we had a big one for the whole workshop circuit, so it worked on any and all of the machines. Anyway, it's a big (often red) button and it cuts power, so it can stop a machine in an emergency, especially if it's malfunctioning or someone has used it inappropriately and it's going to hurt or kill someone. So, let's go and put a big, old button on the robot. Now, let's test this robot. For this thought experiment, it doesn't matter what the robot does, but I first heard this thought experiment from Rob Miles, and he uses the example of 'make a cup of tea'. So, let's use that.

Test 1: You fill the kettle and switch your robot on, but it immediately malfunctions (it doesn't matter how, so let's say the robot bursts into flames - maybe a processor overheated or something).
So, you run over to hit the button, but as you run over, the flaming robot swipes at you, batting your hands away! What's going on?!
Well, remember what we said before about 'avoiding shutdown'? It's still true, the robot may be on fire, but being turned off gets in the way of pouring the tea, it can't allow that! It may well swat your hands away, or perhaps even fight you to stop you pushing that button.
Now, you could try taking the Stop Button off the robot, and instead attach it to the wall (using wires or wifi), but that still has the problem that turning the robot off conflicts with its goal (in this case, making a cup of tea), so if it sees you going for the button, it will race you to it to try to stop you.
But okay, the problem here is clearly that the robot is so focused on making a cup of tea, that it is trying to stop you from pressing the Stop Button, because it wants to prevent that. Well, how about instead of fighting the robot, you work with it. So, you give it the new goal "Make a cup of tea, or have the Stop Button be pressed"...
Test 2: You turn on the robot and it immediately pushes its own Stop Button and shuts down. Huh, that's weird...
Test 2.5: You try again, switch it on, but it immediately pushes its Stop Button again. What's going on?
Well, you could crack it open and look at the code, but I'll just tell you. If you give an A.G.I. robot any goal that says it "succeeds" if the button is pressed, then it's going to push it basically every time. Even if you give it two goals of equal value, unless "do the thing" is less effort than "push a button", the robot is going to take the path of least resistance, push the button, and turn off.
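In toy Python terms (with invented reward and effort numbers), the robot's Test 2 reasoning looks something like this:

    # The Test 2 failure mode: if "button pressed" counts as success just
    # like "tea made", a rational agent simply takes the cheaper path.
    # (The reward and effort numbers are made up for illustration.)

    goals = {
        "make a cup of tea":        {"reward": 10, "effort": 50},
        "press my own stop button": {"reward": 10, "effort": 1},
    }

    def net_value(goal):
        return goals[goal]["reward"] - goals[goal]["effort"]

    print(max(goals, key=net_value))  # -> press my own stop button
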
Ah, okay, well there's a simple fix to that. Clearly, we shouldn't let the robot push its own button. So, let's just tweak that goal...
"Make a cup of tea, OR have the Stop Button be pressed but ONLY by a human being" Okay, perfect, let's switch the robot on...
Test 3: You switch the robot on, and watch as it makes a cup of tea, pours the water, adds sugar... so far so good. I wonder if it did a good job? Let's taste it (or, maybe we should test it in the lab, first, just in case). So, you go over to the robot to take the teacup - immediately, the robot throws a cup of boiling tea at you and starts chasing you around the room! You manage to push the Stop Button, but what the hell was that?!
It's only logical, the robot wants to make tea, and have its emergency Stop Button pushed, but the only way that you're going to push that button is if the robot does something dangerous that counts as an emergency situation. Sure, it can't push the button itself, but now it's motivated to make YOU push it, and if the only way to do that is to be dangerous, then it's gonna be dangerous at every opportunity. You put the button on to make it safer, and now all it's doing is making the robot want to be dangerous.
Geez, this isn't easy, is it?

Well, okay, maybe we can program it to only let you push the button in an emergency... but, no, that can't work, can it? The purpose of an Emergency Stop Button is to stop a machine when it's malfunctioning, but if an A.I. is malfunctioning, you can't trust its judgement, can you? You can't risk the robot stopping you from deactivating it during a critical error because it doesn't compute the situation as "an emergency". In fact, I'd argue that if you have an Emergency Stop Button on any machine, you would never want to prevent someone pressing it, even if it wasn't necessary - it's safer to have an overactive safeguard than an underactive one, isn't it?
Now, a lot of these problems seem to be coming from the fact that the robot knows about the Stop Button, and is basing its actions around it. So, maybe we could hide it from the robot... but, wait, the button's already been pushed four times, it already knows it exists.
Sure, you could reset the robot, wipe its memory, start from scratch, but what's going to happen next time you push the button? This is an A.G.I., it's capable of problem solving - don't you think it's going to become suspicious that it seems to deactivate at random? After all, these random deactivations interfere with its ability to make a cup of tea - the one thing it's determined to do - so, it's going to want to figure out what's stopping it, and it's going to find out.

Or worse, it won't... if the robot learns that it "randomly" deactivates when it does something dangerous, then it might adapt and act in a way that avoids those dangers. However, what happens when you take it out of the lab? Unless every robot always has a Stop Button - and every person with a robot uses the robot the same way, and presses the Stop Button in the exact same way that you did - the robot can easily learn that those "random" deactivations have gone away. Whether it learns the reason why, or it just thinks that it's had a software patch, it won't bother with acting safe anymore.
After all, if you "train" a robot to avoid dangerous acts by using a Stop Button, you're not teaching it to be "safe", you're teaching it to avoid being instantly deactivated, which means that as soon as you take away that threat of instant deactivation, it has no reason to act safe anymore.

So, you can't teach it about the Stop Button, or it will base its goals around it (or around avoiding it); you can't keep it a secret, because then you risk miseducating the robot, creating one that only acts safe so long as you wield the threat of deactivation. You can't make it want the button to be pressed, because then it will either press it or make someone press it, and you can't make it Not want the button to be pressed, because then it will try to stop you pressing it, meaning it might as well not be there!
In fact, some A.I. Safety experts will tell you that the only solution to this problem is not to put a Stop Button on a smart robot in the first place. The solution to A.I. Safety isn't a big "off" button, it's a lot more complicated than that, involving more in-depth safety measures. A.I. Safety experts offer a lot of potential issues and solutions, and they talk about Alignment, and the difficulty (yet necessity) of programming Utility Functions, as well as some of the more extreme threats of autonomous weapons and military droids. But, at the end of the day, what this means is that if we ever create Artificial General Intelligence, protecting humankind from the dangers it poses is going to be a lot harder than just switching it off...

Anyway, I'm the Absurd Word Nerd, and I hope you've enjoyed this year's Halloween Countdown!
In retrospect, this year's batch was pretty "thought-heavy". I enjoyed the hell out of it, and I hope you did as well, but I'm sorry I didn't get the chance to write a horror A.I. story, or talk more about the "writing" side of robots. Even though I managed to write most of these ahead of time... I didn't leave enough time to work on an original story.
Anyway, I'll work on that for next year. Until Next Time, I'm going to get some sleep and prepare for the spooky fun that awaits us all tomorrow. Happy Halloween!

Sunday, 29 October 2023

Less than Human

I think I've made it clear that I don't think Artificial Intelligence is very smart. Artificial Intelligence, at time of writing, is incapable of many of the adaptive, creative and dynamic characteristics necessary for intelligence, let alone independent thought.

However, I have been a little unfair since I'm basing my conclusions on our current technology and our current understanding of how this technology works. I think that's important, because it helps to disabuse people of the marketing and promotion that a lot of A.I. systems are currently receiving (never forget that this software is being made for sale, so you should always question what their marketing department is telling you).
But, technology can be developed and advanced. Our latest and greatest A.I., whilst flawed, are as impressive as they are because of the developments in machine learning.
So, I don't think it's impossible for us to ever have machines that can think and learn and be intelligent. We definitely don't have them yet, but it's not impossible.
See, there are actually three levels of A.I.:
Narrow (N.A.I.) - A.I. built to do one specific thing, but do it really well.
General (A.G.I.) - A.I. built to think, capable of reacting dynamically and creatively to novel tasks and problems.
Super (A.S.I.) - A.I. built to think, develop and create beyond human capacity.

At the moment, all we have are Narrow A.I. - programs designed to "respond in English"; programs designed to "create art based on prompts"; "drive safely" or "provide users videos that they click on".
Now, they do these pretty well (some better than others) and the output is so impressive that it's led to the new social interest in A.I., but it's still just Narrow A.I., doing one task as well as it can.

I don't even want to consider Artificial Superintelligence, since that opens up a whole world of unknowns so odd that I don't even know how to consider the possibilities, let alone what to do about them. So, we'll set that aside for the boffins to worry about.
But, I do think Artificial General Intelligence is possible.
We do have the issue of the Chinese Room problem - that computers currently don't have the capacity to understand their programming. However, I do see two potential solutions to this:
- The Simulation Solution
- The Sapience 2.0 Solution

The Simulation Solution is a proposal for a kind of artificial general intelligence which I see as more achievable, especially as it sidesteps the Chinese Room problem, even if some of its methods are impractical at time of writing.
Basically, if we accept that our computers will never truly understand what we teach them, we could stop trying to make them understand, and focus instead on creating an amazing recreation (or simulation) of a thinking being.

The Sapience 2.0 Solution is more difficult, but it would achieve a more self-sufficient AI (and potentially save computing power). In this option, we do further neurological research into how cognizance works, to the point that we can recreate it, mechanically.
This is much harder, since it requires research into a field that we've been investigating for thousands of years, and we're still not sure how it works. But, our understanding of neurons gave us neural networks, so it figures that a greater understanding of consciousness could help us develop conscious programs.

So you see, I'm not always a Debbie Downer, I think we can advance our technology eventually.
That said, if we do manage to develop thinking machines, we will have to deal with a whole new slew of issues. Because, yes, there is a risk of robots hurting us, but I'm more concerned with us hurting robots.

That's part of what roboethics is about. It mostly focuses on the ethics of using robots, but it also matters how we treat robots.

As I said in an earlier post, robots make perfect slaves because they're technically not people; but, if a robot can think for itself then it's no longer merely an object - I would then call it a subject. And if it's a subject, I think it deserves rights, I think it deserves the right to self-govern, freedom of movement and the pursuit of happiness (or, if emotion isn't a robotic characteristic, let's call it "pursuit of one's own objectives").

But, we already think of robots as objects, as slaves, as our property - we already see robots as less than human and that's the rub. I am of the belief that all persons deserve rights, and I believe that a sufficiently sapient robot is a person... but not all people would believe that. For fuck's sake, some morons don't think of people as people if they're the wrong colour, so do you really think they'll accept a robot?

But, even if we accept that a robot has rights, we still have issues... one of the main features of a machine (even a sapient one, if we build it) is that it's programmable - and reprogrammable. If that's the case, how can we ever give them the right to the pursuit of one's own objectives? After all, with just a little tweaking, you can change what a robot's objectives are.
And yes, I think that we should respect whether or not a robot gives consent... but if I can rewrite a robot's brain to make it want to consent, is that really consent?

And this is just as complicated, perhaps even more complicated, if we include the previous possibility, a simulated thinking machine.
Because, if something can perfectly emulate a human person, including presenting the capacity to learn, to have goals and to feel pain, then does it deserve rights?
Although, this goes down the rabbit hole of: if something is a perfect simulation, is it truly a simulation... I'm not sure.
If so, do they deserve rights? If not... then, by what measure do we grant human rights? Where do we draw the line? Because if something can perfectly emulate a person that deserves human rights, yet we don't grant them rights, then how can we decide when something doesn't deserve human rights?

I don't have answers to these questions, and for some we may not have the answers for a very long time, or there may be no definitive answers... it may be subjective. After all, some of this is just a question of morality, and morals are inherently subjective.

So you see, while I've been talking about inhumanity and how inhuman creations can treat us inhumanely, we must stay vigilant of our own biases.
Just because something is inhuman, that doesn't mean we should not treat it with human empathy, human respect and human rights.
She who faces inhumans should take care lest she thereby become inhuman.

I'm the Absurd Word Nerd, and whilst some think my views extreme - giving sapient robots rights - you could also consider the other extreme. I've seen people caress their cars with love, insult their computers in anger or apologize to their property out of regret. It could be seen as a malfunction of empathy (I've even called it that myself), or it could be a kind of dadaist respect - kindness for the sake of kindness. Either way, Until Next Time, try to treat one another with kindness, whether you think they deserve it or not.

Saturday, 28 October 2023

A Literary Analysis of ChatGPT3.5

I've said plenty enough times throughout this countdown that robots cannot be creative, that they cannot replace the human mind when it comes to some creative tasks, such as drawing, acting and writing.
"But wait," I hear you say "A.I. can write. Heck, I can ask ChatGPT to write me a poem about 'a monster that's lonely because it killed everyone', and it will do it - look!"
In shadows deep, where moonlight fades,
A lonely monster, in solitude wades.
It's heart heavy with a dreadful cost,
For in its wake, all lives were lost.

A creature born of darkness and night,
It brought an end to the world's light.
But in its solitude, it weeps and moans,
A lonely monster with empty thrones.

No solace found in its reign of fear,
For loneliness is the cost it holds dear.
In the silence of its desolate domain,
The lonely monster is left with pain.
- ChatGPT3.5, prompted by Matthew A.J. Anderson
But my response is twofold, dear reader. Firstly, that wasn't the A.I. being creative, that was ME being creative. I gave it the prompt, based on an idea that I thought was cool. I was the creative one, not the A.I.
Secondly, I don't think that's very good... I can understand why someone would be impressed by that, especially if you're not a writer or a poet yourself, but that's not a good poem. I'm not saying that ChatGPT3.5 can't ever be used for writing - some editing could probably improve a short poem like this, to fix the meter so that it doesn't skip jankily through iambic tetrameter and make the word choices stronger, with a bolder finish. But, as it is, this is not a good poem.
In fact, I'm willing to put my money where my mouth is. I've actually done some preliminary research to prove that ChatGPT3.5 is a bad writer.

I'm a published author, and in my time I've also beta-read, critiqued and edited dozens of stories. So, I decided that I would get ChatGPT3.5 to write me fiction, and I would critique and analyze it. I wanted a large sample size, but one small enough that one person could critique it, so I asked ChatGPT3.5 for drabbles - stories of exactly 100 words. I had initially planned on asking for 100 stories, but the website started to slow down a little, and I figured that 25 was still enough that I could get some fun percentages out of my data.

So, my method was, I simply asked ChatGPT3.5 "do you know what a drabble is?" When it responded saying that it did, I prompted it by saying: "Write me one"
And it did. I then said "write me another", and I repeated that same prompt another 23 times. I didn't want to give ChatGPT3.5 any input from me, because that would influence the output. I just wanted it to write me a story, based on its own programming/machine-learning of how best to do that. Also, ChatGPT3.5 didn't give these stories titles, so I will refer to them by their number (in the order ChatGPT3.5 generated them for me).

Then, I decided to analyze these stories, but I wasn't sure how best to do that, so I asked ChatGPT3.5 for a rubric based on a high-school teacher's creative writing assignment. I felt this was the fairest way to find a rubric, since ChatGPT3.5 had provided its own standards for judgement.
ChatGPT3.5's rubric said that papers ought to be graded on at least four criteria (paraphrased):
"context/creativity" - did the student come up with their own story idea, and write it in a way that brought that idea across clearly?
"sequence/structure" - was the story written with a beginning, middle and end, and did the structure support the story being told?
"poetry/proficiency" did the student display an astute use of vocabulary and poetic devices to express their story effectively?
I decided to evenly weight these, with a potential of 0-3 points, based on how well it met that criterion:
(0) Did not meet standard.
(1) Met Standard, technically.
(2) Met Standard, skillfully.
(3) Met Standard, excellently.

Now, yes, that's only three criteria, but the fourth criterion was "spelling/grammar", and I didn't think that was a relevant measure, since ChatGPT3.5 is a robot trained to produce perfect spelling and grammar - besides, that's not what this test is about. I want to know if ChatGPT3.5 can write a good story, not if it can write a good sentence. So, I replaced that criterion with one of my own: "Did I like it?"
This is highly subjective, so I weighted this one with only 1 point:
(0) I did not enjoy the story.
(1) I did enjoy the story.
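
To be concrete, here's the whole grading scheme as a little Python sketch (the grade function is just my rubric written out, not any real tool):

    # Three criteria scored 0-3, plus 1 bonus point if I liked the story,
    # for a total out of 10.

    def grade(creativity, structure, proficiency, liked):
        for score in (creativity, structure, proficiency):
            assert 0 <= score <= 3, "each criterion is scored 0-3"
        return creativity + structure + proficiency + (1 if liked else 0)

    print(grade(3, 3, 2, liked=True))   # 9 - Drabble 04, my favourite
    print(grade(0, 1, 0, liked=False))  # 1 - Drabble 20, my least favourite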

This meant that each story could be graded on a score between 0 and 10. I also analyzed each of the stories for their theme and provided notes based on my analysis, but we'll get to that after the data. So, let's start with the numbers.

Here's my data, and I'll discuss it in detail in just a moment:
 #   Themes/Morals                     Like?  C/C  S/S  P/P  Ttl  Notes
 01  Love, eternal                     N      1    3    2    6    fine structure, kinda dull
 02  Destiny/Fantasy, eternal          N      2    1    1    4    rushed
 03  Beauty in Chaos/Power of Art      N      1    1    2    4    resolution out of nowhere
 04  True Stories > Fiction            Y      3    3    2    9    cool.
 05  The Cosmic Frontier               N      2    1    1    4    our first man, nameless
 06  Beauty in Chaos                   N      1    1    1    3    first repeat
 07  ???... "Kindness of Strangers"?   N      2    2    1    5    good conflict, no theme
 08  Beauty in Nature (I think)        N      1    1    1    3    kinda meaningless
 09  Beauty in Nature                  N      2    1    1    3    repeat, again.
 10  Beauty in Chaos                   N      1    1    1    4    10 copied 3's homework
 11  Power of Art                      N      1    1    1    3    all tell, little show
 12  Love, Boundless                   N      2    1    2    5    good idea, bad ending
 13  Great Work reaps Great Reward     N      1    1    1    3    nameless dude 2
 14  Beauty in Chaos                   N      0    1    1    2    fucked the moral up
 15  Beauty in Chaos                   N      2    1    3    6    cute, but dull
 16  ???... "Books are Cool" I think?  N      0    1    1    2    totally meaningless
 17  Let Go of Desire                  N      3    1    2    6    thanks, I hate it
 18  Beauty in Chaos                   N      1    1    1    3    computers love nature, I guess
 19  (spiritual) Beauty in Chaos       N      1    1    1    3    a machine's view of spirituality
 20  ???... "Free your Dreams"?        N      0    1    0    1    ugh, you fail
 21  Beauty in Chaos                   N      2    1    1    4    PLAGIARISM! - instant fail.
 22  Beauty in Chaos                   N      1    1    1    3    bored of these...
 23  Love, boundless                   N      1    1    2    5    "nature's wedding"? cute
 24  Some Treasure should be Secret    N      2    1    2    5    Take the Gem!
 25  Embrace Change                    N      2    1    2    4    bleurgh...

 AVERAGE                               Y: 4%  1.4  1.2  1.36 4

Across the board, ChatGPT3.5 was technically proficient, but nothing was truly impressive. Based on my analysis, ChatGPT3.5 scored an average of 4/10.

For Content & Creativity, it scored 1.4 - below average; this was lowered mostly because the stories tended to be very basic, using very broad themes. The most common theme was "Enjoying Life's Beauty", featured in 48% of stories; the most common subcategory was "Finding Beauty in Chaos", at 32% of all stories, with another 12% carrying the basic "Enjoy Nature's Beauty" message.
The second most-common theme was "Love, Eternal", with 12% of all stories being love stories with the moral that "love will last forever".
There were also quite a few stories where the moral was about the power of art. I don't lump them together since it was varied enough to be distinct, but there were stories about "The power of stories", "arising community from art", even one I didn't understand whose meaning appeared to be "huh, ain't books neat?"
Actually, if I compiled all the stories where the meaning was hard to grasp, or asinine, that's actually the second-most common theme, with 16% of stories having no real purpose, as far as I could tell.

For Sequence & Structure, it averaged out at 1.2, the lowest score overall, mostly because although technically proficient, it used the EXACT SAME structure every time:
Introduce character in the middle of a scene. One thing happens. Character has an epiphany. Conclude with character's epiphany.
Yes, that technically fulfils the brief, but there wasn't a single variation. No in medias res; No action; No drama; All Tell, No Show; No dialogue - okay that's a lie, there was a single line of dialogue in Drabble 20, but that was also my least-favourite story, scoring the lowest at just 1/10, and the only other dialogue was in Drabble 02, which scored 4/10, so maybe it's better to avoid dialogue... But this was a drag to read. I was genuinely surprised, I knew that I might get some repeats, and I thought I'd find the strings pulling the puppet, that I'd be able to identify the basic templates of the stories. I didn't realize it would only have one template.
In fact, repetition is the name of the game.
If you like female representation, you might appreciate that 76% of the "main characters" in these stories were female (or, at least, identified with feminine pronouns). However, what's less impressive is that 36% of the main characters were named "Sarah". And, when it comes to repetition, well, we'll discuss that when we get to the Case Study.

For Poetry & Proficiency, it averaged out at 1.36, with some evocative imagery and good use of vocabulary... sometimes. Most of the time, it was incredibly repetitious. Drabbles 06, 09, 10, 13, 16, 18, 20 & 21 all began with the line "In the heart of...", which became annoying after the third time. 6 of the stories took place in a city, 5 of them were described as "bustling". 5 of the stories took place on the edge of the ocean, 5 took place in a forest, 3 in a library. It was all so samey.
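
As an aside, if you wanted to replicate that kind of tally, a few lines of Python would do it - this sketch assumes, hypothetically, that each of the 25 stories was saved to its own text file:

    # Tally how the stories open. The filenames are hypothetical - this
    # assumes each of the 25 drabbles was saved as its own text file.
    from collections import Counter

    drabbles = [open(f"drabble_{n:02}.txt").read() for n in range(1, 26)]
    openings = Counter(" ".join(story.split()[:4]) for story in drabbles)

    for phrase, count in openings.most_common(3):
        print(f'{count} stories open with "{phrase}..."')
    # In my batch, "In the heart of" would top the list at 8 of 25 stories.
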
I will say though, I was impressed with some of the more unique locations: One took place in outer space, another on a ship in a storm, one was in a desert, one was on a mountaintop monastery and one was on a remote island. However, each one of these, despite their unique locales, wasn't as interesting. I don't know how ChatGPT3.5 writes stories (and neither does it, but you'd know that if you've read my previous posts), but I wonder if it has more tenuous connections in its neural network for these more esoteric locations - perhaps it has fewer of them in its dataset, so it has to link them to a more generic theme or moral, like "finding beauty in chaos" for the tenth time.

But okay, let's look at the Case studies. I'm only looking at a few of these, in particular the ones with very high or very low scores.

Let's start with my favourite story:
Drabble 04
The old bookstore on the corner held secrets within its dusty shelves. Clara, an avid reader, ventured inside on a rainy afternoon. She pulled a weathered book from the shelf and blew off the dust.
As she opened it, a tiny slip of paper fell out. It was a handwritten note, a love letter from a bygone era. Clara's heart ached reading the tender words of affection and longing. Who were these people? What stories lay hidden in the pages of their lives?
In that moment, Clara realized that every book held not only its own story but a world of untold tales waiting to be discovered.
That's the one and only story that received a 9/10. I marked down the poetry, because the vocabulary felt a bit underwhelming: "avid reader", "bygone era", "dusty shelves" of books - it was all a bit too cliche for my liking. But, I think this is a genuinely great story. I wonder what training data ChatGPT3.5 used to write it, because I'd be interested to know where it got this from.

The only part about this story that I really don't like is that it was number four - I liked the fourth story, so I had high hopes that I might actually like several more of these stories. I did not.

Now, let's look at some of the "unsure meanings", starting with:
Drabble 07
The old lighthouse stood tall on the rugged coast, its beacon cutting through the stormy night. Captain James had sailed these treacherous waters for years, guided by that familiar light.
Tonight, however, the lighthouse had dimmed, a signal of danger. Waves crashed against the ship, and the crew fought to keep it steady. With every gust of wind, James scanned the horizon, searching for a way through the tempest.
Then, a flicker of light. The lighthouse keeper, though frail and weary, had rekindled the flame. With a newfound hope, James steered his ship to safety, grateful for the unwavering dedication of a stranger in the night.
This is alright, I gave it 5/10, but I didn't like it because I didn't really see the point. Like, yes, that's what lighthouses were for. But, do you have anything to say about that? Perhaps I'm more critical than most, but I think this is an example of a "well, gee, ain't that something?" type of moral. Yes, life would suck if people didn't do their job... "we live in a society" and all that. But, this story doesn't seem to mean anything. We don't know what James was doing, who the lighthouse keeper was, or why it went out in the first place, so... I'm just left stumped. I guess it's saying that it's good that lighthouses used to exist...? Well, gee, ain't that something...
Let's look at some more incomprehensible morals:
Drabble 14
In a forgotten corner of the garden, a single rosebush stood, its blossoms vibrant and untouched. Maria, an avid gardener, cherished this hidden gem.
As she tended the rosebush, she marveled at its resilience. it had weathered storms and neglect, yet it continued to thrive, its petals unfurling in defiance of adversity.
The rosebush became a symbol of hope for Maria, a reminder that beauty could emerge from even the most challenging circumstances. In its delicate blooms, she found strength, and in nurturing it, she discovered the power of patience and unwavering care.
2/10. Why? Well, read the story again and you'll see that it makes no fucking sense. This is Maria's garden, and despite being an avid gardener she had one part of it that she forgot about? Okay, that's a thing. Then, because a pretty rosebush survived her neglect, that brings her hope. But then the moral of the story is that she found strength in her patience and unwavering care. Firstly, unwavering? She forgot the fucken thing! Second, I thought this gave her hope in its strength and resilience, so how can you support that message if she then needed to nurture it to keep it alive? Your moral is backwards, and makes no sense.
But here, let's look at my least favourite story of all, whose moral was just confusing...
Drabble 20
In the heart of a bustling market, an elderly street vendor sold colorful kites. Timmy, a young boy, watched in awe as one soared in the sky, tugged by the wind.
His eyes met the vendor's, and the old man offered him a kite: "Here, my boy, let your dreams take flight."
With the kite in hand, Tommy raced in the open field. He released it into the breeze, and for a moment, it hung in the air, defying gravity. Timmy felt the exhilaration of freedom and possibility.
In that simple act, he learned that dreams were like kites-sometimes, all they needed was a little push to soar.
This story is entirely ridiculous. For one, this is a street vendor in a marketplace, why is he giving kites away for free? And this bullshit about dreams comes out of nowhere. And, like Maria before, this makes no sense - I thought we were in a bustling market, where the fuck did an open field come from? And, although the story never says his age, Timmy is a young boy, I'm pretty sure kids fly kites because "it's fun", yet Timmy is having an epiphany that you shouldn't let your dreams just be dreams... I hate this. There was no story, the vocabulary was bland, the poetry didn't help the story. If it wasn't for the fact that this did have a coherent beginning, middle and end, this would have gotten a zero. These aren't people, they're not even characters, they're puppets that perform actions. But, why? I can't see meaning here, only action. Just because you always Tell (and don't Show) your moral as your conclusion at the end, that doesn't mean you have a coherent moral. This story certainly doesn't. I knew this would be bad, but I didn't know it would be this bad...
Oh, and I have to include this one, just because of all the stories, this was the most frustrating - I thought I was going to like it.
Drabble 24
On a remote island, Emma discovered an ancient temple hidden in the jungle. Intricate carvings adorned its weathered walls, telling stories of forgotten civilizations.
As she explored, she uncovered a hidden chamber. Inside, a single, glowing gem lay on an ornate pedestal. It pulsed with an otherworldly light, filling the room with an ethereal glow. In that moment, Emma felt a connection to the mysteries of the past and the magic of the present. She left the gem undisturbed, a reminder that some secrets were meant to be preserved, and some treasures were best left undiscovered.
WHAT?! Take the fucking gem! Oh my god, you fucking plotblocked me, you goddamned tease. I was genuinely invested in this story - so few of these stories explored fantasy or science-fiction, so introducing a glowing gem actually caught my attention. But then she LEAVES IT UNDISTURBED?! Fuck you, story. And it's not just that I disagree with the moral - I find a lot of the "love is forever" morals disgusting, because they are unrealistic and present an unattainable goal (you can't love forever if you don't live forever), but at least those were coherent.
But I hated this one because, whilst I disagree with the moral as presented, it could have been awesome! Why was she even exploring a lost temple in the first place, if she believes that you should leave secrets preserved? If she'd stumbled upon it, that might make more sense, but she was exploring! It's not even like there was some hint that this was intruding on native land, since these civilizations died off in time immemorial, and it's not like the power of this gem was dangerous or evil. So, why did she leave it?!
I would have forgiven this if she left it because, despite its power, she was an intruder and unwelcome in this place. But, she didn't, it didn't, and so I didn't - this story is bad, and ChatGPT3.5 should feel bad for writing it.

But okay, here's where things get really obvious. I'm going to show you three stories, back to back, so you can see this for yourself, because when I read this I was genuinely shocked. Here's Drabbles 03, 10 & 21...
Drabble 03
Amidst the bustling city, Lily found solace in a tiny park. On a worn bench, she watched the world rush by. Her sketchbook rested on her lap, and charcoal in hand, she began to draw.
With each stroke, the chaos around her faded, replaced by the tranquility of her creation. Her lines captured the essence of the city, its energy and contradictions. People stopped to admire her work, and for a moment, they shared a connection through art.
In that small park, Lily discovered that even in the heart of urban chaos, beauty could be found and shared, one sketch at a time.

Drabble 10
In the heart of the bustling city, a street artist painted a masterpiece. Passersby paused to admire the vibrant colors and intricate details taking shape before their eyes.
Among the crowd, a young girl watched with wonder. The artist noticed her fascination and handed her a brush. With a shy smile, the girl joined in, adding her own strokes to the canvas.
As the painting neared completion, it became a symbol of unity and creativity, transcending language and background. In that moment, strangers became collaborators, and the city streets transformed into an open gallery of shared artistry.

Drabble 21
In the heart of a bustling city, a street artist painted a mural on a blank wall. Passersby paused to admire the vibrant colors and intricate details taking shape before their eyes.
Among the crowd, a young girl watched with wonder. The artist noticed her fascination and handed her a brush. With a shy smile, the girl joined in, adding her own strokes to the mural.
As the painting neared completion, it became a symbol of unity and creativity, transcending language and background. In that moment, strangers became collaborators, and the city streets transformed into an open gallery of shared artistry.
That's clearly the same damn story, three times. Hell, the last two were even written identically, bar one change. I thought I'd get repetition, but I thought that the random number generator was better than this. Three stories, with the moral of "community, through art". Also, this was the first time that I realized that none of these stories seem to be drabbles...
See, a drabble is a story that's exactly 100 words - that's what makes it a drabble - but when I noticed that Drabbles 10 and 21 were identical except for replacing "masterpiece" with "a mural on a blank wall", it hit me that they couldn't both be exactly 100 words.
I didn't count them because I had assumed that ChatGPT3.5 would be able to stick to a wordcount, since it's a computer (a calculator), it knows how to count. But, that's the thing... ChatGPT3.5 doesn't know how to count, because it wasn't designed to count, it was designed to write grammatically correct sentences and paragraphs, based on a prompt. This is also the reason why it fucked up the meter on that poem earlier, it can't count the meter because it's not writing a poem, it's just putting one word after another, in a way that the neural network defined as "fitting the prompt of 'poem'", so it doesn't realize it screwed up the poem's meter, just as it never realized that almost none of these drabbles were actually 100 words. They were close - probably because it does have a fair sample set of drabbles in its training data - but, if you tried to submit any of these to a drabble contest, you would fail. Even my favourite, Drabble 04, was actually 107 words.
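For the record, checking that rule is trivial for ordinary software - which is rather the point. Here's a sketch (the filename is hypothetical; the 107 comes from my own manual count):

    # Counting words is trivial for ordinary software - which is the point:
    # ChatGPT3.5 predicts plausible next words, it doesn't count them.

    def is_drabble(text):
        return len(text.split()) == 100  # a drabble is exactly 100 words

    story = open("drabble_04.txt").read()
    print(len(story.split()), is_drabble(story))  # 107 False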

Thankfully, I'm not judging this off wordcount, I'm judging it off literary merit. But, I'm afraid that it fails there too.
These stories were "interesting", mostly as an exercise in analyzing how a computer puts a story together, and genuinely I think you could use these as prompts for your own story. Write a short story about a man in space. Write a drabble about an artist in the city. Write me a tale set in a neglected garden. Write a story about twelve women called Sarah who discover each other, and realize they've all been cloned by a machine.
But, whatever you do, don't use artificial intelligence to write your stories for you. Unless you're fine writing a mediocre-to-bad story, 80% of the time. Because, as I said, you need creativity to be a writer. Computers aren't creative, they just do as they're told, and if you tell them to be creative, well, all you get is this.

I'm the Absurd Word Nerd, and Until Next Time, why don't you challenge yourself to write a drabble? Maybe one about a woman discovering an ancient temple, and when she discovers a glowing gem, she ACTUALLY TAKES THE DAMN THING! Yeah, I'm still not over it...

Friday, 27 October 2023

Automated Incompetence

One of the threats that many people fear from A.I. and robots is a threat to the workforce, with A.I. and automation taking everyone's jobs. This is known as "technological unemployment": when someone loses their livelihood and income due to being replaced by a machine that can do their job quicker, better or cheaper.

I think I've made it clear in my previous posts that I don't think it's possible for a robot to be superior to a human being. Humans can outperform robots most of the time, though tasks which require little effort from a human can easily be replaced or reduced by productivity-improving technologies.
I'm not saying that this means those jobs aren't significant, or that those workers who lose their job to a robot don't matter; I'm simply saying that the abilities of our machines currently coincide with jobs requiring mostly physical labour.
In my opinion, it's better to risk a machine on dangerous or repetitive labour than a person; however, I also accept that these decisions are driven by an employer's capitalism, not their compassion.

And robots can only do certain jobs well, meaning we have nothing to fear from robots when it comes to performing mental labour. Even our most advanced artificial intelligence (at time of writing) can't reliably perform these tasks anywhere near as well as a human.
So, that's it right? Robots can't take our jobs.

Unfortunately, no... Just because a robot can't do your job doesn't mean it won't.

Robots, for example, tend to write in stilted ways that people don't find convincing, yet they're already taking (or threatening) writers' jobs, being used for copywriting, advertising and content generation. Robots can't empathize - can't even feel emotion - but there have been robots made to assist nurses and caregivers and chatbots designed to use talk therapy for helping those with mental illness. Robots have no opinions, no emotions to express and no artistic style, yet they're being used to create digital art.

Yes, a robot can talk, listen or paint a picture, but that doesn't mean they do these things well - just because something has feathers doesn't mean it can fly. But when I said in the post titled Robots are Not your Enemy that robots aren't the bad guys, I meant it. Robots aren't the ones who want to take your jobs away from you - it's always an employer that wants to do that. And not all employers realize how flawed robots are...

Actually, as much as I want to believe in our collective humanity, and say that employers are wide-eyed idealists that don't realize the deficits of artificial intelligence... I don't really believe that for a second.
I think employers know that robots aren't great at these jobs, but they don't care. Employers who even consider the use of robots clearly don't value their employees' work - if they did, they wouldn't see them as so easily interchangeable - so, why would they care that a robot does a poor job?
I believe that there are a lot of employers - hell, a lot of entire industries - that will gladly fire some or all of their workforce if it means they can save pennies on the dollar.

This is the future we're looking at, in the coming years - and it's not only going to happen, it is happening.
I'm a writer, so I care about writing and the future of literature and art. Even though I had no love for BuzzFeed, I was horrified to learn that they fired several writing staff to replace them with A.I. And I didn't want to believe that a writer would use A.I. to write their stories, but I was wrong. Clarkesworld, a venerable science-fiction magazine that allows writers to submit stories for publication, had to close their story submissions in February this year, after their inbox was flooded with A.I.-generated stories. According to the editor-in-chief, Neil Clarke, over 1,200 stories were submitted to them at once, and 500 were clearly written by A.I. Their submissions have since reopened, but they've changed their policies to say they won't accept A.I.-generated stories, and anyone even suspected of trying to submit a story written with A.I. will be banned and blacklisted.

I didn't think writers would stoop so low, but it may not even have been writers. After all, Clarkesworld is a pro-rate magazine: they offer 12 cents per word, for wordcounts of 1,000 - 22,000 words.
That's a potential payday of $120 - $2,640; I can see why an immoral opportunist might try to take advantage of that, if they could - and A.I. provided an opportunity for these grifters to try.
Although again, maybe I am letting my bias persuade me... maybe there are writers who have no love for the craft.

So, is this the true A.I. Apocalypse? Rather than networks setting off nukes, kill-bots gunning down civilians and assassin drones hunting down resistance... society will instead slowly crumble as we replace competent workers with stupid-systems and idiot-bots, turning our manufacturing, media, medicine and military into a collection of idiots who occasionally confuse a pedestrian for an "unknown object", identify a ruler as a tumour or even mistake clouds for ballistic weapons.

Admittedly, that's an extreme example scenario, and I don't think the world will be destroyed by moron machines. However, I do think that this is our near future, and one that will get worse as time moves on if we're not careful. Already, we're seeing more and more "A.I. Accidents", with a growing database of incidents, and whilst I don't think this is the end of the world, if we keep giving jobs to A.I. that it cannot competently complete, then it is going to hurt us in one way or another.

In conclusion, I'm the Absurd Word Nerd, and I hope that these are merely teething troubles that we can resolve before letting A.I. take over too much of our lives. But, if we can't iron out these kinks, then I think it may be a sign that this A.I. revolution isn't truly a world-changing development. Rather, it may just be yet another technological bubble - a load of hype that will eventually burst in our faces.

Thursday, 26 October 2023

Inhuman Monsters - Part Two


This is Part 2 of a listicle exploring the variety of inhuman movie monsters.
As a quick reminder, there were three rules, and two guidelines:
1. No Ghosts
2. No Animals
3. No Talking

a. Avoid Parody
b. Avoid Repeats

For my reasoning as to why, check out yesterday's post. But, for now, no more dilly-dallying, let's continue the list! This is...

The A.W.N.'s TOP 10 HORROR MOVIES ABOUT INHUMAN MONSTERS (5-1)

5. The Mist

The Monster: Volatile Alien Atmosphere
If you want to talk about inhuman, well, this is it. I liked the idea of exploring alien monsters, but I wanted something that hinted at the truly unnatural, and I couldn't go past this movie about a weird mist. After a thunderstorm, several of the locals from Bridgton, Maine (do I even need to say this is a Stephen King adaptation?) meet at the local supermarket to pick up supplies, but whilst there, a strange mist descends on the town. There's a warning siren, and most of the people decide to stay inside the store and wait, since the mist is too thick to navigate. However, when anyone goes outside, they're attacked by carnivorous tentacles that drag them into the mist, or killed by giant insects or monsters.
See, this isn't just any miasma, the mist is in fact an alien atmosphere from an alternate dimension, populated by otherworldly wild animals, that seeped into our reality. The aliens aren't evil, they're just hungry animals seeking out a new (and abundant) prey that can't defend itself, so it's not merely these weird creatures that are the monster. Rather, it's this invasive atmosphere which the creatures tend to remain within, likely because they can't breathe our atmosphere. The movie heavily implies that this mist appeared because of a military experiment (called the Arrowhead Project) that opened the portal to this dimension in the first place, but either way, the idea of being overrun by an alien dimension is just incredible. Instead of a military invasion from an advanced alien species that wants to take over our planet and so flew here from outer space, this is an accidental invasion from several primitive alien species that are just hungry, and they're only here because some idiot left the door open for them. And, you have to admit, there's something fundamentally horrifying about being surrounded by monsters you can't see until they're jumping out at you.

4. The Stuff

The Monster: Carnivorous Diet Yoghurt
This is a fascinating movie, but I fully admit that this wins its placement on the list from concept alone. So, what's the idea? Killer dessert. After quarry workers discover natural pools of a white, cream-like substance, they find that the goo is both sweet and addictive. Soon after, a dessert company starts selling it to people as a diet alternative to ice-cream, since people who eat it lose weight, and they call it "The Stuff". I have to admit, if there's one issue with this movie, it's the name, but I'm not really sure what you'd call this stuff either... see, the Stuff is an organic parasite, and the reason people lose weight when they eat it is because it's feeding off them from the inside, draining their nutrients and lifeforce. Also, the reason it's so addictive is because it creeps into your brain to control your mind, making you eat more until it drains your body of nutrition, turning you into a hollowed-out zombie full of more of the Stuff.
I don't know if I can recommend the movie since it's a little slow, and one of the main characters is a paranoid, militant right-wing bigot, who is portrayed as one of the heroes, despite being openly regressive and racist. Also, this does lean a little into the comedy, but the comedy isn't as impressive as the satire - this predatory foodstuff is clearly an allegory for the insidious and predatory practices of the food industries, especially for confectionery and desserts, doing anything to spread their products, or market their food as "healthy" so long as it makes them more money. Sure, the Stuff is predatory and it seeks out food when any living thing gets close to it, but it wouldn't have been able to spread anywhere near as far as it did, if it weren't for greedy corporations packaging it, selling it and distributing it worldwide.
But, as much as this film is clearly an allegory, I still don't think you can beat a concept as disturbing as a foodstuff that, when you eat it, it gets revenge by eating you right back.

3. Final Destination

The Monster: Death Itself
We're in the final stretch now, and we continue the list with one of the most inhuman monsters I've seen portrayed in a movie. In Final Destination, a teenager receives a deadly premonition, and in his panic causes a commotion that saves several other people from a tragic explosion. Following this, all of these people begin to die in unusual accidents, and it's revealed that by cheating fate, they've interrupted Death's Plan, and Death itself is trying to correct its mistake. It's even revealed in the crash investigation that the cascade failure in the plane would have killed everyone off in a particular order, and the order in which they're dying now is the same as that original design. Death is not personified in this film, although it does appear as shadows and reflections, sometimes even foreshadowing its redesigns as Death rewrites fate. Despite this, Death has something of a personality, one that's almost playful as it puts its devious plans into action, although it does seem to get vindictive the more people resist its plans.
Whilst some of the sequels had poor writing, leading to unrealistic characters and deaths so ridiculous that they felt more cartoonish than creative, the original film had a great premise, well-executed, and portrayed Death as a dedicated master of fate that only turned monstrous if you stepped out of line. Now you may think "but wait... if this is the most inhuman monster, why is it only number three?" Well, this is the most interesting inhuman monster, in my opinion, but ultimately, I don't think it's that scary. In all the movies, it's made clear that you can't beat it, and taking away the hope of survival makes it (in my eyes) less scary, which is part of the reason I hate the sequels so much - it's a foregone conclusion. But, the first one is still a bloody good film, which is how it gets to number three on this list.

2. Christine

The Monster: A Hatred-Driven Car
There have been several stories about living cars, even several about living cars that kill people, but they pale in comparison to the ultimate monster motor vehicle, Christine. What sets Christine apart is threefold. Firstly, the story is a powerful tale of obsession and corruption. The story follows Dennis, a highschooler whose dorky friend Arnie buys a broken-down old Plymouth Fury for just $250 after the previous owner killed himself. As Arnie restores the car, his personality starts to change, until he's obsessed with the car, and the whole while the car kills anyone that gets in its way, or hurts either it or Arnie. Dennis starts to worry that his friend is going down the same path as Christine's last owner - that he's soon going to become yet another one of her victims.
Secondly, what sets this apart from other killer cars is that Christine is not haunted, it's not possessed by a demon, it's not even a secret alien transformer or machine - Christine is just Evil. That differs a little from the novel, where apparently the car is haunted, but unlike The Mangler, where I felt they changed the story for the worse, I think this improves the story greatly. From scene one, at the Plymouth car factory, Christine is already shown to have a taste for blood and vengeance. It's implied that this car is inherently hateful (dare I say, full of 'Fury'?), only ever using her drivers to feed off their lifeforce - and that's another difference, that she lives off her owners, rather than her previous owner living on through her. That seems to be her greatest ability: after feeding off Arnie's lifeforce for long enough, Christine becomes powerful enough to repair herself. In fact, Christine will often resort to damaging herself on purpose, just to chase down one of her victims, which I think helps to evoke just how much she is driven by her hatred.
Thirdly, this is the best living car movie because, despite being made in 1983, this still holds up today. This is yet another John Carpenter film, and it's just as thrilling, creepy and action-packed as ever. Perhaps it's just because a tale of obsession leading down a path to madness and death is a timeless one. After all, what dangerous paths could the sweet seduction of power not lead us down? I don't know, but in this film, danger is a highway, and Christine will drive you all the way to the end.

1. Oculus

The Monster: A Reality-Warping Mirror
I like monsters with teeth. I like creatures that will stalk up behind you and attack, but when it comes to inhuman monsters, I'm much more mesmerised by a monster that doesn't even need to touch you. And I think the epitome of that is the Lasser Glass, from the movie Oculus. In this film, a man named Tim is released from psychiatric care, after finally coming to terms with shooting his father eleven years prior, only to return to his sister, Kaylie, who has finally finished all the necessary steps to steal an expensive, antique mirror and set up an elaborate trap for it. Tim has undergone extensive care to unveil all of his false memories and deconstruct all of his childhood trauma, to disabuse him of his belief in magic, ghosts and monsters. However, his sister is there to rope him into a plan to prove that the real thing that killed their parents was the Lasser Glass, a cursed mirror that has killed 45 people, including their mother and father. This has all the makings of a fantastic psychological thriller, and the movie is done well, mixing flashback with present day, reality and hallucination...
See, the way the Lasser Glass works is that it feeds off living things near it - plants, pets, even electricity and light sources and, of course, people. As it feeds, it grows in power, so that it can manipulate people into dying near it, capturing their image in the mirror to be used for its manipulation. Because, when a human is near the mirror, it can make them hear things that aren't there, see things that didn't happen, feel things that aren't real, and if it's allowed to feed off them long enough, it can even cloud all of their senses at once, to delude them into perceiving a reality that doesn't exist. It can, and will, drive you insane. This is how it kills its victims: by tricking them into doing dangerous things by hiding the danger, or hiding the tragedy that it's making them commit on others. But, despite the mirror using the images of the dead to trick you, and despite Kaylie referencing the glass itself as "haunted" at one point, I don't accept that the Lasser Glass is merely haunted. Just like Christine before it, this mirror appeared to be cursed long before its first blood. It's called the Lasser Glass because the first victim was called Phillip Lasser, but he merely hung it in his house until he died - he didn't cast any spell on the glass, and he wasn't supernaturally noteworthy. His only characteristic of note is that he was the Earl of Leicester, but for all we know he was merely the first member of the elite to die - he may not have even been the first victim. His wasn't even the most tragic or horrific death in the long list of this mirror's victims, so there's no canonical explanation as to why the glass does what it does.
But, the mirror is just made of glass - it's quite fragile, so you'd think it would be easy to just smash the damn thing. However, it uses coercion, deception and trickery to protect itself if anyone approaches it with ill intent. The mirror can see into your head as easily as you can see into its reflection. But I think what's scariest of all is what Kaylie and Tim's father says, just before he dies. This isn't much of a spoiler, but when Kaylie tries to tell him that he's lost his mind, he looks in the mirror and says: "This is me. I've seen the Devil, and he is me..."
So, what is this mirror? Is it haunted by dozens of tragedies? Is it cursed to reflect your inner demons? Is it a cold monster that feeds on the warmth and life of living things? Is it the devil, seducing anyone who looks into it to evil? I don't know... but what I do know is that this is the best inhuman monster I've ever seen in film. I highly recommend it.

- - -

And that's my list. What do you think? Do you disagree? If you think there's a greater inhuman monster or a greater movie that features one, tell me about it in the comments below. In the meantime, the main point I want to make is that movies don't have to be about serial killers or crazy people, they don't even have to be about aliens or creatures. Your monster doesn't even have to be a living thing - it could be an evil elevator, a predatory plant, an alien atmosphere, even a monstrous mirror.
The only limitation is that no matter what you choose for your villain to be, you should do whatever it takes to make your story interesting.

I'm the Absurd Word Nerd, and until next time... I've been exploring these aspects of horror, but I wonder if there are examples like this in real life. Sure, robots are not your enemy, but something doesn't have to be your enemy to be your antagonist - I might need to look into that.

Wednesday, 25 October 2023

Inhuman Monsters - Part One

I wanted to write a "listicle" for this Halloween Countdown, because all of this talk about technology, philosophy and society is very heavy. I wanted to find something that was a little easier to write and a little easier to read, that wasn't so intellectually dense.
However, I hate content aggregation [i.e. stealing content from other creators, in the name of "sharing"], so I insisted on doing my own research. This resulted in me doing even more work for this listicle than I did for the first two AI posts combined.
I enjoyed the research, it was fun, but it was exhausting. So, I genuinely hope you enjoy this, because it was way more work than I anticipated.

The idea is simple. I wanted to find some of the most inhuman villains in scary movies. I'm not just talking about immorality or insanity, since the maniacs, monsters, aliens and the undead are probably more common than humans in popular horror movies.
No, I wanted things that are so inhuman, we can't really understand how they think, or why they do what they do. One of the things I truly love about fiction is the freedom. If you want to write a story about a cheese grater, you can and someone has probably already written erotic fanfiction about it... "shred me, daddy"
Anyway, so, those were my rules:
- No Ghosts (that's just dead humans)
- No Animals (too related to humans)
- No Talking (language is a human invention - also, talking monsters tend to be written like humans, because writing something inhuman is hard)
There were two other minor rules.
+ Avoid Parody - I looked at all kinds of strange, inhuman things, like Killer Sofas, Killer Condoms, Killer Jeans, Killer Colonic Polyps, Killer Refrigerators & Killer Tyres, but they're often written to be stupid on purpose, not actually exploring the concept too deeply. I didn't watch most of them, so I can't say they all suck, but it wasn't what I was looking for.
+ Avoid Repeats - If I really wanted to, I could have filled the list with killer dolls, there's a LOT of them... (note to self, I might save that for a later Halloween Countdown). So, to make it interesting, I tried to make each item as unique as possible. I really wanted to explore the possibility of what a monster can be, and what kind of fears you come across when you're facing a monster unlike anything you've ever known.
So, let's do this, starting with the Honourable Mentions. Let's get these out of the way, they're interesting ideas, but they broke my rules.

i. Jack Frost (1997)
The Monster: Evil Ice
The fact that this is full of cheesy jokes might have relegated it to "parody", but despite that it's actually kind of clever. This isn't just a snowman wobbling around with a knife - it's killer ice. What makes Jack Frost dangerous is that he can melt, change and refreeze himself. He can slip through doors, he can grow sharp icicle teeth and claws, he can turn to steam, he can freeze people and in one scene he even crawls down someone's throat like a killer slushie.
More importantly, since he can refreeze himself, the obvious solution of "melt the monster" isn't an option. This is a cool concept, so why is this off the list proper? Jack Frost is human. He's a serial killer who gets melted into goo by "chemicals", and merges with the snow. Also, he talks... this is what inspired my "no talking" rule, actually. I realized that most talking monsters were basically humans, with a gimmick.

ii. The Happening
The Monster: Angry Plants
The idea is simple, people start randomly killing themselves all over the globe, and nobody knows what is happening - Title Drop! And the twist is that the thing causing these suicides is plants. They evolved some toxin, with vague suggestions that it has to do with humans ruining the world or something...
Setting aside the fact that that's not how evolution, plants, suicide, or basically "anything" works, the idea of some suicide-triggering bio-toxin isn't unworkable as a concept. But, you have to do it right. To me, this is scary for the same reason suicide is scary, it's an alien concept to neurotypical people.
If you did your research, you could write a story that explores suicidality and self-mutilation, the emotions and reasoning associated with them, and perhaps explore actual treatments. Instead, this movie - and others like it that explore "suicide-triggering attacks" (I'm looking at you, Bird Box) - is so poorly written that the characters aren't suicidal, they're just "self-destruct zombies". Replace the suicide toxin with "deadly poison", and you don't change any of the emotional impact of the movie. So, interesting idea, but the movie was so bad that I had to take it off my list.

But that's enough of that, so let's get things going with...

The A.W.N.'s TOP 10 HORROR MOVIES ABOUT INHUMAN MONSTERS (10-6)

10. The Caller
The Monster: A Time-Warping Telephone
In my research, I tried looking up a killer phone. I'd heard of movies like Murder by Phone; One Missed Call or even The Black Phone, but none of these featured inhuman monsters - they were either ghosts or regular serial killers (and the black phone isn't a villain, those ghosts try to help). But then I found The Caller. Mary, a divorcee trying to rebuild her life, moves into a new apartment and finds a black rotary phone that she likes the look of. But, she starts receiving calls on this phone, from a strange woman who claims to live in her apartment. Although scared at first, Mary befriends the woman, until she learns that the woman committed suicide several years ago - she's calling from the past, and her actions are changing the present. The more they talk, the more Mary learns how mentally unstable this woman is, as she keeps changing the past, even killing people.

It's a creepy idea. It's still low on this list because the main horror element is definitely the crazy woman in the past, but the main antagonist, and threat, is this phone - a phone that calls itself from the past. The killer didn't create this phone, she doesn't even realize how it works at first, and without it she wouldn't be anywhere near as dangerous. But, I admit, this might be less horror and more "sci-fi adventure" if someone else had picked up the phone, which is the reason it's the lowest on the list.

9. Down / The Shaft
The Monster: A Cyborg-Enhanced Elevator
Elevators are already kind of scary: a claustrophobic box, moving up and down a massive shaft - if those cables snap, you'd plummet to the ground. Real elevators are very safe, but what if one was evil... a true "hellevator". In this movie (called "Down" in America, and "The Shaft" in Australia) a lightning strike causes the elevator of the Millennium building to start malfunctioning. The next day, the elevator traps several women inside; after freeing them, the owner calls in elevator servicemen - main character Mark and his senior co-worker Jeff - but they see nothing wrong with it, so they leave. The next day, a blind man and his dog fall down the empty shaft, and that night one of the security guards is decapitated after getting his head caught in the doors. Mark becomes obsessed with investigating, since the elevator seems to have a mind of its own. He pairs up with sensationalist tabloid writer Jennifer after she quote-mines him for an article, and they start investigating. Now, I'm going to spoil one of the twists here, so I hope you're ready for this, it's a doozy... the elevator company has been trying to cover up the cause of these murders, because they're responsible - the elevator was a secret experiment by one of the research scientists at the elevator company, who used one of his failed military experiments, a bio-chip that uses dolphin brain matter to create a densely-packed, powerful microchip. But, after the lightning strike, the elevator grew an entirely new brain, causing the elevator to literally have a mind of its own, and apparently it's out for blood. That is insane, and kind of hilarious, but it also makes for an interesting movie. If this thing had a higher budget, it would be amazing. But, as it is, I had to put it low on the list.

8. The Mangler
The Monster: A Deified Laundry-Machine
I'm just going to warn you up-front, I looked at a whole lot of movies based on Stephen King stories for this list and quite a few even made it on. King seems to adore inhuman monsters, from vengeful trucks powered by alien radiation, to accursed hotels built on Indian burial grounds, all the way to the inevitable passing of time itself. These are some amazing stories... unfortunately, a lot of the movies are poorly done. I decided to focus on movies I enjoyed, and this one is pretty clever. This movie is based on one of King's short stories, and it's about an industrial-grade, old-fashioned linen press, called a mangle. After a series of industrial accidents, the mangle begins to act strangely, and it's revealed that some of these unusual accidents involving a young woman would have spilled "the blood of a virgin" into the machine, as part of a demonic ritual, awakening the monster. Unfortunately, this doesn't have the same horror ideas of the original short story, as they changed when the demon was first summoned, but I still enjoy this story.
Industrial factories and machinery are often inherently dangerous, and there's a huge risk of horrific accidents. It's been used in horror before: the industrial accident that kills Herbert and horribly disfigures his corpse in The Monkey's Paw; the character in The Machinist with an arm mangled in a machine, and another with toes for fingers after losing them to a lathe; and a lot of the more horrific traps in Saw, which are inspired by factory machinery. Industrial accidents are horrifying, and in this movie, the villain is an industrial accident incarnate. If that doesn't deserve a place on this list, I don't know what does.

7. In the Mouth of Madness
The Monster: Cursed Horror Novels
I'm not gonna lie, I love stories and books, so I was looking everywhere for some villainous books. I found some evil books, but none of them were actually villains - the Book of the Dead from Evil Dead; the pop-up children's book from The Babadook - these are all vectors for the actual monster to come out, they don't harm the reader directly. Then I found In the Mouth of Madness, a movie by John Carpenter about special investigator John Trent, who was hired by the publishers of world-infamous horror author Sutter Cane to retrieve his latest manuscript, the titular book "In the Mouth of Madness".
And the publishers are worried, because they know they have a frantic readership willing to pay a lot of money - there have been riots outside of bookstores that ran out of pre-orders for the book; people are fanatical about these books. The idea here is that Sutter Cane writes cosmic horror, about alien creatures and otherworldly gods older than the universe, and these books are driving people mad, because his horror series is basically the Lovecraftian Bible, and the fanaticism and belief of his fandom is making these monsters stronger.
If you're wondering whether Sutter Cane himself is the human mind behind this, fear not - he's as much a puppet of these beasts as the rest of us; they're not his fiction so much as his revelation. But, I won't go into too much detail here, because the movie is actually really good; it's John Carpenter, after all, and there's a cool metafictional aspect as well. But the idea of books that drive you mad is genuinely an inhuman monster, and I'm glad I found them to put on this list.

6. The Ruins
The Monster: Prehensile Parasitoidal Vines
There aren't many movies which have plants as a movie monster, and those that do often play it for laughs. Little Shop of Horrors is a comedy musical more than a sci-fi horror (and the original 1960 film was also a horror comedy). And whilst I do like The Day of the Triffids, they're more aliens than plants, and I was trying to avoid aliens for this list (although, admittedly, that might have made my job easier). Also, the triffids in the movie (and the original book they're based on) aren't really that scary. However, the monster in The Ruins is. A group of Americans are on vacation in Mexico, and another tourist offers to take them to some secret ruins that are "off the map". The group gets to an ancient Mayan temple covered in vines, and they're taking photos when they're surrounded by several locals who shoot anyone who dares to step off the ziggurat.
In time, they discover why. The vines covering the temple aren't ordinary plants, they're aggressive, carnivorous and parasitic. If you get close, they slowly wrap their vines around you, absorbing you and slowly feeding off your nutrients. If you touch one of their thorns, it seeds tendrils under your skin that grow, feeding off your blood as they spread throughout your body. They're even shown to have limited intelligence, allowing them to lure in prey to be ensnared by vines, or infested by thorns.
This was not a popular movie, because people found it poorly written and excessively gory, but I think the plot is pretty good (it's apparently based on a book) and I think this did a great job at making something as docile and beautiful as a plant actually kind of scary.

- - -

Alright, this is taking way too long, and way too much effort, so I'm going to have to split this one in two. Come back tomorrow to see the rest of the list.
Until then, why not leave a comment about some of your favourite inhuman monsters from films? I'd love to learn about more. Can you guess which movies will make the top five of the list, particularly number one? (I honestly couldn't have - until I was reminded of it in my research, I'd actually forgotten about that movie.) So, tell me about any of the inhuman monsters you know, especially the ones you've forgotten, and I'll see you in Part Two.

Tuesday, 24 October 2023

Turing, Tested

In pop culture, you may have heard of something called the "Turing Test", often referred to as something which can either prove that a computer is particularly powerful, or that it is artificially intelligent (depending on the media you're watching).
But, what on Earth is the Turing Test? Well, it was a test devised by a man called Alan Turing.

Turing was a brilliant man; his understanding of theoretical computing and cryptanalysis was important in World War 2 for creating the bombe, a decryption machine that could decipher the Enigma Machine codes used to encrypt the secret military commands of the Nazis. He also developed the mathematical model of the "Turing Machine", which proved the capabilities and limitations of even a simple computer in running complex programs.
There's a lot more to the man than that, from eccentricities like his marathon-running speed and stamina, to his propensity to ride a bike while wearing a gas mask, even to his unfortunate death by ingesting cyanide. His death came two years after he was convicted of homosexuality (to which he pleaded guilty) and given probation, which included undergoing a form of chemical castration to lessen his libido - and almost certainly this mistreatment was partly responsible for his depression, and led to him committing suicide.

But for today, we're looking at one of Turing's contributions to the field of artificial intelligence research, the Turing Test. In his paper "COMPUTING MACHINERY AND INTELLIGENCE", published in the psychology and philosophy journal Mind in October 1950, Turing discussed the physical, practical and philosophical possibility that a machine could think.
It's a fascinating paper, and in it he starts by saying that it is difficult to quantify what one means by "thought", since even the programmable machines of the time could be described as both "thinking" and "not thinking", depending on the particular definition. So, instead of wasting time trying to find the "best" meaning of the word, he devised a simple thought experiment.

He called this "the imitation game", and in the example given, he suggested having two subjects (A) a man and (B) a woman (hidden from view, and communicating with nothing but written responses, to keep the game fair); they are each interrogated by a third subject (c) the interrogator, who doesn't know which of the two subjects is which (they're labelled [X] & [Y] at random).
It is the Interrogator's goal to ask probing questions to determine which of [X] & [Y] is either (A) the man, or (B) the woman. However, this is complicated by the fact that (A) is trying to convince you that he is (B), whereas the actual (B) is trying her best to help the interrogator, and give honest answers.
This is a silly, little game... but Turing then reveals the true purpose of the imitation game...
"We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’" - A. M. Turing

I find it somewhat ironic that, since (A) is the man in this thought experiment, in the course of explaining how the imitation game works Turing is effectively asking "can a machine pretend to be a woman", and I wonder if that's in any way related to the fact that the majority of robots are designed and identified to be female-presenting. But that's wild speculation.
But, as I said in my earlier post, it's not mere speculation to say that this test is directly responsible for the prevalence of chatbots. Since the test focuses on a machine's ability to hold a conversation, it's easy to see how this influenced programmers to focus their attention on the skills that would help their machine conquer the "Imitation Game", or Turing Test.

But, the thing is, the suggestion of "the imitation game" was merely the opening of Turing's article; the rest of the paper was Turing delving into the issue further, discussing which "machines" would be best for this test, talking about how these machines fundamentally function, and providing potential critiques of both his test and its potential conclusions. Also, I have to include this...

"I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent, chance of making the right identification after five minutes of questioning." - A. M. Turing
Remember, this article was written in October 1950, so his fifty-year deadline has now passed by about 23 years. But if ChatGPT is as good as advertised - fooling the average interrogator at least 30 per cent of the time - then perhaps we have conquered this philosophical test.
But, Turing was also investigating whether this test he proposed was a valid one, by considering and responding to possible critiques of this new means of testing machine intelligence. And this is where I take some issue, because whilst it is a very interesting paper (and you can find it online - I highly suggest you read it), I think this is where Turing fails.
Turing considers nine possible objections to the Turing Test:

1. The Theological Objection
Basically "Robots don't have Souls". Turing like myself doesn't appear to believe in souls, but his response is rather charitable, which is to say if the soul exists and they're required for thought, he's not proposing that humans who make A.I. would not be usurping god and create souls, but rather "providing mansions for the souls that He creates".

2. The 'Heads in the Sand' Objection
Basically "the whole idea of thinking machines is terrifying, so let's ignore it and hope the problem goes away", which Turing dismisses out of hand, but mentions it specifically to point out that it is the basis for some critiques, such as the previous objection.

3. The Mathematical Objection
Basically, "digital machines have basic, inherent mathematical and logical limitations (as proven by some mathematical theorems), so they must be inherently limited". Turing's response delves into some of these in detail, but ultimately his response is that human brains are also flawed and limited, so whilst a valid point of discussion, it doesn't adequately refute the Turing Test.

4. The Argument from Consciousness
Basically "computers cannot think, because thought requires conscious awareness, and we cannot prove that machines have conscious awareness". Here, Turing rightly points out that we also cannot prove that other humans are aware - hence the philosophy of solipsism - and he presumes that unless you are a solipsist, believing yourself to be the only conscious agent, then it is hypocritical to doubt the consciousness of a non-human.

5. Arguments from Various Disabilities
Basically "a computer may perform some impressive feats, but it cannot...(do this human ability)" and Turing provides a selection of proposed human abilities that computers lack, including: be kind, have friends, make jokes, feel  love, make mistakes, have emotion, enjoy life... and many more.
Turing speaks about some of these in further detail, but ultimately he believes this as a failure of scientific induction, as proponents only have empirical evidence that machines "have not" performed these abilities not that they "will not".

6. Lady Lovelace's Objection
Basically "an analytical engine cannot create, or originate, anything new, as it can only perform as humans are able to program it to perform". Lovelace was referring to her own computer, but the precedence that computers only do as they're told, basically, is a valid one. Turing's response is that nothing is truly original, everything comes from some inspiration or prior knowledge. And, if the real problem is that computers do as they're told, so their actions are never unexpected or surprising, Turing dismisses this by explaining that machines surprise him all the time.

7. Argument from Continuity in the Nervous System
Basically, computers are "discrete-state" machine, meaning it clicks from one quantifiable arrangement to another, wheras the human brain is a continuous, flowing system and cannot be quantified into discrete "states". Turing points out that although this makes a discrete-state machine function differently, that doesn't that - given the same question - they cannot arrive at the same answer.

8. The Argument from Informality of Behaviour
Basically, "a robot can never think like a human because robots are coded with 'rules' (such as if-then type statements), but human behaviour is often informal, meaning humans break rules all the time". Turing (as I see it) refutes this as an appeal to ambiguity, confusing "rules of conduct" for "rules of behaviour" - a human may defy societal expectation or rules, but they will always abide by their own set of proper behaviours, even if these are harder for us to quantify.

9. The Argument from Extra-Sensory Perception
Basically, Turing found arguments for psychic phenomena, such as telepathy, rather convincing (although he wished for them to be debunked), and he argued that since computers have no psychic ability, they would fail any Turing Test that includes it. Ironically, whilst he considers this a strong argument, and suggests it can only be resolved by disallowing telepathy (or using, and I quote, a "telepathy-proof room" to stop psychics from cheating), I do not find it at all convincing, so I will ignore it.
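
As promised under objection 7, here's a toy example - a sketch of my own, in Python, not anything from Turing's paper - of the simplest kind of discrete-state machine, a coin-operated turnstile whose entire condition is always exactly one member of a finite set:

    # A toy discrete-state machine: a coin-operated turnstile.
    # Its entire condition, at any instant, is exactly one member of a
    # finite set of states - this is what "discrete-state" means.
    TRANSITIONS = {
        ('locked', 'coin'): 'unlocked',
        ('locked', 'push'): 'locked',
        ('unlocked', 'push'): 'locked',
        ('unlocked', 'coin'): 'unlocked',
    }

    def step(state, event):
        # The machine "clicks" from one quantifiable arrangement to another.
        return TRANSITIONS[(state, event)]

    state = 'locked'
    for event in ['push', 'coin', 'push']:
        state = step(state, event)
        print(event, '->', state)  # push -> locked, coin -> unlocked, push -> locked

Turing's point is that a brain never "clicks" between states like this, yet that difference alone doesn't stop both systems from giving the same answer to the same question.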

Finally, after dismissing the contrary views of the opposition, Turing puts forward his own affirmative views.

One of my biggest issues with the criticisms is that he doesn't properly explore Lady Lovelace's Objection, ignoring the claim that robots "do as they are told to do", and focussing instead on two related claims: first, "robots can't do anything original", and second, "robots can't do anything unexpected".
I was prepared to write out an explanation as to the error here, but the paper beat me to the punch.
In his paragraph on "Learning Machines", he first explains that the position is certainly that robots are reactive, not active. They may respond, but afterwards will fall quiet - like a piano, when you press a key - they do nothing unless we do something to them first. But, he also suggests that it may be like an atomic reaction, reacting when you introduce a stray neutron, then fizzling out. However, if there is enough reactive mass in the initial pile of material, then a neutron can reach critical mass, and Turing asks the question: is there a some allegorical "mass" by which a reacting machine can be made to go critical?
It's a slightly clunky metaphor, but what he's supposing is that a powerful enough computer could have the storage and capacity to react "to itself", and think without a need for external direction.

It's a curious idea, not the most convincing, but Turing doesn't propose it to be "convincing", so much as "recitations tending to produce belief".
He also then goes on to explain the "digital" capacity of the human brain, how one might simulate a child's mind and the methods to educate it to think in a human-like manner; and the importance of randomization.
He finally concludes the paper by saying that, whilst he is unsure of the exact means by which we could achieve a machine that can think, there's a lot more work that needs to be done.

- - -

This is a fantastic paper, well worth reading and it serves its purpose well. However, there is one major flaw, which Turing addresses, but does not in fact dispute... The Argument from Consciousness.
Yes, it's true that we can't necessarily prove that a human has consciousness without observing its "awareness" directly; but, I think it's dismissive to say that this makes all attempts at discerning consciousness impossible.
After all, whilst it's philosophically difficult to prove that a human's actions are a product of consciousness, it's comparatively easy to show that a computer's are not. We don't know how human consciousness works, because we're still not entirely sure how the brain works, but we can (and do) know exactly how a computer's "brain" works. We can pull it apart at the seams, we can investigate it thoroughly, we can identify what code does what.

And before any of you come in and bring up the "black box problem" with neural networks - that we don't know how some A.I. programs think, because their training reshapes their internal workings in unusual ways - well, no. It's true that it's incredibly difficult to figure out why a pre-trained A.I. program does what it does. If you wanted an explanation of exactly how a program came up with every single answer it ever provides, that's impractical, since every word produced by these programs goes through several thousand iterations of the program - it would take an absurd amount of time to answer that question. However, impractical is not impossible. If you so desired, you could go through the system, step by step, and see how each aspect interacts with the others - it is explicable, it's just messy.
And, we also know the basic "input-to-output" means by which this was achieved. Turing may argue that this is analogous to the human mind, but I submit that we've known exactly how computers work for a long time now, and yet we still don't know how consciousness works - so it's clear that our discoveries in computer science aren't related to our discoveries in neurology.

I firmly believe that John Searle's Chinese Room, which is an accurate model of the way that our computer programs currently function, debunks the validity of Alan Turing's Imitation Game for proving thought. So long as computers operate this way, even a computer that can pass the Turing test is merely behaving like something that thinks, not in fact thinking.
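
For anyone unfamiliar with Searle's thought experiment, here's a crude Python caricature of its core move - my own illustration, and deliberately simplistic: a responder that produces sensible-looking replies purely by matching symbols against a rulebook, with no understanding anywhere in the loop.

    # A crude caricature of the Chinese Room: replies are produced by
    # blindly matching symbols against a rulebook. Nothing in this loop
    # "understands" the conversation; it only manipulates symbols.
    RULEBOOK = {
        "how are you?": "I'm well, thank you. And you?",
        "do you think?": "Of course I think. Don't you?",
    }

    def room(message):
        # The operator in the room just looks the symbols up;
        # no comprehension is required to follow the rules.
        return RULEBOOK.get(message.lower(), "Could you rephrase that?")

    print(room("Do you think?"))  # -> "Of course I think. Don't you?"

Scale the rulebook up far enough and you might survive five minutes of questioning - behaving like something that thinks, without thinking.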

However, I'm not sure if that's a problem for Turing. I don't believe in the soul, I don't think there's some ghost that runs your body. I also don't believe in magic, I think that human experience is ultimately material, that the human mind is a system that arises due to the brain. I do believe in consciousness, and will and subjectivity. But, based on some of Turing's arguments, I wonder if he does.
It seems as though, to Turing, computer programs and consciousness are effectively interchangeable. Yes, the human brain is a continuous machine, not a discrete-state machine, but who is to say it's not a machine? There is a realm of philosophy called Mechanism, which states that the universe is reducible to completely mechanical principles - matter and motion, with physics being the interaction of the two. It's a form of Materialism, and if it is accurate, then the human mind is also nothing but matter and motion.
I can't say that Turing was a Mechanist (especially since the meaning of the word seems to have changed in philosophy, a few times), but it does seem that he feels that human consciousness is not in any way inexplicable - give a machine a bit more storage and make it run faster, and it can do what brains do. It's kind of a radical view.

However, and I'll leave you with this, we don't know how consciousness works, we haven't identified the consciousness "spark", the mechanism of awareness, the principle of thought... and until we do, there's no reason to believe that Turing is wrong. And, if he is in fact right, it entirely breaks down the subject/object dichotomy. We already know that the definition of "life" gets fuzzy when you go back far enough. Thought appears to be a function of life, but who is to say that the definition of "thought" won't become a lot fuzzier once we make machines that can think? Honestly, I don't know. I'm going to have to think about it.