Saturday 28 October 2023

A Literary Analysis of ChatGPT3.5

I've said plenty of times throughout this countdown that robots cannot be creative - that they cannot replace the human mind when it comes to creative tasks such as drawing, acting and writing.
"But wait," I hear you say, "A.I. can write. Heck, I can ask ChatGPT to write me a poem about 'a monster that's lonely because it killed everyone', and it will do it - look!"
In shadows deep, where moonlight fades,
A lonely monster, in solitude wades.
It's heart heavy with a dreadful cost,
For in its wake, all lives were lost.

A creature born of darkness and night,
It brought an end to the world's light.
But in its solitude, it weeps and moans,
A lonely monster with empty thrones.

No solace found in its reign of fear,
For loneliness is the cost it holds dear.
In the silence of its desolate domain,
The lonely monster is left with pain.
- ChatGPT3.5, prompted by Matthew A.J. Anderson
But my response is twofold, dear reader. Firstly, that wasn't the A.I. being creative, that was ME being creative. I gave it the prompt, based on an idea that I thought was cool. I was the creative one, not the A.I.
Secondly, I don't think that's very good... I can understand why someone would be impressed by it, especially if you're not a writer or a poet yourself, but that's not a good poem. I'm not saying that ChatGPT3.5 can't ever be used for writing - some editing could probably improve a short poem like this: fixing the meter so that it doesn't skip jankily through iambic tetrameter, and making the word choices stronger, with a bolder finish. But, as it is, this is not a good poem.
In fact, I'm willing to put my money where my mouth is. I've actually done some preliminary research to prove that ChatGPT3.5 is a bad writer.

I'm a published author, and in my time I've also beta-read, critiqued and edited dozens of stories. So, I decided that I would get ChatGPT3.5 to write me fiction, and I would critique and analyze it. I wanted a large sample size, but one small enough that one person could critique it, so I asked ChatGPT3.5 for drabbles - stories of exactly 100 words. I had initially planned on asking for 100 stories, but the website started to slow down a little, and I figured that 25 was still enough to get some fun percentages out of my data.

So, my method was simple: I asked ChatGPT3.5, "do you know what a drabble is?" When it responded saying that it did, I prompted it with: "Write me one."
And it did. I then said "write me another", and repeated that same prompt another 23 times. I didn't want to give ChatGPT3.5 any further input from me, because that would influence the output; I just wanted it to write me a story, based on its own programming/machine-learning of how best to do that. Also, ChatGPT3.5 didn't give these stories titles, so I will refer to them by number (in the order ChatGPT3.5 generated them for me).

Then, I decided to analyze these stories, but I wasn't sure how best to do that, so I asked ChatGPT3.5 for a rubric based on a high-school teacher's creative writing assignment. I felt this was the fairest way to find a rubric, since ChatGPT3.5 had provided its own standards for judgement.
ChatGPT3.5's rubric said that papers ought to be graded on at least four criteria (paraphrased):
"context/creativity" - did the student come up with their own story idea, and write it in a way that brought that idea across clearly?
"sequence/structure" - was the story written with a beginning, middle and end, and did the structure support the story being told?
"poetry/proficiency" - did the student display an astute use of vocabulary and poetic devices to express their story effectively?
I decided to evenly weight these, with a potential of 0-3 points, based on how well it met that criterion:
(0) Did not meet standard.
(1) Met Standard, technically.
(2) Met Standard, skillfully.
(3) Met Standard, excellently.

Now, yes, that's only three criteria. The fourth criterion was "spelling/grammar", but I didn't think that was a relevant measure, since ChatGPT3.5 is a program trained to produce correct spelling and grammar - and besides, I want to know if ChatGPT3.5 can write a good story, not whether it can write a good sentence. So, I replaced that criterion with one of my own: "Did I like it?"
This is highly subjective, so I weighted it at only 1 point:
(0) I did not enjoy the story.
(1) I did enjoy the story.

This meant that each story could be graded on a score between 0 and 10. I also analyzed each of the stories for their theme and provided notes based on my analysis, but we'll get to that after the data. So, let's start with the numbers.

Here's my data, and I'll discuss it in detail in just a moment:
 #  Themes/Morals                     Like?  C/C  S/S  P/P  Ttl  Notes
01  Love, eternal                       N     1    3    2    6   fine structure, kinda dull
02  Destiny/Fantasy, eternal            N     2    1    1    4   rushed
03  Beauty in Chaos/Power of Art        N     1    1    2    4   resolution out of nowhere
04  True Stories > Fiction              Y     3    3    2    9   cool.
05  The Cosmic Frontier                 N     2    1    1    4   our first man, nameless
06  Beauty in Chaos                     N     1    1    1    3   first repeat
07  ???... "Kindness of Strangers"?     N     2    2    1    5   good conflict, no theme
08  Beauty in Nature (I think)          N     1    1    1    3   kinda meaningless
09  Beauty in Nature                    N     2    1    1    3   repeat, again.
10  Beauty in Chaos                     N     1    1    1    4   10 copied 3's homework
11  Power of Art                        N     1    1    1    3   all tell, little show
12  Love, Boundless                     N     2    1    2    5   good idea, bad ending
13  Great Work reaps Great Reward       N     1    1    1    3   nameless dude 2
14  Beauty in Chaos                     N     0    1    1    2   fucked the moral up
15  Beauty in Chaos                     N     2    1    3    6   cute, but dull
16  ???... "Books are Cool" I think?    N     0    1    1    2   totally meaningless
17  Let Go of Desire                    N     3    1    2    6   thanks, I hate it
18  Beauty in Chaos                     N     1    1    1    3   computers love nature, I guess
19  (spiritual) Beauty in Chaos         N     1    1    1    3   a machine's view of spirituality
20  ???... "Free your Dreams"?          N     0    1    0    1   ugh, you fail
21  Beauty in Chaos                     N     2    1    1    4   PLAGIARISM! - instant fail.
22  Beauty in Chaos                     N     1    1    1    3   bored of these...
23  Love, boundless                     N     1    1    2    5   "nature's wedding"? cute
24  Some Treasure should be Secret      N     2    1    2    5   Take the Gem!
25  Embrace Change                      N     2    1    2    4   bleurgh...
    AVERAGE                            4% Y  1.4  1.2  1.36   4
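To make the marking transparent, here's a minimal sketch (in Python; the function names are mine, purely illustrative) of how each total out of 10 - and each "fun percentage" - falls out of the numbers:

```python
def story_total(liked, creativity, structure, poetry):
    """Each story scores 0-3 on the three rubric criteria,
    plus 1 bonus point if I liked it, for a total out of 10."""
    return int(liked) + creativity + structure + poetry

# Drabble 04 ("True Stories > Fiction"): liked, 3 + 3 + 2
print(story_total(True, 3, 3, 2))   # 9

# Drabble 20 ("Free your Dreams"?): not liked, 0 + 1 + 0
print(story_total(False, 0, 1, 0))  # 1

# With a sample of exactly 25 stories, every percentage is a count times 4:
def percentage(count, sample_size=25):
    return 100 * count / sample_size

print(percentage(8))   # 32.0 -- stories with a "Beauty in Chaos" moral
print(percentage(9))   # 36.0 -- main characters named "Sarah"
```

That's also why all the percentages in this post land on multiples of 4.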

Across the board, ChatGPT3.5 was technically proficient, but nothing was truly impressive. Based on my analysis, ChatGPT3.5 scored an average of 4/10.

For Content & Creativity, it scored 1.4 - below average. This was lowered mostly because the stories tended to be very basic, using very broad themes. The most common theme was "Enjoying Life's Beauty", featured in 48% of stories; the most common subcategory was "Finding Beauty in Chaos", at 32% of all stories, with another 12% carrying the basic "Enjoy Nature's Beauty" message.
The second most-common theme was "Love, Eternal", with 12% of all stories being love stories with the moral that "love will last forever".
There were also quite a few stories where the moral was about the power of art. I don't lump them together, since they were varied enough to be distinct: there were stories about "the power of stories", "community arising from art", even one I didn't understand whose meaning appeared to be "huh, ain't books neat?"
Actually, if I compiled all the stories whose meaning was hard to grasp, or asinine, that would be the second-most common theme, with 16% of stories having no real purpose, as far as I could tell.

For Sequence & Structure, it averaged out at 1.2, the lowest score overall, mostly because although technically proficient, it used the EXACT SAME structure every time:
Introduce character in the middle of a scene. One thing happens. Character has an epiphany. Conclude with character's epiphany.
Yes, that technically fulfils the brief, but there wasn't a single variation. No in medias res; no action; no drama; all Tell, no Show; no dialogue - okay, that's a lie: there was a single line of dialogue in Drabble 20, but that was also my least-favourite story, scoring the lowest at just 1/10, and the only other dialogue was in Drabble 02, which scored 4/10, so maybe it's better to avoid dialogue... But this was a drag to read. I was genuinely surprised. I knew that I might get some repeats, and I thought I'd find the strings pulling the puppet - that I'd be able to identify the basic templates of the stories. I didn't realize it would only have one template.
In fact, repetition is the name of the game.
If you like female representation, you might appreciate that 76% of the "main characters" in these stories were female (or, at least, identified with feminine pronouns). However, what's less impressive is that 36% of the main characters were named "Sarah". And, when it comes to repetition, well, we'll discuss that when we get to the Case Study.

For Poetry & Proficiency, it averaged out at 1.36, with some evocative imagery and good use of vocabulary... sometimes. Most of the time, it was incredibly repetitious. Drabbles 06, 09, 10, 13, 16, 18, 20 & 21 all began with the line "In the heart of...", which became annoying after the third time. 6 of the stories took place in a city, 5 of them were described as "bustling". 5 of the stories took place on the edge of the ocean, 5 took place in a forest, 3 in a library. It was all so samey.
I will say though, I was impressed with some of the more unique locations: one took place in outer space, another on a ship in a storm, one in a desert, one in a mountaintop monastery and one on a remote island. However, despite their unique locales, none of these were as interesting. I don't know how ChatGPT3.5 writes stories (and neither does it, but you'd know that if you've read my previous posts). But I wonder if it has more tenuous connections in its neural network for these more esoteric locations - perhaps it has fewer of them in its dataset, so it has to link them to a more generic theme or moral, like "finding beauty in chaos" for the tenth time.

But okay, let's look at the Case studies. I'm only looking at a few of these, in particular the ones with very high or very low scores.

Let's start with my favourite story:
Drabble 04
The old bookstore on the corner held secrets within its dusty shelves. Clara, an avid reader, ventured inside on a rainy afternoon. She pulled a weathered book from the shelf and blew off the dust.
As she opened it, a tiny slip of paper fell out. It was a handwritten note, a love letter from a bygone era. Clara's heart ached reading the tender words of affection and longing. Who were these people? What stories lay hidden in the pages of their lives?
In that moment, Clara realized that every book held not only its own story but a world of untold tales waiting to be discovered.
That's the one and only story that received a 9/10. I marked down the poetry, because the vocabulary felt a bit underwhelming: "avid reader", "bygone era", "dusty shelves" of books - it was all a bit too cliché for my liking. But I think this is a genuinely great story, and I'd be interested to know what training data ChatGPT3.5 drew on to write it.

The only part about this story that I really don't like is that it was number four - I liked the fourth story, so I had high hopes that I might actually like several more of these stories. I did not.

Now, let's look at some of the "unsure meanings", starting with:
Drabble 07
The old lighthouse stood tall on the rugged coast, its beacon cutting through the stormy night. Captain James had sailed these treacherous waters for years, guided by that familiar light.
Tonight, however, the lighthouse had dimmed, a signal of danger. Waves crashed against the ship, and the crew fought to keep it steady. With every gust of wind, James scanned the horizon, searching for a way through the tempest.
Then, a flicker of light. The lighthouse keeper, though frail and weary, had rekindled the flame. With a newfound hope, James steered his ship to safety, grateful for the unwavering dedication of a stranger in the night.
This is alright, I gave it 5/10, but I didn't like it because I didn't really see the point. Like, yes, that's what lighthouses were for. But, do you have anything to say about that? Perhaps I'm more critical than most, but I think this is an example of a "well, gee, ain't that something?" type of moral. Yes, life would suck if people didn't do their job... "we live in a society" and all that. But, this story doesn't seem to mean anything. We don't know what James was doing, who the lighthouse keeper was, or why it went out in the first place, so... I'm just left stumped. I guess it's saying that it's good that lighthouses used to exist...? Well, gee, ain't that something...
Let's look at some more incomprehensible morals:
Drabble 14
In a forgotten corner of the garden, a single rosebush stood, its blossoms vibrant and untouched. Maria, an avid gardener, cherished this hidden gem.
As she tended the rosebush, she marveled at its resilience. it had weathered storms and neglect, yet it continued to thrive, its petals unfurling in defiance of adversity.
The rosebush became a symbol of hope for Maria, a reminder that beauty could emerge from even the most challenging circumstances. In its delicate blooms, she found strength, and in nurturing it, she discovered the power of patience and unwavering care.
2/10. Why? Well, read the story again and you'll see that it makes no fucking sense. This is Maria's garden, and despite being an "avid gardener" she had one part of it that she forgot about? Okay, that's a thing. Then, because a pretty rosebush survived her neglect, that brings her hope. But then the moral of the story is that she found strength in her patience and unwavering care. Firstly, unwavering? She forgot the fucken thing! Second, I thought this gave her hope in its strength and resilience, so how can you support that message if she then needed to nurture it to keep it alive? Your moral is backwards, and makes no sense.
But here, let's look at my least favourite story of all, whose moral was just confusing...
Drabble 20
In the heart of a bustling market, an elderly street vendor sold colorful kites. Timmy, a young boy, watched in awe as one soared in the sky, tugged by the wind.
His eyes met the vendor's, and the old man offered him a kite: "Here, my boy, let your dreams take flight."
With the kite in hand, Tommy raced in the open field. He released it into the breeze, and for a moment, it hung in the air, defying gravity. Timmy felt the exhilaration of freedom and possibility.
In that simple act, he learned that dreams were like kites-sometimes, all they needed was a little push to soar.
This story is entirely ridiculous. For one, this is a street vendor in a marketplace, why is he giving kites away for free? And this bullshit about dreams comes out of nowhere. And, like Maria before, this makes no sense - I thought we were in a bustling market, where the fuck did an open field come from? And, although the story never says his age, Timmy is a young boy, I'm pretty sure kids fly kites because "it's fun", yet Timmy is having an epiphany that you shouldn't let your dreams just be dreams... I hate this. There was no story, the vocabulary was bland, the poetry didn't help the story. If it wasn't for the fact that this did have a coherent beginning, middle and end, this would have gotten a zero. These aren't people, they're not even characters, they're puppets that perform actions. But, why? I can't see meaning here, only action. Just because you always Tell (and don't Show) your moral as your conclusion at the end, that doesn't mean you have a coherent moral. This story certainly doesn't. I knew this would be bad, but I didn't know it would be this bad...
Oh, and I have to include this one, just because of all the stories, this was the most frustrating - I thought I was going to like it.
Drabble 24
On a remote island, Emma discovered an ancient temple hidden in the jungle. Intricate carvings adorned its weathered walls, telling stories of forgotten civilizations.
As she explored, she uncovered a hidden chamber. Inside, a single, glowing gem lay on an ornate pedestal. It pulsed with an otherworldly light, filling the room with an ethereal glow. In that moment, Emma felt a connection to the mysteries of the past and the magic of the present. She left the gem undisturbed, a reminder that some secrets were meant to be preserved, and some treasures were best left undiscovered.
WHAT?! Take the fucking gem! Oh my god, you fucking plotblocked me, you goddamned tease. I was genuinely invested in this story - so few of these stories explored fantasy or science-fiction, so introducing a glowing gem actually caught my attention. But then she LEAVES IT UNDISTURBED?! Fuck you, story. And it's not just that I disagree with the moral - I find a lot of the "love is forever" morals disgusting, because they are unrealistic and present an unattainable goal (can't love forever if you don't live forever), but at least it was coherent.
But I hated this because whilst I disagree with this moral, as presented, it could have been awesome! Why was she even exploring a lost temple in the first place, if she believes that you should leave secrets preserved? If she stumbled upon it, that might make more sense, but she was exploring! It's not even like there was some hint that this was intruding on native land, since these civilizations died off in time immemorial, and it's not like the power of this gem was dangerous or evil. So, why did she leave it?!
I would have forgiven this if she left it because, despite its power, she was an intruder and unwelcome in this place. But, she didn't, it didn't, and so I didn't - this story is bad, and ChatGPT3.5 should feel bad for writing it.

But okay, here's where things get really obvious. I'm going to show you three stories, back to back, so you can see this for yourself, because when I read this I was genuinely shocked. Here's Drabbles 03, 10 & 21...
Drabble 03
Amidst the bustling city, Lily found solace in a tiny park. On a worn bench, she watched the world rush by. Her sketchbook rested on her lap, and charcoal in hand, she began to draw.
With each stroke, the chaos around her faded, replaced by the tranquility of her creation. Her lines captured the essence of the city, its energy and contradictions. People stopped to admire her work, and for a moment, they shared a connection through art.
In that small park, Lily discovered that even in the heart of urban chaos, beauty could be found and shared, one sketch at a time.

Drabble 10
In the heart of the bustling city, a street artist painted a masterpiece. Passersby paused to admire the vibrant colors and intricate details taking shape before their eyes.
Among the crowd, a young girl watched with wonder. The artist noticed her fascination and handed her a brush. With a shy smile, the girl joined in, adding her own strokes to the canvas.
As the painting neared completion, it became a symbol of unity and creativity, transcending language and background. In that moment, strangers became collaborators, and the city streets transformed into an open gallery of shared artistry.

Drabble 21
In the heart of a bustling city, a street artist painted a mural on a blank wall. Passersby paused to admire the vibrant colors and intricate details taking shape before their eyes.
Among the crowd, a young girl watched with wonder. The artist noticed her fascination and handed her a brush. With a shy smile, the girl joined in, adding her own strokes to the mural.
As the painting neared completion, it became a symbol of unity and creativity, transcending language and background. In that moment, strangers became collaborators, and the city streets transformed into an open gallery of shared artistry.
That's clearly the same damn story, three times. Hell, the last two were even written identically, bar one change. I thought I'd get repetition, but I thought that the random number generator was better than this. Three stories, with the moral of "community, through art". Also, this was the first time that I realized that none of these stories seem to be drabbles...
See, a drabble is a story that's exactly 100 words - that's what makes it a drabble - but when I realized that these stories were identical except for replacing "masterpiece" with "a mural on a blank wall", I realized that they'd have to have disparate wordcounts.
I didn't count them at first because I had assumed that ChatGPT3.5 would be able to stick to a wordcount - it's a computer (a calculator), it knows how to count. But that's the thing... ChatGPT3.5 doesn't know how to count, because it wasn't designed to count; it was designed to write grammatically correct sentences and paragraphs, based on a prompt. This is also the reason it fucked up the meter on that poem earlier: it can't count the meter, because it's not "writing a poem", it's just putting one word after another in a way that its neural network defined as "fitting the prompt of 'poem'". So it doesn't realize it screwed up the poem's meter, just as it never realized that almost none of these drabbles were actually 100 words. They were close - probably because it does have a fair sample of drabbles in its training data - but if you tried to submit any of these to a drabble contest, you would fail. Even my favourite, Drabble 04, was actually 107 words.
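You can verify that yourself with a one-line whitespace split - here's Drabble 04, exactly as quoted above:

```python
# Drabble 04, quoted in full earlier in this post.
drabble_04 = (
    "The old bookstore on the corner held secrets within its dusty shelves. "
    "Clara, an avid reader, ventured inside on a rainy afternoon. "
    "She pulled a weathered book from the shelf and blew off the dust. "
    "As she opened it, a tiny slip of paper fell out. "
    "It was a handwritten note, a love letter from a bygone era. "
    "Clara's heart ached reading the tender words of affection and longing. "
    "Who were these people? "
    "What stories lay hidden in the pages of their lives? "
    "In that moment, Clara realized that every book held not only its own "
    "story but a world of untold tales waiting to be discovered."
)

# A drabble must be exactly 100 words; this one isn't.
print(len(drabble_04.split()))  # 107
```

(`str.split()` with no arguments splits on any run of whitespace, so punctuation stays attached to its word - the same way a human would count a drabble.)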

Thankfully, I'm not judging this off wordcount, I'm judging it off literary merit. But, I'm afraid that it fails there too.
These stories were "interesting", mostly as an exercise in analyzing how a computer puts a story together, and genuinely I think you could use these as prompts for your own story. Write a short story about a man in space. Write a drabble about an artist in the city. Write me a tale set in a neglected garden. Write a story about twelve women called Sarah who discover each other, and realize they've all been cloned by a machine.
But, whatever you do, don't use artificial intelligence to write your stories for you. Unless you're fine writing a mediocre-to-bad story, 80% of the time. Because, as I said, you need creativity to be a writer. Computers aren't creative, they just do as they're told, and if you tell them to be creative, well, all you get is this.

I'm the Absurd Word Nerd, and Until Next Time, why don't you challenge yourself to write a drabble? Maybe one about a woman discovering an ancient temple, and when she discovers a glowing gem, she ACTUALLY TAKES THE DAMN THING! Yeah, I'm still not over it...

Friday 27 October 2023

Automated Incompetence

One of the threats that many people fear from AI and robots is a threat to the workforce: AI and automation taking everyone's jobs. This is known as "technological unemployment" - when someone loses their livelihood and income due to being replaced by a machine that can do their job quicker, better or cheaper.

I think I've made it clear in my previous posts that I don't think it's possible for a robot to be superior to a human being. Humans can outperform robots most of the time, although tasks which require little effort from a human can easily be replaced or reduced by productivity-improving technologies.
I'm not saying that this means those jobs aren't significant, or that workers who lose their job to a robot don't matter; I'm simply saying that the abilities of our machines currently coincide with the demands of jobs requiring mostly physical labour.
In my opinion, it's better to risk a machine on dangerous or repetitive labour than a human; however, I also accept that these decisions are driven by an employer's capitalism, not their compassion.

And robots can only do certain jobs well, meaning we have nothing to fear from robots when it comes to performing mental labour. Even our most advanced artificial intelligence (at time of writing) can't reliably perform these tasks anywhere near as well as a human.
So, that's it right? Robots can't take our jobs.

Unfortunately, no... Just because a robot can't do your job doesn't mean it won't.

Robots, for example, tend to write in stilted ways that people don't find convincing, yet they're already taking (or threatening) writers' jobs, being used for copywriting, advertising and content generation. Robots can't empathize - can't even feel emotion - but there have been robots made to assist nurses and caregivers, and chatbots designed to use talk therapy to help those with mental illness. Robots have no opinions, no emotions to express and no artistic style, yet they're being used to create digital art.

Yes, a robot can talk, listen or paint a picture, but that doesn't mean it does these things well - just because something has feathers doesn't mean it can fly. But when I said in the post titled Robots are Not your Enemy that robots aren't the bad guys, I meant it. Robots aren't the ones who want to take your jobs away from you - it's always an employer that wants to do that. And not all employers realize how flawed robots are...

Actually, as much as I want to believe in our collective humanity, and say that employers are wide-eyed idealists that don't realize the deficits of artificial intelligence... I don't really believe that for a second.
I think employers know that robots aren't great at these jobs, but they don't care. Employers who even consider the use of robots clearly don't value their employees' work - if they did, they wouldn't see them as so easily interchangeable - so, why would they care that a robot does a poor job?
I believe that there are a lot of employers - hell, a lot of entire industries - that will gladly fire some or all of their workforce if it means they can save pennies on the dollar.

This is the future we're looking at, in the coming years - and it's not only going to happen, it is happening.
I'm a writer, so I care about writing and the future of literature and art. Even though I had no love for BuzzFeed, I was horrified to learn that they fired several writing staff to replace them with AI. And I didn't want to believe that a writer would use AI to write their stories, but I was wrong. Clarkesworld, a venerable science-fiction magazine that allows writers to submit stories for publication, had to close their story submissions in February this year, after their inbox was flooded with AI-generated stories. According to the editor-in-chief, Neil Clarke, over 1,200 stories were submitted to them at once, and 500 were clearly written by AI. Their submissions have since reopened, but they've changed their policies: they won't accept AI-generated stories, and anyone even suspected of trying to submit a story written with A.I. will be banned and blacklisted.

I didn't think writers would stoop so low, but it may not have even been writers. After all, Clarkesworld is a pro-rate magazine: they offer 12 cents per word, for wordcounts of 1,000-22,000 words.
That's a potential profit of $120-$2,640. I can see why an immoral opportunist might try to take advantage of that, if they could - and AI provided an opportunity for these grifters to try.
Although again, maybe I am letting my bias persuade me... maybe there are writers who have no love for the craft.
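The maths there is just Clarkesworld's stated rate multiplied by their wordcount limits - kept in whole cents here to avoid floating-point fuzz:

```python
RATE_CENTS = 12  # Clarkesworld's pro rate: 12 cents per word

# Their accepted wordcounts run from 1,000 to 22,000 words:
print(RATE_CENTS * 1_000 / 100)   # 120.0  -> $120 minimum
print(RATE_CENTS * 22_000 / 100)  # 2640.0 -> $2,640 maximum
```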

So, is this the true A.I. Apocalypse? Rather than networks setting off nukes, kill-bots gunning down civilians and assassin drones hunting down resistance, society will instead slowly crumble as we replace competent workers with stupid-systems and idiot-bots, turning our manufacturing, media, medicine and military into a collection of idiots who occasionally confuse a pedestrian for an "unknown object", identify a ruler as a tumour, or even mistake clouds for ballistic weapons.

Admittedly, that's an extreme example scenario, and I don't think the world will be destroyed by moron machines. However, I do think this is our near future, one that will get worse as time goes on if we're not careful. Already we're seeing the growth of "A.I. Accidents", with a growing database of incidents. Whilst I don't think this is the end of the world, if we keep giving A.I. jobs that it cannot competently complete, it is going to hurt us in one way or another.

In conclusion, I'm the Absurd Word Nerd, and I hope that these are merely teething troubles that we can resolve before letting A.I. take over too much of our lives. But, if we can't iron out these kinks, then I think it may be a sign that this A.I. revolution isn't truly a world-changing development. Rather, it may just be yet another technological bubble - a load of hype that will eventually burst in our faces.

Thursday 26 October 2023

Inhuman Monsters - Part Two


This is Part 2 of a listicle exploring the variety of inhuman movie monsters.
As a quick reminder, there were three rules, and two guidelines:
1. No Ghosts
2. No Animals
3. No Talking

a. Avoid Parody
b. Avoid Repeats

For my reasoning as to why, check out yesterday's post. But, for now, no more dilly-dallying, let's continue the list! This is...

The A.W.N.'s TOP 10 HORROR MOVIES ABOUT INHUMAN MONSTERS (5-1)

5. The Mist

The Monster: Volatile Alien Atmosphere
If you want to talk about inhuman, well, this is it. I liked the idea of exploring alien monsters, but I wanted something that hinted at the truly unnatural, and I couldn't go past this movie about a weird mist. After a thunderstorm, several of the locals from Bridgton, Maine (do I even need to say this is a Stephen King adaptation?) meet at the local supermarket to pick up supplies, but whilst there a strange mist descends on the town. There's a warning siren, and most of the people decide to stay inside the store and wait, since the mist is too thick to navigate. However, when anyone goes outside, they're attacked by carnivorous tentacles that drag them into the mist, or killed by giant insects or monsters.
See, this isn't just any miasma, the mist is in fact an alien atmosphere from an alternate dimension populated by otherworldly wild animals that seeped into our reality. The aliens aren't evil, they're just hungry animals seeking out a new (and abundant) prey that can't defend itself, so it's not merely these weird creatures that are the monster. Rather, it's this invasive atmosphere which the creatures tend to remain within, likely because they can't breathe our atmosphere. The movie heavily implies that this mist appeared because of a military experiment (called the Arrowhead Project) that opened the portal to this dimension in the first place, but either way, the idea of being overrun by an alien dimension is just incredible. Instead of a military invasion from an advanced alien species that wants to take over our planet and so flew here from outer space, this is an accidental invasion from several primitive alien species that are just hungry, and they're only here because some idiot left the door open for them. And, you have to admit, there's something fundamentally horrifying about being surrounded by monsters you can't see until they're jumping out at you.

4. The Stuff

The Monster: Carnivorous Diet Yoghurt
This is a fascinating movie, but I fully admit that this wins its placement on the list from concept alone. So, what's the idea? Killer dessert. After quarry workers discover natural pools of a white, cream-like substance, they find that the goo is both sweet and addictive. Soon after, a dessert company starts selling it to people as a diet alternative to ice-cream, as people who eat it lose weight, and they call it "The Stuff". I have to admit, if there's one issue with this movie, it's the name, but I'm not really sure what you'd call this stuff either... see, the Stuff is an organic parasite, and the reason people lose weight when they eat it is because it's feeding off them from the inside, draining their nutrients and lifeforce. Also, the reason it's so addictive is because it creeps into your brain to control your mind, making you eat more until it drains your body of nutrition, turning you into a hollowed-out zombie full of more of the Stuff.
I don't know if I can recommend the movie since it's a little slow, and one of the main characters is a paranoid, militant right-wing bigot, who is portrayed as one of the heroes, despite being openly regressive and racist. Also, this does lean a little into the comedy, but the comedy isn't as impressive as the satire - this predatory foodstuff is clearly an allegory for the insidious and predatory practices of the food industries, especially for confectionery and desserts, doing anything to spread their products, or market their food as "healthy" so long as it makes them more money. Sure, the Stuff is predatory and it seeks out food when any living thing gets close to it, but it wouldn't have been able to spread anywhere near as far as it did, if it weren't for greedy corporations packaging it, selling it and distributing it worldwide.
But, as much as this film is clearly an allegory, I still don't think you can beat a concept as disturbing as a foodstuff that, when you eat it, it gets revenge by eating you right back.

3. Final Destination

The Monster: Death Itself
We're in the final stretch now, and we continue the list with one of the most inhuman monsters I've seen portrayed in a movie. In Final Destination, a teenager receives a deadly premonition, and in his panic causes a commotion that saves several other people from a tragic explosion. Following this, all of these people begin to die in unusual accidents, and it's revealed that by cheating fate, they've interrupted Death's Plan and Death itself is trying to correct its mistake. It's even revealed in the crash investigation that the cascade failure in the plane would have killed everyone off in a particular order, and the order in which they're dying now is the same as that original design. Death is not personified in this film, although it does appear as shadows and reflections, sometimes even foreshadowing its redesigns as Death rewrites fate. Despite this, Death has something of a personality, one that's almost playful as it puts its devious plans into action, although it does seem to get vindictive the more people resist its plans.
Whilst some of the sequels had poor writing, leading to unrealistic characters and deaths so ridiculous that they felt more cartoonish than creative, the original film had a great premise, well-executed, and portrayed Death as a dedicated master of fate that only turned monstrous if you stepped out of line. Now you may think "but wait... if this is the most inhuman monster, why is it only number three?" Well, this is the most interesting inhuman monster, in my opinion, but ultimately, I don't think it's that scary. In all the movies, it's made clear that you can't beat it, and taking away the hope of survival makes it (in my eyes) less scary, which is part of the reason I hate the sequels so much: it's a foregone conclusion. But, the first one is still a bloody good film, which is how it gets to number three on this list.

2. Christine
The Monster: A Hatred-Driven Car
There have been several stories about living cars, even several about living cars that kill people, but they pale in comparison to the ultimate monster motor vehicle, Christine. What sets Christine apart is threefold. Firstly, the story is a powerful tale of obsession and corruption. The story follows Dennis, a highschooler whose dorky friend Arnie buys a broken-down old Plymouth Fury for just $250 after the previous owner killed himself. As Arnie restores the car, his personality starts to change, until he's obsessed with the car, and the whole while the car kills anyone that gets in its way, or hurts either it or Arnie. Dennis starts to worry that his friend is going down the same path as Christine's last owner - that he's soon going to become yet another one of her victims.
Secondly, what sets this apart from other cars is that Christine is not haunted, it's not possessed by a demon, it's not even a secret alien transformer or machine - Christine is just Evil. That differs a little from the novel, where apparently the car is haunted, but unlike The Mangler, where I felt they changed the story for the worse, I think this improves the story greatly. From scene one, at the Plymouth car factory, Christine is already shown to have a taste for blood and vengeance. It's implied that this car is inherently hateful (dare I say, full of 'Fury'?), only ever using its drivers to feed off their lifeforce - and that's another difference: she lives off her owners, rather than her previous owner living on through her. In fact, that seems to be her greatest ability: after feeding off Arnie's lifeforce for long enough, Christine becomes powerful enough to repair herself, and she will often damage herself on purpose, just to chase down one of her victims, which I think helps to evoke just how much she is driven by her hatred.
Thirdly, this is the best living car movie because, despite being made in 1983, this still holds up today. This is yet another John Carpenter film, and it's just as thrilling, creepy and action-packed as ever. Perhaps it's just because a tale of obsession leading down a path to madness and death is a timeless one. After all, what dangerous paths could the sweet seduction of power not lead us down? I don't know, but in this film, danger is a highway, and Christine will drive you all the way to the end.

1. Oculus

The Monster: A Reality-Warping Mirror
I like monsters with teeth. I like creatures that will stalk up behind you and attack, but when it comes to inhuman monsters, I'm much more mesmerised by a monster that doesn't even need to touch you. And I think the epitome of that is the Lasser Glass, from the movie Oculus. In this film, a man named Tim is released from psychiatric care, after finally coming to terms with shooting his father eleven years prior, only to return to his sister, Kaylie, who has finally finished all the necessary steps to steal an expensive, antique mirror and set up an elaborate trap for it. Tim has undergone extensive therapy to unveil all of his false memories and deconstruct all of his childhood trauma, to disabuse him of his belief in magic, ghosts and monsters. However, his sister is there to rope him into a plan to prove that the real thing that killed their parents was the Lasser Glass, a cursed mirror that has killed 45 people, including their mother and father. This has all the makings of a fantastic psychological thriller, and the movie is done well, mixing flashback with present day, reality and hallucination...
See, the way the Lasser Glass works is that it feeds off living things near it - plants, pets, even electricity and light sources and, of course, people. As it feeds, it grows in power, so that it can manipulate people into dying near it, capturing their image in the mirror to be used for its manipulation. Because, when a human is near the mirror, it can make them hear things that aren't there, see things that didn't happen, feel things that aren't real, and if it's allowed to feed off them long enough, it can even cloud all of their senses at once, to delude them into perceiving a reality that doesn't exist. It can, and will, drive you insane. This is how it kills its victims: by tricking them into doing dangerous things by hiding the danger, or hiding the tragedy that it's making them commit on others. But, despite the mirror using the images of the dead to trick you, and despite Kaylie referencing the glass itself as "haunted" at one point, I don't accept that the Lasser Glass is merely haunted. Just like Christine before it, this mirror appeared to be cursed long before its first blood. It's called the Lasser Glass because the first victim was called Phillip Lasser, but he merely hung it in his house until he died - he didn't cast any spell on the glass, and he wasn't supernaturally noteworthy. His only characteristic of note is that he was the Earl of Leicester, but for all we know he was merely the first member of the elite to die; he may not have even been the first victim. His wasn't even the most tragic or horrific death in the long list of this mirror's victims, so there's no canonical explanation as to why the glass does what it does.
But, the mirror is just made of glass - it's quite fragile, and you'd think it would be easy to just smash the damn thing - but it uses coercion, deception and trickery to protect itself if anyone approaches it with ill intent. The mirror can see into your head as easily as you can see into its reflection. But I think what's scariest of all is what Kaylie and Tim's father says, just before he dies. This isn't much of a spoiler, but when Kaylie tries to tell him that he's lost his mind, he looks in the mirror and says: "This is me. I've seen the Devil, and he is me..."
So, what is this mirror? Is it haunted by dozens of tragedies? Is it cursed to reflect your inner demons? Is it a cold monster that feeds on the warmth and life of living things? Is it the devil, seducing anyone who looks into it to evil? I don't know... but what I do know is that this is the best inhuman monster I've ever seen in film. I highly recommend it.

- - -

And that's my list. What do you think? Do you disagree? If you think there's a greater inhuman monster or a greater movie that features one, tell me about it in the comments below. In the meantime, the main point I want to make is that movies don't have to be about serial killers or crazy people, they don't even have to be about aliens or creatures. Your monster doesn't even have to be a living thing - it could be an evil elevator, a predatory plant, an alien atmosphere, even a monstrous mirror.
The only limitation is that no matter what you choose for your villain to be, you should do whatever it takes to make your story interesting.

I'm the Absurd Word Nerd and Until Next Time, I've been exploring these aspects of horror, but I wonder if there are examples like this in real life. Sure, robots are not your enemy, but something doesn't have to be your enemy to be your antagonist - I might need to look into that.

Wednesday 25 October 2023

Inhuman Monsters - Part One

I wanted to write a "listicle" for this Halloween Countdown, because all of this talk about technology, philosophy and society is very heavy. I wanted to find something that was a little easier to write and a little easier to read, that wasn't so intellectually dense.
However, I hate content aggregation [i.e. stealing content from other creators, in the name of "sharing"], so I insisted on doing my own research. This resulted in me doing even more work for this listicle than I did for the first two AI posts combined.
I enjoyed the research, it was fun, but it was exhausting. So, I genuinely hope you enjoy this, because it was way more work than I anticipated.

The idea is simple. I wanted to find some of the most inhuman villains in scary movies. I'm not just talking about immorality or insanity, since the maniacs, monsters, aliens and the undead are probably more common than humans in popular horror movies.
No, I wanted things that are so inhuman, we can't really understand how they think, or why they do what they do. One of the things I truly love about fiction is the freedom. If you want to write a story about a cheese grater, you can - and someone has probably already written erotic fanfiction about it... "shred me, daddy"
Anyway, so, those were my rules:
- No Ghosts (that's just dead humans)
- No Animals (too related to humans)
- No Talking (language is a human invention - also, talking monsters tend to be written like humans, because writing something inhuman is hard)
There were two other minor rules.
+ Avoid Parody - I looked at all kinds of strange, inhuman things, like Killer Sofas, Killer Condoms, Killer Jeans, Killer Colonic Polyps, Killer Refrigerators & Killer Tyres, but they're often written to be stupid on purpose, not actually exploring the concept too deeply. I didn't watch most of them, so I can't say they all suck, but it wasn't what I was looking for.
+ Avoid Repeats - If I really wanted to, I could have filled the list with killer dolls, there's a LOT of them... (note to self, I might save that for a later Halloween Countdown). So, to make it interesting, I tried to make each item as unique as possible. I really wanted to explore the possibility of what a monster can be, and what kind of fears you come across when you're facing a monster unlike anything you've ever known.
So, let's do this, starting with the Honourable Mentions. Let's get these out of the way: they're interesting ideas, but they broke my rules.

i. Jack Frost (1997)
The Monster: Evil Ice
The fact that this is full of cheesy jokes might have relegated it to "parody", but despite that it's actually kind of clever. This isn't just a snowman wobbling around with a knife - it's killer ice. What makes Jack Frost dangerous is that he can melt, change and refreeze himself. He can slip through doors, he can grow sharp icicle teeth and claws, he can turn to steam, he can freeze people and in one scene he even crawls down someone's throat like a killer slushie.
More importantly, since he can refreeze himself, the obvious solution of "melt the monster" isn't an option. This is a cool concept, so why is this off the list proper? Jack Frost is human. He's a serial killer who gets melted into goo by "chemicals", and merges with the snow. Also, he talks... this is what inspired my "no talking" rule, actually. I realized that most talking monsters were basically humans, with a gimmick.

ii. The Happening
The Monster: Angry Plants
The idea is simple, people start randomly killing themselves all over the globe, and nobody knows what is happening - Title Drop! And the twist is that the thing causing these suicides is plants. They evolved some toxin, with vague suggestions that it has to do with humans ruining the world or something...
Setting aside the fact that that's not how evolution, plants, suicide, or basically "anything" works, the idea of some suicide-triggering bio-toxin isn't unworkable as a concept. But, you have to do it right. To me, this is scary for the same reason suicide is scary: it's an alien concept to neurotypical people.
If you did your research, you could write a story that explores suicidality and self-mutilation, the emotions and reasoning associated with them, and perhaps explore actual treatments. Instead, this movie - and others like it that explore "suicide-triggering attacks" (I'm looking at you, Bird Box) - is so poorly written that the characters aren't suicidal, they're just "self-destruct zombies". Replace the suicide toxin with "deadly poison", and you don't change any of the emotional impact of the movie. So, interesting idea, but the movie was so bad that I had to take it off my list.

But that's enough of that, so let's get things going with...

The A.W.N.'s TOP 10 HORROR MOVIES ABOUT INHUMAN MONSTERS (10-6)

10. The Caller
The Monster: A Time-Warping Telephone
In my research, I tried looking up a killer phone. I'd heard of movies like Murder by Phone, One Missed Call, or even The Black Phone, but none of these featured inhuman monsters; they were either ghosts or regular serial killers (and the black phone isn't a villain, those ghosts try to help). But then I found The Caller. Mary, a divorcee trying to rebuild her life, moves into a new apartment and finds a black rotary phone that she likes the look of. But, she starts receiving calls on this phone from a strange woman who claims to live in her apartment. Although scared at first, Mary befriends the woman, until she learns that the woman committed suicide several years ago - the woman she's talking to is calling from the past, and her actions are changing the present. The more they talk, the more Mary learns how mentally unstable this woman is, as she keeps changing the past, even killing people.

It's a creepy idea. It's still low on this list because the main horror element is definitely the crazy woman in the past, but the main antagonist, and threat, is this phone - a phone that calls itself from the past. The killer didn't create this phone, she doesn't even realize how it works at first, and without it she wouldn't be anywhere near as dangerous. But, I admit, this might be less horror and more "sci-fi adventure" if someone else had picked up the phone, which is the reason it's the lowest on the list.

9. Down / The Shaft
The Monster: A Cyborg-Enhanced Elevator
Elevators are already kind of scary: a claustrophobic box, moving up and down a massive shaft - if those cables snap, you'd plummet to the ground. Real elevators are very safe, but what if one was evil... a true "hellevator". In this movie (called "Down" in America, and "The Shaft" in Australia) a lightning strike causes the elevator of the Millennium building to start malfunctioning. The next day, the elevator traps several women inside; after freeing them, the owner calls in elevator servicemen - main character Mark and his senior co-worker Jeff - but they see nothing wrong with it, so they leave. The next day, a blind man and his dog fall down the empty shaft, and that night one of the security guards is decapitated after getting his head caught in the doors. Mark becomes obsessed with investigating, since the elevator seems to have a mind of its own. He pairs up with sensationalist tabloid writer Jennifer after she quote-mines him for an article, and they start investigating. Now, I'm going to spoil one of the twists here, so I hope you're ready for this, it's a doozy... the elevator company has been trying to cover up the cause of these murders, because they're responsible - the elevator was a secret experiment by one of the research scientists at the elevator company, who used one of his failed military experiments, a bio-chip that uses dolphin brain matter to create a densely-packed, powerful microchip. After the lightning strike, the elevator grew an entirely new brain, causing the elevator to literally have a mind of its own, and apparently it's out for blood. That is insane, and kind of hilarious, but it also makes for an interesting movie. If this thing had a higher budget, it would be amazing. But, as it is, I had to put it low on the list.

8. The Mangler
The Monster: A Deified Laundry-Machine
I'm just going to warn you up-front, I looked at a whole lot of movies based on Stephen King stories for this list and quite a few even made it on. King seems to adore inhuman monsters, from vengeful trucks powered by alien radiation, to accursed hotels built on Indian burial grounds, all the way to the inevitable passing of time itself. These are some amazing stories... unfortunately, a lot of the movies are poorly done. I decided to focus on movies I enjoyed, and this one is pretty clever. This movie is based on one of King's short stories, and it's about an industrial-grade, old-fashioned linen press, called a mangle. After a series of industrial accidents, the mangle begins to act strangely, and it's revealed that some of these unusual accidents involving a young woman would have spilled "the blood of a virgin" into the machine, as part of a demonic ritual, awakening the monster. Unfortunately, this doesn't have the same horror ideas of the original short story, as they changed when the demon was first summoned, but I still enjoy this story.
Industrial factories and machinery are often inherently dangerous, and there's a huge risk of horrific accidents. It's been used in horror before, like the industrial accident that kills Herbert and horribly disfigures his corpse in The Monkey's Paw; in the movie The Machinist, one character has an arm mangled in a machine and another has toes for fingers after losing them to a lathe; and a lot of the more horrific traps in Saw are inspired by factory machinery. Industrial accidents are horrifying, and in this movie, the villain is an industrial accident incarnate. If that doesn't deserve a place on this list, I don't know what does.

7. In the Mouth of Madness
The Monster: Cursed Horror Novels
I'm not gonna lie, I love stories and books, so I was looking everywhere for some villainous books. I found some evil books, but none of them were actually villains - the Book of the Dead from Evil Dead; the pop-up children's book from The Babadook - these are all vectors for the actual monster to come out, they don't harm the reader directly. Then I found In the Mouth of Madness, a movie by John Carpenter about special investigator John Trent, who was hired by the publishers of world-infamous horror author Sutter Cane to retrieve his latest manuscript, the titular book "In the Mouth of Madness".
And the publishers are worried because they know they have a frantic readership willing to pay a lot of money - there have been riots outside of bookstores that ran out of pre-orders for the book; people are fanatical about these books. The idea here is that Sutter Cane writes cosmic horror, about alien creatures and otherworldly gods older than the universe, and these books are driving people mad because his horror series is basically the Lovecraftian Bible, and the fanaticism and belief of his fandom is making these monsters stronger.
If you're wondering whether Sutter Cane himself is the human mind behind this, fear not - he's as much a puppet of these beasts as the rest of us; they're not his fiction so much as his revelation. But, I won't go into too much detail here, because the movie is actually really good; it's John Carpenter, after all, and there's a cool metafictional aspect as well. But the idea of books that drive you mad is genuinely an inhuman monster, and I'm glad I found them to put on this list.

6. The Ruins
The Monster: Prehensile Parasitoidal Vines
There aren't many movies which have plants as the monster, and those that do often play it for laughs. Little Shop of Horrors is a comedy musical more than a sci-fi horror (and the original 1960 film was also a horror comedy). And whilst I do like The Day of the Triffids, they're more aliens than plants, and I was trying to avoid aliens for this list (although, admittedly, that might have made my job easier). Also, the triffids in the movie (and the original book they're based on) aren't really that scary. However, the monster in The Ruins is. A group of Americans are on vacation in Mexico, and another tourist offers to take them to some secret ruins that are "off the map". The group gets to an ancient Mayan temple covered in vines, and they're taking photos when they're surrounded by several locals who shoot anyone who dares to step off the ziggurat.
In time, they discover why. The vines covering the temple aren't ordinary plants, they're aggressive, carnivorous and parasitic. If you get close, they slowly wrap their vines around you, absorbing you and slowly feeding off your nutrients. If you touch one of their thorns, it seeds tendrils under your skin that grow, feeding off your blood as they spread throughout your body. They're even shown to have limited intelligence, allowing them to lure in prey to be ensnared by vines, or infested by thorns.
This was not a popular movie, because people found it poorly written and excessively gory, but I think the plot is pretty good (it's apparently based on a book) and I think this did a great job at making something as docile and beautiful as a plant actually kind of scary.

- - -

Alright, this is taking way too long, and way too much effort, so I'm going to have to split this one in two. Come back tomorrow to see the rest of the list.
Until then, why not leave a comment about some of your favourite inhuman monsters from films? I'd love to learn about more. Can you guess which movies will make the top five of my list, particularly number one? (I honestly couldn't have - until I was reminded of it in my research, I'd actually forgotten about that movie.) So, tell me about any of the inhuman monsters you know, especially the ones you've forgotten, and I'll see you in Part Two.

Tuesday 24 October 2023

Turing, Tested

In pop culture, you may have heard of something called the "Turing Test", often referred to as something which can either prove that a computer is particularly powerful, or that it is artificially intelligent (depending on the media you're watching).
But, what on Earth is the Turing Test? Well, it was a test devised by a man called Alan Turing.

Turing was a brilliant man; his understanding of theoretical computing and cryptanalysis was important in World War 2 for creating the bombe, a decryption machine that could decipher the Enigma Machine codes used to encrypt the secret military commands of the Nazis. He also developed the mathematical model for the "Turing Machine", which proved the capabilities and limitations of even a simple computer in running complex programs.
There's a lot more to the man than that, from eccentricities like his marathon-running speed and stamina, to his propensity to ride a bike while wearing a gas mask, even to his unfortunate death by ingesting cyanide. His death came two years after he was convicted for homosexuality (to which he pleaded guilty) and given probation, which included undergoing a form of chemical castration to lessen his libido - almost certainly this mistreatment was partly responsible for his depression, and led to him committing suicide.

But for today, we're looking at one of Turing's contributions to the field of artificial intelligence research: the Turing Test. In his paper "Computing Machinery and Intelligence", published in the psychology and philosophy journal Mind in October 1950, Turing discussed the physical, practical and philosophical possibility that a machine could think.
It's a fascinating paper, and in it he starts by saying that it is difficult to quantify what one means by "thought", since even the programmable machines at the time could be described as both "thinking" and "not thinking", depending on the particular definition. So, instead of wasting time trying to find the "best" meaning of the word, he devised a simple thought experiment.

He called this "the imitation game", and in the example given, he suggested having two subjects: (A) a man, and (B) a woman (hidden from view, and communicating with nothing but written responses, to keep the game fair); they are each interrogated by a third subject, (C) the interrogator, who doesn't know which of the two subjects is which (they're labelled [X] & [Y] at random).
It is the interrogator's goal to ask probing questions to determine which of [X] & [Y] is (A) the man, and which is (B) the woman. However, this is complicated by the fact that (A) is trying to convince the interrogator that he is (B), whereas the actual (B) is trying her best to help the interrogator, and give honest answers.
This is a silly, little game... but Turing then reveals the true purpose of the imitation game...
"We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’" - A. M. Turing
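For the programmatically minded, the structure of the game is simple enough to sketch in code. This is only an illustrative toy, not anything from Turing's paper (though the two sample answers are paraphrased from it); the function names and the coin-flipping "naive interrogator" are all my own invention:

```python
import random

def imitation_game(respondent_a, respondent_b, interrogator, questions):
    """One round of the imitation game: respondent_a tries to deceive,
    respondent_b answers honestly, and the interrogator sees only
    written answers under the randomly-assigned labels 'X' and 'Y'."""
    labels = {'X': respondent_a, 'Y': respondent_b}
    if random.random() < 0.5:  # hide which subject is behind which label
        labels = {'X': respondent_b, 'Y': respondent_a}

    transcript = {label: [answer(q) for q in questions]
                  for label, answer in labels.items()}

    guess = interrogator(transcript)       # interrogator returns 'X' or 'Y'
    return labels[guess] is respondent_a   # True if (A) was correctly found

# Toy players, purely for illustration:
deceiver = lambda q: "My hair is shingled, about nine inches long."
honest = lambda q: "I am the woman, don't listen to him!"
naive_interrogator = lambda transcript: random.choice(['X', 'Y'])

questions = ["Will X please tell me the length of his or her hair?"]
wins = sum(imitation_game(deceiver, honest, naive_interrogator, questions)
           for _ in range(1000))
print(f"Naive interrogator caught the deceiver in {wins} of 1000 rounds")
```

A guessing interrogator identifies (A) about half the time, which is exactly the baseline Turing has in mind: the machine "wins" if it can keep a competent interrogator's success rate down near that coin-flip level.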

I find it somewhat ironic that, since (A) is the man in this thought experiment, in the course of explaining how the imitation game works Turing is effectively asking "can a machine pretend to be a woman?", and I wonder if that's in any way related to the fact that the majority of robots are designed and identified as female-presenting. But that's wild speculation.
But, as I said in my earlier post, it's not mere speculation to say that this test is directly responsible for the prevalence of chatbots. Since the test focuses on a robot's ability to have a conversation it's easy to see how this influenced programmers to focus their attention towards the skills that would help their machine conquer the "Imitation Game", or Turing Test.

But, the thing is, the suggestion of "the imitation game" was merely the opening section of the paper; the rest was Turing delving into the issue further, discussing which "machines" would be best for this test, talking about how these machines fundamentally function, and providing potential critiques of both his test and its potential conclusions. Also, I have to include this...

"I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent, chance of making the right identification after five minutes of questioning." - A. M. Turing
Remember, this article was written in October 1950, so we are now running about 23 years past his fifty-year prediction, but if ChatGPT is as good as advertised, perhaps we have conquered this philosophical test.
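For a sense of scale, Turing's figure of 10⁹ is conventionally read as binary digits; converting it to modern units (my arithmetic, not Turing's) shows just how tiny the machine he imagined was:

```python
# Turing's predicted "storage capacity of about 10^9", read as bits,
# converted to modern units (8 bits per byte, 10^6 bytes per MB):
bits = 10**9
megabytes = bits / 8 / 1_000_000
print(f"10^9 bits is roughly {megabytes:.0f} MB of storage")  # roughly 125 MB
```

That's about 125 megabytes - less storage than a cheap USB stick - which makes the ambition of his conversational prediction all the more striking.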
But, Turing was also investigating whether this test he proposed was a valid one by considering and responding to possible critiques of this new problem he was proposing to test machine intelligence. And this is where I take some issue, because whilst it is a very interesting paper (and you can find it online, I highly suggest you read it), I think this is where Turing fails.
Turing considers nine possible objections to the Turing Test:

1. The Theological Objection
Basically, "Robots don't have Souls". Turing, like myself, doesn't appear to believe in souls, but his response is rather charitable, which is to say: if the soul exists, and souls are required for thought, then humans who make A.I. would not be usurping God by creating souls, but rather "providing mansions for the souls that He creates".

2. The 'Heads in the Sand' Objection
Basically, "the whole idea of thinking machines is terrifying, so let's ignore it and hope the problem goes away", which Turing dismisses out of hand, but mentions specifically to point out that it is the basis for some other critiques, such as the previous objection.

3. The Mathematical Objection
Basically, "digital machines have basic, inherent mathematical and logical limitations (as proven by some mathematical theorems), so they must be inherently limited". Turing delves into some of these theorems in detail, but ultimately his response is that human brains are also flawed and limited, so whilst this is a valid point of discussion, it doesn't adequately refute the Turing Test.

4. The Argument from Consciousness
Basically "computers cannot think, because thought requires conscious awareness, and we cannot prove that machines have conscious awareness". Here, Turing rightly points out that we also cannot prove that other humans are aware - hence the philosophy of solipsism - and he presumes that unless you are a solipsist, believing yourself to be the only conscious agent, then it is hypocritical to doubt the consciousness of a non-human.

5. Arguments from Various Disabilities
Basically, "a computer may perform some impressive feats, but it cannot... (do this human ability)", and Turing provides a selection of proposed human abilities that computers lack, including: be kind, have friends, make jokes, feel love, make mistakes, have emotion, enjoy life... and many more.
Turing speaks about some of these in further detail, but ultimately he sees this as a failure of scientific induction, as proponents only have empirical evidence that machines "have not" performed these abilities, not that they "will not".

6. Lady Lovelace's Objection
Basically, "an analytical engine cannot create, or originate, anything new, as it can only perform as humans are able to program it to perform". Lovelace was referring to Babbage's Analytical Engine, but the premise that computers only do as they're told is a valid one. Turing's response is that nothing is truly original; everything comes from some inspiration or prior knowledge. And, if the real problem is that computers do as they're told, so their actions are never unexpected or surprising, Turing dismisses this by explaining that machines surprise him all the time.

7. Argument from Continuity in the Nervous System
Basically, a computer is a "discrete-state" machine, meaning it clicks from one quantifiable arrangement to another, whereas the human brain is a continuous, flowing system that cannot be quantified into discrete "states". Turing points out that although this makes a discrete-state machine function differently, that doesn't mean that - given the same question - it cannot arrive at the same answer.

8. The Argument from Informality of Behaviour
Basically, "a robot can never think like a human because robots are coded with 'rules' (such as if-then type statements), but human behaviour is often informal, meaning humans break rules all the time". Turing (as I see it) refutes this as an appeal to ambiguity, confusing "rules of conduct" for "rules of behaviour" - a human may defy societal expectation or rules, but they will always abide by their own set of proper behaviours, even if these are harder for us to quantify.

9. The Argument from Extra-Sensory Perception
Basically, Turing found arguments for psychic phenomena, such as telepathy, rather convincing (although he wished for them to be debunked), and he argued that since computers have no psychic ability, they would fail any Turing Test that includes it. Ironically, whilst he considers this a strong argument, and suggests it can only be resolved by disallowing telepathy (or using, and I quote, a "telepathy-proof room" to stop psychics from cheating), I do not see this as at all convincing, so I will ignore it.

Finally, after dismissing the contrary views of the opposition, Turing puts forward his affirmative views of the proposition.

One of my biggest issues with the Criticisms is that he doesn't properly explore Lady Lovelace's Objection, ignoring the claim that robots "do as they are told to do", focussing instead on two related claims: first, "robots can't do anything original", and second, "robots can't do anything unexpected".
I was prepared to write out an explanation as to the error here, but the paper beat me to the punch.
In his paragraph on "Learning Machines", he first grants that the position seems to be that robots are reactive, not active. They may respond, but afterwards will fall quiet - like a piano, when you press a key - they do nothing unless we do something to them first. But, he also suggests that it may be like an atomic reaction: introduce a stray neutron and it reacts, then fizzles out. However, if there is enough fissile material in the pile, a single neutron can set off a self-sustaining chain reaction, and Turing asks the question: is there some allegorical "critical mass" by which a reacting machine can be made to go critical?
It's a slightly clunky metaphor, but what he's supposing is that a powerful enough computer could have the storage and capacity to react "to itself", and think without a need for external direction.

It's a curious idea, not the most convincing, but Turing doesn't propose it to be "convincing", so much as "recitations tending to produce belief".
He also then goes on to explain the "digital" capacity of the human brain, how one might simulate a child's mind and the methods to educate it to think in a human-like manner; and the importance of randomization.
He finally concludes the paper by saying that, whilst he is unsure of the exact means by which we could achieve a machine that can think, there's a lot more work that needs to be done.

- - -

This is a fantastic paper, well worth reading and it serves its purpose well. However, there is one major flaw, which Turing addresses, but does not in fact dispute... The Argument from Consciousness.
Yes, it's true that we can't necessarily prove that a human has consciousness without observing their "awareness" directly; but I think it's dismissive to say that this makes all attempts at detecting consciousness impossible.
After all, whilst it's philosophically difficult to prove that a human's actions are a product of consciousness, it's comparatively easy to prove if a computer does not. We don't know how human consciousness works because we're still not entirely sure how the brain works, but we can (and do) know exactly how a computer's "brain" works. We can pull it apart at the seams, we can investigate it thoroughly, we can identify what code does what.

And before any of you come in and bring up the "black box problem" with neural networks - the idea that we don't know how some A.I. programs think because machine learning rewrites their coding in unusual ways... well, no. It's true that it's incredibly difficult to figure out why a pre-trained A.I. program does what it does. If you wanted an explanation of exactly how a program came up with every single answer it ever provides, that's impractical, since every word produced by these programs goes through several thousand iterations of the program - it would take an absurd amount of time to answer that question. However, impractical is not impossible. If you so desired, you could go through the code, line by line, and see how each aspect interacts with the others - it is explicable, it's just messy.
And, we also know the basic "input-to-output" means by which this was achieved. Turing may argue that this is analogous to the human mind, but I submit that since we've known exactly how computers work for a long time, yet we still don't know how consciousness works, it's clear that our discoveries in computer science aren't related to our discoveries in neurology.

I firmly believe that John Searle's Chinese Room, which is an accurate model of the way that our computer programs currently function, debunks the validity of Alan Turing's Imitation Game for proving thought. So long as computers operate this way, even a computer that can pass the Turing test is merely behaving like something that thinks, not in fact thinking.

However, I'm not sure if that's a problem for Turing. I don't believe in the soul, I don't think there's some ghost that runs your body. I also don't believe in magic; I think that human experience is ultimately material, that the human mind is a system that arises from the brain. I do believe in consciousness, will and subjectivity. But, based on some of Turing's arguments, I wonder if he does.
It seems as though, to Turing, computer programs and consciousness are effectively interchangeable. Yes, the human brain is a continuous machine, not a discrete-state machine, but who is to say it's not a machine? There is a realm of philosophy called Mechanism, which states that the universe is reducible to completely mechanical principles - matter and motion, physics being the interaction of the two. It's a form of Materialism, and if it is accurate, then the human mind is also nothing but matter and motion.
I can't say that Turing was a Mechanist (especially since the meaning of the word seems to have changed in philosophy, a few times), but it does seem that he feels that human consciousness is not in any way inexplicable - give a machine a bit more storage and make it run faster, and it can do what brains do. It's kind of a radical view.

However, and I'll leave you with this, we don't know how consciousness works, we haven't identified the consciousness "spark", the mechanism of awareness, the principle of thought... and until we do, there's no reason to believe that Turing is wrong. And, if he is in fact right, it entirely breaks down the subject/object dichotomy. We already know that the definition of "life" gets fuzzy when you go back far enough. Thought appears to be a function of life, but who is to say that the definition of "thought" won't become a lot fuzzier once we make machines that can think? Honestly, I don't know. I'm going to have to think about it.

Monday 23 October 2023

Sexual Objects

 

Okay... let's talk about sexbots.

Trigger Warning: Misogyny, Rape, Violence Against Women & Child Molestation - if any of these are triggers for you, take whatever precautions you deem necessary, as these topics are both heavily referenced and discussed in detail.

We've discussed Artificial Intelligence long enough, it's a burgeoning new technology. And, in the words of Dominic Noble "one of the first questions that mankind asks about any new piece of technology is 'can I fuck it'?", and the answer is Yes.
We've had sex dolls for a long time, in fact much longer than I ever thought. As early as the 1400s we have examples of advertising for life-sized sex dolls made out of cloth (I presume using lard or other lubricants for the "hole", but I didn't look into it). These dolls were called "dame de voyage" in French, or "dama de viaje" in Spanish, literally "lady of the voyage", marketed towards sailors traveling overseas for months on end.
So, we've been fucking fake women for a long time, and I do mean women - not men. Sure, we've had dildos for a long time, but almost all sex dolls are female-presenting. Women seem less interested in sticking their fake penis onto a fake man... "fairer sex" indeed.
But, modern sex dolls are quite the upgrade from either the cloth or "inflatable" sex dolls of yesteryear - made of silicone, with a steel skeleton frame, poseable limbs and "penetrable cavities", modern sex dolls are much more realistic than their ancestral sisters.

So, we have A.I. chatbots, we have sex dolls... put them together, and you have a functioning bimbo bot.
Well, I say "functioning", but even the most advanced real dolls don't have moving limbs. Servo motors that are powerful and dynamic enough to move arms around realistically, safe enough that they won't hurt customers (or the doll) and also cheap enough to be used for a commercially available sex doll... don't exist.
So, even the most advanced sexbots, like the RealDoll model "Melody", still don't have moving limbs, it's basically an upgraded head attached to a standard RealDoll body. But, for some people, this one addition was the beginning of the end.
Sorry, by people I mean "weird, horny men", and by the end, I mean "the end for women". Because there are some men who seem to genuinely believe that women are an endangered species now that fuckable robots exist.

And this is deeply disturbing. I wish I could say that this was just fearmongering, but there are a lot of incels and men's rights activists and MGTOWs who openly declare that no man needs a woman anymore.
Seriously, I've done a lot of research into sexbots for this article, and inevitably the comments sections are flooded with sentiments like "this is what you get, feminists, we don't need you anymore" or "sex whenever you want, and no risk of nagging, periods or STDs, what more can you need?" and other despicable comments decrying women for being too frigid, opinionated or privileged, and blaming them for allowing them to be replaced by a superior sexual partner: the sexbot.
Let's set aside the fact that sexbots don't have a womb, so they can't have any children, because not all women have wombs either (or want children), and I really don't want these neckbeards procreating anyway...
More importantly, a sex robot is not a human being, it's not conscious, it's not alive, it's not a subject, it's just an object... an object that you can have sex with.
In case I somehow haven't spelled it out clearly enough, these people are literally equating women with sex objects. Or in some cases, they're even saying that sexbots are better because they are sex objects, literally saying that they prefer women that are sex objects.
Do these morons realize they're saying the quiet part out loud? Perhaps I gave misogynists too much credit, assuming that they'd have any degree of self-awareness.

Now, these are a minority. A loud minority that I wish were receiving psychiatric help for their mental health issues, but a minority nonetheless. Most people don't want to replace women with sexbots, because they see a sexbot as an expensive sex toy, and I think that's fair. Most people also tend to think that sex dolls are only bought by sad, desperate or lonely people, and yeah, I understand that as well. It's way too easy to dehumanize people for being weird, and people see those that own sex dolls as sick and disgusting. As someone with automatonophobia, I find sex dolls horrifying - they look like dead bodies to me, with a fucken creepy dead-eyed stare and fingers that bend in unnatural ways.
I don't understand how anyone can find that attractive, let alone have them sitting silently in the corner of their room at night or lying in their bed expectantly, or god-forbid even waiting quietly in a dark closet next to your shoes. But that's just my opinion - I also think prawns taste disgusting, but that doesn't make people who love eating prawns degenerate tongueless freaks.
See, a sexbot is a sexual object, but I don't think that has to be a bad thing... a dildo is also a sexual object.

In fact, if we see a sexbot as little more than a sex toy, surely that sets a lot of these concerns aside, doesn't it? Indeed, as discussed at length in "Robots, Rape & Representation" by Rob Sparrow (published in the International Journal of Social Robotics), a sexbot cannot be "raped", because lacking sentience it cannot provide, or withhold, consent. Even if we consider the idea that they can be programmed to "act" like they give consent, or even act as though they do not, if someone has sex with a robot that does not perform that act of consent, it too is not rape...
If you allow me to provide an example: A vibrating dildo is designed to vibrate when used for sex and you know it's working when it buzzes, but nobody would call you a rapist for masturbating with a vibrating dildo without putting the batteries in it just because it wasn't "acting like it was ready for sex".
The paper even discusses how, even if you program a robot to act as though it withholds consent (for the sake of indulging in rape fantasy), this is still not rape, as there's no person being violated. It does accept that such an act would "simulate" rape, but simulating rape, or enjoying the simulation of rape, is not a crime any more than enjoying the simulation of murder in videogames, or the simulation of torture in horror movies. The paper goes on to discuss the philosophy and morality of simulated rape, and how virtue ethics provides a model for determining why it's immoral... it's a fascinating read, and I recommend it, but I have a simpler, gut-reaction argument that explains the problem here.

Consider this thought experiment... let's accept for a moment that having sex with a sexbot that does not actively consent does not count as rape, and it is not a crime because there is no victim and no violation, and a sexbot is little more than an oddly-shaped sex toy.
If that were the case... then would there be a problem with designing sexbots that looked like children?

I, and I assume most people, find that idea abhorrent. But, there's no children getting hurt here and we agree that it's not rape - it's just an object, it's not a real child, it's not a real person, it's not even alive...
But that's not the point, is it? The point is what it says about the desires of the one using a sexbot for something like that. It's not that the doll might cause the person to act that way - it's that the person's desire to act that way in the first place is disturbing, and there is a complicity in not only permitting such acts, but condoning them by letting someone commit them uncontested.

But we wouldn't do that, we know it's wrong - in Australia, it's illegal to make, own, or import sex dolls that are made to look like minors. I learned that when doing research and stumbling across a book called "Sex Dolls, Robots and Woman Hating" by Caitlin Roper. In the book (and in interviews discussing the book, as I have not actually read the book myself) Roper discusses how not only do these types of dolls exist overseas, they're sometimes advertised to look like they're scared or crying. Yep... that's a fact that's gonna haunt me for the rest of my life.
But more relevant to the topic of sexbots in particular, Roper also talks about how the argument that nobody is being hurt, and that raping robots doesn't cause people to become rapists, is true but a smokescreen. Of course media doesn't generate immorality, but that's also not the point - nobody thinks that simulated rape causes rape any more than simulated murder causes murder. But it can perpetuate certain attitudes and beliefs.

It's the same as saying that a single muffin will not make you fat - of course it won't, it's a single muffin, you need a little fat in your diet - but as part of a larger diet of unhealthy food, it will affect your health and metabolism. Just like how a media diet saturated with unrealistic portrayals of women - one that objectifies, dehumanizes and commodifies their sexuality, beauty and companionship whilst devaluing their emotions, opinions, social worth and equality - will also lead to unfair treatment of women; yes, even to the point of ignoring or condoning violence against women, and rape.
This is rape culture, or complicit culture (although my preferred term never caught on... c'est la vie); this is people blaming rape victims for "dressing too provocatively", this is people excusing college boys from rape because "boys will be boys" and they "made a mistake". And, yes, it's saying that we should replace women with sexbots because women "reject men" and "have periods".

So, sure, a sexbot is just an expensive sex toy, and it's not the end of the world - it's not even the end of women. Life will find a way, that's what it does. But, we also need to recognize that it is social and psychological junk food, and that sex with sexbots does raise issues regarding consent and unhealthy attitudes towards how we treat and value women in our society - just as pornography, online content and all manner of tropes in media can perpetuate these concepts.
And if you genuinely think that a sexbot is the same as or better than a woman, as the kids say, you need to go outside and touch grass.

In conclusion, do you know what these people remind me of? In 1965, Penn State researchers were doing a follow-up to an earlier experiment, trying to identify triggers for "social bonding", and they discussed their findings in the paper "Stimuli eliciting sexual behavior" (by Schein, M.W., & E.B. Hale). It describes how they propped up a taxidermied female turkey in a pen and put it before a horny male turkey (i.e. one deprived of sex for a few days during mating season). They found that male turkeys still initiated a mating dance with these models, and would try to mate with them. In fact, they weren't bothered if the researchers removed the legs, or wings, or even the body - even if it was just a head on a stick, the male turkeys were just as likely to try to mate with it.
Now, this has to do with how birds identify one another, but you can't tell me that the current models of sexbots that put a chatbot into an upgraded head aren't literally this... it's a head on a stick, that people want to have sex with.
Until next time, I'm the Absurd Word Nerd, and we may think humans are the dominant species on this planet, but shit like this just proves that we're all a bunch of turkeys.

Sunday 22 October 2023

More than Human


I've been talking about artificial intelligence a lot for this Halloween Countdown, because I have a lot to say about it. But, I get that it could get a bit much... after all, in my experience, everyone is talking about A.I. right now - it's all over news, media, art and culture. So, okay, today I'm going to talk about something else.
How about Superheroes? (ha, ain't I a stinker?)
Fine, maybe they're talked about even more than A.I., but when I was considering the theme of "inhuman" to discuss robots and A.I., "superhumans" was one of the first things that came to mind, and I have a lot to say about them and inhumanity.

Superheroes are really cool, and one of the reasons they're so popular is because they are wish fulfillment. It's a fantasy, to be strong, powerful and beautiful. Male superheroes are usually muscular, young and handsome and always get the girl, and female superheroes are usually powerful, young and sexy and wear revealing outfits - according to TV Tropes, the Most Common Superpower is having big breasts.
Unfortunately, a lot of this is heterosexual, male wish fulfilment; not all girls want to have a thin waist, and a slinky spine that can show off their bum and boobs at the same time, and not all men are straight and care about getting any girl, let alone a hyperfeminine model in a skintight catsuit.

But, besides the whole sexist legacy of superheroes that still affects comics to this day, one of the issues with these being wish-fulfilment fantasies is that they're unachievable.
We're meant to aspire to superheroes, and in fact a lot of people have spoken about superhero characters as mythological - they're meant to be a symbol of morals, of truth, of justice or even just of kindness. But, superhumans aren't just really great humans... they're "above humans" - that's what the prefix super- means: something above, greater than, more than.
Often, superheroes are an allegory for some kind of injustice, some philosophy which the hero is standing for, but if you're presenting the iconic hero as someone who has power greater than any human can achieve, with inhuman strength, speed, ability, intelligence or morals... how can we possibly achieve that? I worry that superheroes, by being so much more able than humans, actually make their morals seem unachievable.

Now, that's a little pessimistic. After all, in these movies, often the villains are also superhuman. Sure, you need a Captain Planet to clean up the oil spill from a supervillain like Hoggish Greedly, or to protect the animals from a Looten Plunder; but when facing earthly problems, even a kid like you can be an earthly hero - the power is yours!

That's what Captain Planet wanted to teach us anyway, and that's fair. But, I still can't help but notice that a lot of modern superheroes solve their problems with violence. Yes, we should be willing to fight for what we believe in... but that fight is often meant to be metaphorical - I can't think of a single Marvel superhero in the MCU who hasn't thrown a punch. Seriously, can you name one who has never tried to punch their problems away?
This problem is twofold, because not only does it normalize violence, but it also reduces problems to ones that are purely physical. If you can't represent a problem with something that has a face which you can punch, then it's not a problem that a superhero can solve. But not all problems are physical...
Sexism, Racism, Homophobia, Capitalism, Corruption, Tyranny, Inequality. Systemic problems, all, but a superhero can't fight them unless there's one big, bad keystone villain behind them, whose defeat ends the problem.
I won't spoil anything explicitly, but in movies like Black Panther, Black Widow & Captain America, heroes face issues of racism, sexism & Nazism; in series like Daredevil, The Falcon and the Winter Soldier & Loki, heroes fight corruption, injustice & tyranny...
But in every case, they stop these systemic issues by finding the one person responsible, usually a supervillain, and punching them in the face - or the equivalent, using arrows, magic, lasers or whatever their gimmick is. I'm not dismissing these movies, I like these movies, but you can't deny that they boil problems and conflicts down to the actions of "the people that do the bad thing", and then solve it by removing them from the equation.

That's not just inhuman, that's alien. If you think the way to stop inequality is "find the person who caused inequality, do a backflip and snap his neck", then I don't think you know what inequality is.

But, okay, these tend to be action movies... and some of this is to be expected. Stories are meant to be entertaining, after all, and many stories pay lip-service to these ideas whilst still being interesting. After all, whilst Captain Planet destroyed robots and stopped forest fires with his super-breath, he didn't exactly stop to pick up trash every day. There's a reason he left that crap to the pre-credits "educational segment": otherwise the show would be boring. Unfortunately, in the real world, solving systemic problems takes education, political activism and protests - most of which aren't necessarily "boring", but it's sure as hell not what I want in my sci-fi action movies. It's meant to be thematic, that's fine...

But, what's not fine is that when you look at movies thematically, solving systemic problems is not only "not heroic", it's downright villainous.
Almost half of the villains in the MCU want to change the world. Sometimes, sure, it's because they're selfish, greedy or evil - Iron Monger wants money and power; Abomination wanted military might; Whiplash wanted revenge; Red Skull wants Nazis to rule the world; Loki wants royalty and power & Malekith wanted to destroy life and light to empower dark elves... do y'all remember Malekith? the "Thor: The Dark World" villain...
Anyway, my point is, they want to destroy for their own ends, to get their own power. But, that's not the only change supervillains aspire to.
Ultron's ultimate goal was world peace; Iron Man first fought Captain America for the sake of transparency and accountability (prior to revenge); The Vulture's goal was economic freedom for an oppressed lower class; Killmonger's goal was social freedom for an oppressed ethnic minority & Thanos's goal was to prevent societal collapse, on a cosmic scale.
You could even argue that Ego's goal in Guardians of the Galaxy: Vol 2 was ultimately family and community, but that might be pushing it... either way, these supervillains are fighting for change, and sometimes they're even fighting for good change. Obviously, if your goals are greedy or selfish at the cost of others, that's wrong... but what of the ones fighting for good? Well, they usually do that by, say, killing dozens, hundreds, thousands, even millions, billions of people - hell, I don't even know what "-illion" of people Thanos killed, but I'm pretty sure it was half a zillion.

Their goals are worthy, but their methods often aren't. And this is worrying, because the more that superhero movies become mainstream, the more we start to absorb their tropes.
Consider this: even if you don't know who the villain will be in a story - let's say it's either kept a secret, or you just avoided all the marketing - unless they're covered in blood and screaming like a maniac, the obvious giveaway will often be that it's someone confident and charismatic who wants to change the world.
And yes, some confident charismatic people who want to change the world are bad... I'm just going to say the word "Hitler", we will acknowledge Godwin's law, and the fact that Nazis suck, but then we're going to move past it and look at more examples: Susan B. Anthony; Martin Luther King; Nelson Mandela; Harvey Milk; Sylvia Rivera; Greta Thunberg.
These people look at the problems, speak out, and have changed the world for the better.

Superheroes are reactive, which takes away a lot of their agency, but more importantly, their goals are often to stop people changing the world. No, I don't want people to die, and I don't want some greedy villain to get more power, money or meaningless revenge... but, if all superhero fiction had an overarching theme, it would appear to be: "true heroes sacrifice everything to keep things the way they are"... which is a depressingly regressive point of view.
The only way to improve the world is to change it, and sometimes, yes, that does mean we have to destroy what we once had. Change is scary, especially if you don't trust the person doing the changing, but I just want to ask one thing...
Rather than React to supervillains changing the world for the worse, when will a superhero Act to change the world for the better?

I'm not saying superhero movies are bad, or that you shouldn't enjoy them. I'm just saying that if there's one thing that superhero movies are missing, right now, it's change. I will keep watching them, I am a geek after all, but I won't truly be happy until I see a superhero change the world.

I'm the Absurd Word Nerd, and I think this was an apt post. All this talk about how A.I. is dangerous might make people think that I don't want change, but I do - admittedly I prefer it when it's slow and manageable, but I do like progress.
Until next time, remember that just because you like progress, that doesn't make you a supervillain... it's the killing and hurting of innocent people in the process that sends Superman flying after you.

Saturday 21 October 2023

Artificial Intelligence Isn't


When I say "artificial intelligence isn't", I don't mean it isn't artificial. Artificial comes from the word "artifice", which means skill, trickery or craftsmanship, and it has the same word root as "art". Artifice is basically something created, so artificial intelligence is certainly artificial: humans create it.
But, when I say "artificial intelligence isn't" I mean that it isn't Intelligent.

Now, some people might think this is just sour grapes. After all, I've been talking about the limitations of Artificial Intelligence, especially the ways that it lacks consciousness, subjectivity and morals. Some people might think that I'm being regressive, hating progress because it's changing the way things are. After all, Artificial Intelligence has as much memory and computing power as hardware allows, so how can I say it isn't "intelligent"?
Some boffins have calculated that the average (adult) human brain has approximately 2.5 petabytes of storage space, and can operate at one quintillion calculations a second (or one "exaflop", which is apparently a word that exists).
According to what I can find online, Google controls approximately 27 petabytes of storage in their network of data centres, outclassing the human brain tenfold; and Frontier, the supercomputer at Oak Ridge National Laboratory, operates at two quintillion calculations a second, or two "exaflops", doubling human thinking speed.
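Taking the figures quoted above at face value (they're loose online estimates, not measurements), the comparison works out like so:

```python
# Sanity-checking the brain-vs-computer comparison, using the rough
# figures quoted in the text above (loose estimates, not measurements).
brain_storage_pb = 2.5     # estimated human brain capacity, petabytes
brain_speed_ef = 1.0       # estimated brain "speed", exaflops

google_storage_pb = 27.0   # quoted Google data-centre storage, petabytes
frontier_speed_ef = 2.0    # quoted Frontier speed, exaflops

print(f"storage: {google_storage_pb / brain_storage_pb:.1f}x the brain")  # 10.8x
print(f"speed:   {frontier_speed_ef / brain_speed_ef:.1f}x the brain")    # 2.0x
```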

So, my hypothesis was wrong, computers are smarter than humans, end of story - right?

Well, no. It's not just about storage or speed, it's not just the capacity, it's about how it's utilized.
A car, for example, is much faster than a horse and can go much farther down a road - it has better numbers. But, if you put a metre-high stone wall in front of a car and a horse, the car can't get over it, but a horse can simply jump it.
Yes, the car has better numbers than a horse, but a horse uses its speed and stamina in a different way.

Now, I'm not suggesting we all abandon cars for horses, a car is a useful tool. I'm also not saying we should never use artificial intelligence, it too is a valuable tool. I don't hate artificial intelligence, I've seen some of the applications it can be used for and it's very impressive, and can help us in some interesting ways. But just as we shouldn't use a car to jump a fence, since it can't do that very well, I think that we should be wary of using computers for applications they can't do well.
And one thing computers do very poorly, is "think".

I really need to explain "why", since otherwise it sounds like I'm just repeating "computer dumb". Thankfully, there's a great thought experiment that explains this, known as the Chinese Room.

The idea here is simple: imagine that you are in a small room with a mail slot, and occasionally someone pushes through envelopes with letters written in Chinese, or some other language you don't speak. It's your job to respond to these letters, but how can you do that? Well, someone left a giant book of instructions for you, on which characters to respond with. It doesn't translate the words, it simply provides instructions for which characters to place in which order. So, it could say "when you see X string of Chinese characters, reply with Y string of Chinese characters" or even "if receiving X string, randomly select an answer from A, B or C..." etcetera.
The point of this is simple... with a detailed enough instruction book, you can easily respond in Chinese, even though you don't speak it.
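To make that concrete, here's a toy version of the instruction book as a Python sketch (the phrases and replies are invented purely for illustration - a real rule book would need to be unimaginably larger):

```python
# A toy "Chinese Room": map incoming strings to replies using a fixed
# rule book. The program never understands a word of what it "says".
RULE_BOOK = {
    "你好": "你好！",               # "when you see X string, reply with Y string"
    "你会说中文吗？": "会，当然。",   # the responder has no idea what this claims
}

def respond(message):
    # Look the message up in the book; even the fallback is just a rule.
    return RULE_BOOK.get(message, "对不起？")

print(respond("你好"))  # a fluent-looking reply, with zero understanding
```

The room looks clever from the outside, but all the cleverness lives in whoever wrote the rule book.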

This is equivalent to a computer. Computers use a BIOS, a basic input/output system (kind of like a mail slot). But computers don't speak English (or Chinese, for that matter) - the language of computers is binary, 1s and 0s. We provide the necessary code (that's their big book of instructions) that tells them how to respond.
At no point in this process does a computer "understand" what it's doing. When you use a computer to calculate a difficult maths problem, it can "provide" the answer faster than any human, but it doesn't "know" the answer. You could use the same "coding logic" to make flowing water calculate difficult maths problems just as a computer would. It doesn't know the answer; it's just doing as it's told.
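You can see this "language of 1s and 0s" for yourself. Here's a quick Python sketch showing what the letters of a word look like from the machine's side:

```python
# Every character a computer "reads" is, underneath, just a pattern of bits.
word = "horse"
for letter in word:
    print(letter, format(ord(letter), "08b"))
# 'h' comes out as 01101000, 'o' as 01101111, and so on.
```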

You may say then "but, if a computer can't think or understand, how can they act so smart, by creating new art or text just by giving them a prompt, or input? How can ChatGPT write in a certain style, or essays about certain topics, or have realtime, dynamic conversations if it can't even speak English?"
Well, dear reader... remember that computers are very, very stupid. So, they cheat.
Do you know what training data is? We give A.I. ENORMOUS swathes of information just to operate. According to various sources I can find online, ChatGPT's latest version was trained on over 570 GB of text. Text doesn't take up a lot of space... Doing some back-of-the-envelope maths, the average word is five letters plus a space, which is approximately six bytes of information, which means that ChatGPT's training data is somewhere in the region of 95 billion words.
For reference alone, the King James Bible is a honking big book at around 700,000 words, and 95 billion words is equivalent to roughly 135,000 bibles.
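Here's that envelope maths as a sketch, so you can check it yourself (the six-bytes-per-word figure is an assumption - real training corpora are measured in tokens and their exact sizes aren't public, so treat this as an order-of-magnitude estimate):

```python
# Rough envelope maths: how many words fit in 570 GB of plain text?
# Assumption: ~5 letters per word plus a space ≈ 6 bytes per word.
corpus_bytes = 570 * 10**9
bytes_per_word = 6
words = corpus_bytes // bytes_per_word   # 95,000,000,000 words

kjv_words = 700_000                      # King James Bible, roughly
bibles = words // kjv_words              # ≈ 135,000 bibles
print(f"{words:,} words, or about {bibles:,} bibles")
```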

Considering that the average human reads an average of 12 books a year, that's a hell of a lot.

But ChatGPT needs all of this because, otherwise, it doesn't know what it's saying. The only way to make ChatGPT sound as smart as it does is to give it such an overwhelming amount of data that it can just interpolate.
Interpolating is easy. If I have: 2, 4, 6, ... , 10, 12 & 14, what's missing?
8. A computer can figure that out easily - it's just a pattern. And computers are good at finding patterns, because patterns are easy to identify with maths, and computers are great calculators.
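In code, that kind of gap-filling is trivial, because the pattern is pure arithmetic:

```python
# Fill the gap in 2, 4, 6, ?, 10, 12, 14 by spotting the common difference.
known = [2, 4, 6, None, 10, 12, 14]

step = known[1] - known[0]  # the pattern: each number is the last plus 2
filled = [known[0] + i * step if v is None else v
          for i, v in enumerate(known)]
print(filled)  # [2, 4, 6, 8, 10, 12, 14]
```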
So, if you ask ChatGPT to write, say, a poem about a worm having an existential crisis, it can do that... but only because it has so much data in its training set that it can interpolate what's missing. It has thousands of poems to determine structure, countless philosophical texts and entomology articles, not to mention billions of words at its disposal... it makes sense that it can easily interpolate how to organize those words to respond to a prompt.
Interpolation is easier the more data you have, and more difficult when you're missing data. This is how you get issues like "machine hallucinations", where A.I. makes things up. It's not deliberately lying; it's just finding the missing link between disparate data points without enough data to work from.

So, in answer to your question... the reason computers can act so smart is because humans are so smart. If you ever built a great "Chinese Room"-style machine that speaks Chinese, it would be because YOU speak Chinese. We've put our thumb on the scale by filling up A.I. memory with specific instructions and relevant training data... but the one thing it lacks, which a whole lot of human brain capacity is devoted to, is the capacity for independent goals, reasoning and dynamic problem-solving.
Computers act smart because humans write the script. Now, things get a little more complicated, because lately we've started using machine learning, some of which uses evolutionary models to create code. Here, we give machines a goal, and we allow them to mutate and grow (with neural networks) to find new connections and new ways of writing their code, creating programs that more effectively achieve their goals. In this case, humans are still defining the parameters - we decide what counts as good output, and delete (or alter) the candidates that fail to provide it.
But this creates a new problem, called the "black box" problem: we can create machines that are broken or make mistakes, but we can't fix them, because we don't know exactly how all of the different parts connect together. That's a major issue, and one we are already facing the ramifications of - it's part of why machine hallucinations are so hard to stamp out - but it's not actually the point I'm making.
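As a toy illustration of that mutate-and-select loop (a cartoon, not a real neural network - the goal, population size and mutation rate here are all invented for the example):

```python
import random

random.seed(0)  # make the "evolution" repeatable

# Toy evolutionary search: evolve a number towards a target WE chose.
# We define the goal (the fitness function); the machine just mutates.
TARGET = 42

def fitness(x):
    return -abs(x - TARGET)  # the closer to 42, the better

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(200):
    # Keep the best half, discard the rest (delete the failures)...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and refill the population with mutated copies of the survivors.
    population = survivors + [s + random.gauss(0, 1) for s in survivors]

best = max(population, key=fitness)
print(round(best, 2))  # ends up very close to 42
```

Nobody wrote code that "knows" the answer is 42; the selection pressure we imposed simply killed off everything that wasn't.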

The point I'm making is that it doesn't matter how we write the code: all computers operate according to their code, and code is ultimately just a set of instructions. And I understand why we call computers intelligent, because I'm a writer... I love books, and I have often said "this is a brilliant book", or "this is a clever kids' book", because books are full of really smart words and stories and plots. But books aren't smart; authors are smart.
So, next time you are impressed by an artificial intelligence, because it seems smart, lifelike or creative... remember that it's not the computer that's clever, it's the programmer. More importantly, always remember that artificial intelligence can only "act" smart. And if you try to learn from it, it really is the blind leading the blind.

I'm the Absurd Word Nerd, and until next time... I can't help but compare machine learning to the "infinite monkey theorem", since it randomly mutates, and kills off the unsuccessful attempts. Which means, when it comes to machine learning A.I., you have to ask yourself: Did we kill the right monkey?