/əb’serd werd nerd/ n. 1. The nom de guerre of Matthew A. J. Anderson. 2. A blog about life, learning & language.
Saturday, 21 October 2023
Artificial Intelligence Isn't
When I say "artificial intelligence isn't", I don't mean it isn't artificial. Artificial comes from the word "artifice", which means skill, trick or craftmanship, and it has the same word root as "art". Artifice is basically something created, so artificial intelligence is certainly artificial, humans create it.
But when I say "artificial intelligence isn't", I mean that it isn't intelligent.
Now, some people might think this is just sour grapes. After all, I've been talking about the limitations of Artificial Intelligence, especially the ways that it lacks consciousness, subjectivity and morals. Some people might think that I'm being regressive, hating progress because it's changing the way things are. After all, Artificial Intelligence has as much memory and computing power as hardware allows, so how can I say it isn't "intelligent"?
Some boffins have calculated that the average (adult) human brain has approximately 2.5 petabytes of storage, and can operate at one quintillion calculations a second (or one "exaflop", which is apparently a word that exists).
According to what I can find online, Google controls approximately 27 petabytes of storage in their network of data centres, outclassing the human brain tenfold; and Frontier, the supercomputer at Oak Ridge National Laboratory, operates at two quintillion calculations a second, or two "exaflops", doubling human thinking speed.
So, my hypothesis was wrong: computers are smarter than humans, end of story - right?
Well, no. It's not just about storage or speed; it's not the capacity, it's how that capacity is utilized.
A car, for example, is much faster than a horse and can go much farther down a road - it has better numbers. But put a metre-high stone wall in front of a car and a horse, and the car can't get over it, while the horse can simply jump.
Yes, the car has better numbers than a horse, but a horse uses its speed and stamina in a different way.
Now, I'm not suggesting we all abandon cars for horses; a car is a useful tool. I'm also not saying we should never use artificial intelligence; it, too, is a valuable tool. I don't hate artificial intelligence - I've seen some of the applications it can be used for, and it's very impressive and can help us in some interesting ways. But just as we shouldn't use a car to jump a fence, since it can't do that very well, I think we should be wary of using computers for applications they can't do well.
And one thing computers do very poorly is "think".
I really need to explain why, since otherwise it sounds like I'm just repeating "computer dumb". Luckily, there's a great thought experiment that explains this, known as the Chinese Room.
The idea here is simple: imagine that you are in a small room with a mail slot, and occasionally someone pushes through envelopes with letters written in Chinese, or some other language you don't speak. It's your job to respond to these letters, but how can you do that? Well, someone left a giant book of instructions for you on which characters to respond with. It doesn't translate the words; it simply provides instructions for which characters to place in which order. So, it could say "when you see X string of Chinese characters, reply with Y string of Chinese characters", or even "if receiving X string, randomly select an answer from A, B or C..." etcetera.
The point of this is simple... with a detailed enough instruction book, you can easily respond in Chinese, even though you don't speak it.
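If you'll forgive a little code, here's a toy sketch of that room in Python. The rules and the Chinese phrases are just ones I've made up for illustration, but notice that the responder never translates anything; it only matches and copies.

```python
# A toy "Chinese Room": match the incoming letter against a rule book
# and copy out a prescribed reply. Nothing here is ever translated.
import random

RULE_BOOK = {
    "你好吗？": ["我很好，谢谢。", "还不错。"],  # "when you see X, reply with A or B"
    "你会说中文吗？": ["当然会。"],              # ("Can you speak Chinese?" -> "Of course.")
}

def respond(envelope: str) -> str:
    replies = RULE_BOOK.get(envelope)
    if replies is None:
        return "对不起，我不明白。"  # a catch-all reply the book might prescribe
    return random.choice(replies)   # "randomly select an answer from A, B or C..."

print(respond("你好吗？"))  # a fluent reply from a room that speaks no Chinese
```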
This is equivalent to a computer. Computers use a BIOS, a basic input/output system (kind of like a mail slot). But computers don't speak English (or Chinese, for that matter); the language of computers is binary, 1s and 0s. We provide the necessary code (that's their big book of instructions) that tells them how to respond.
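To see what I mean by "speaking binary", here's what a computer actually stores when you hand it a word:

```python
# Each character of "Hi" is stored as a number, and each number as bits.
word = "Hi"
for char in word:
    code = ord(char)                  # the character's numeric code
    print(char, code, format(code, "08b"))
# H 72 01001000
# i 105 01101001
```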
At no point in this process does a computer "understand" what it's doing. When you use a computer to calculate a difficult maths problem, it can provide the answer faster than most humans, but it doesn't "know" the answer. You could actually use the same "coding logic" to make water calculate difficult maths problems as well as a computer would. It doesn't know the answer; it's just doing as it's told.
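That water-computer point is worth dwelling on, because the "coding logic" doesn't care what it's made of. Here's one-bit addition built from two primitive logic gates - a sketch, not how your CPU is literally wired, but the same arrangement works whether the gates are switched by electricity or by channels of water:

```python
# One-bit addition built from primitive gates. The gates could be
# transistors, water channels, or marbles: none of them "knows" maths.
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1), i.e. 1 + 1 = binary 10
```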
You may say then, "But if a computer can't think or understand, how can it act so smart, creating new art or text just from a prompt, or input? How can ChatGPT write in a certain style, or essays about certain topics, or have real-time, dynamic conversations if it can't even speak English?"
Well, dear reader... remember that computers are very, very stupid. So, they cheat.
Do you know what training data is? We give A.I. ENORMOUS swathes of information just to operate. According to various information I can find online, ChatGPT's latest version was trained on over 570 GB of text. Text doesn't take up a lot of space... Doing some back-of-the-envelope maths: the average word is five letters which, stored generously (say, four bytes a character, plus a space), is approximately 25 bytes of information, which means that ChatGPT's training data is approximately 22.8 billion words.
For reference, the King James Bible is a honking big book at around 700,000 words, and 22.8 billion words is equivalent to around 32,000 Bibles.
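If you want to check my envelope, here are the sums, under my rough 25-bytes-per-word assumption:

```python
# The back-of-the-envelope maths above.
corpus_bytes = 570e9       # ~570 GB of training text
bytes_per_word = 25        # my rough per-word estimate
words = corpus_bytes / bytes_per_word
print(f"{words:,.0f} words")             # 22,800,000,000 -- 22.8 billion

bible_words = 700_000      # the King James Bible, give or take
print(round(words / bible_words))        # 32571 -- call it 32,000 Bibles
```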
Considering that the average person reads about 12 books a year, that's a hell of a lot.
But ChatGPT needs all of this because, otherwise, it doesn't know what it's saying. The only way to make ChatGPT sound as smart as it does is to give it such an overwhelming amount of data that it can simply interpolate.
Interpolating is easy. If I have 2, 4, 6, ..., 10, 12 & 14, what's missing?
8. A computer can figure that out easily; it's just a pattern. And computers are good at finding patterns, because patterns are easy to identify with maths, and computers are great calculators.
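In fact, here's the whole trick in a few lines. Hand the machine the pattern with a hole in it, and it computes the hole from its neighbours:

```python
# Fill the gap in 2, 4, 6, _, 10, 12, 14 by pure pattern-matching:
# linear interpolation between the nearest known points.
known_x = [0, 1, 2, 4, 5, 6]     # positions in the sequence (3 is missing)
known_y = [2, 4, 6, 10, 12, 14]

def interpolate(x, xs, ys):
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(interpolate(3, known_x, known_y))  # 8.0
```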
So, if you ask ChatGPT to write, say, a poem about a worm having an existential crisis, it can do that... but only because there's so much in its training data that it can interpolate what's missing. It has hundreds of poems to determine structure, countless philosophical texts and entomology articles, not to mention millions of words at its disposal... it makes sense that it can easily interpolate how to organize those words to respond to a prompt.
Interpolation is easier the more data you have, but it's more difficult when you're missing data. This is how you get issues like "machine hallucinations", where A.I. makes things up. It's not deliberately lying; it's just finding the missing link between disparate data points without enough data to work from.
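Here's a loose analogy in code (my illustration, not how a language model actually works). Fit a curve through a handful of wobbly data points, then ask it about a point far outside them; it still gives you a confident number, because it has no way to say "I don't know":

```python
# An analogy for "hallucination": a pattern-matcher asked to fill a gap
# far from its data still answers -- confidently, and meaninglessly.
import numpy as np

xs = np.array([0, 1, 2, 3, 4, 5])
ys = np.array([2, 4, 7, 8, 12, 13])   # roughly linear, with some wobble
fit = np.polyfit(xs, ys, deg=5)       # a curve through every point exactly

print(np.polyval(fit, 2.5))   # between the data: a plausible ~7.4
print(np.polyval(fit, 10))    # far outside it: about -3400, utter nonsense
```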
So, in answer to your question... the reason computers can act so smart is because humans are so smart. If you ever wrote a great "Chinese room"-style machine to speak in Chinese, it would be because YOU speak Chinese. We've put our thumb on the scale by filling up A.I. memory with specific instructions and relevant training data... but the one thing it lacks, which a whole lot of human brain capacity is devoted to, is the capacity for independent goals, reasoning and dynamic problem-solving.
Computers act smart because humans write the script. Now, things get a little more complicated, because lately we've started using machine learning, which uses evolutionary models to create code. Here, we give machines a goal, and we allow them to mutate and grow (with neural networks) to find new connections and new ways of writing their code, creating programs that more effectively achieve their goals. In this case, humans are still defining the parameters - we decide what counts as good output, and delete (or alter) the programs that fail to provide it. But this creates a new problem, called the "black box" problem: we can create machines that are broken or make mistakes, but we can't fix them, because we don't know exactly how all of the different parts connect together. That's a major issue, and one whose ramifications we're already facing. It's where machine hallucinations come from, but it's not actually the point I'm making.
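Before I get to that point, here's the evolutionary idea in miniature - a mutate-and-select loop in the style of the classic "weasel" program. It's nothing like a production neural network, but the shape of the loop is the thing: we define the goal, the machine varies, and the failures are culled.

```python
# A minimal mutate-and-select loop: humans define the goal (the scoring
# function); mutants that score worse are discarded.
import random
import string

GOAL = "THE ANSWER"
ALPHABET = string.ascii_uppercase + " "

def score(candidate: str) -> int:
    return sum(a == b for a, b in zip(candidate, GOAL))

def mutate(candidate: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(ALPHABET) for _ in GOAL)
generations = 0
while score(best) < len(GOAL):
    generations += 1
    children = [mutate(best) for _ in range(100)]
    fittest = max(children, key=score)
    if score(fittest) >= score(best):   # kill off the unsuccessful monkeys
        best = fittest

print(generations, best)   # lands on THE ANSWER within a few dozen generations
```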
The point I'm making is that it doesn't matter how we write the code: all computers operate according to their code, and code is ultimately just a set of instructions. And I understand why we call computers intelligent, because I'm a writer... I love books, and I have often said "this is a brilliant book", or "this is a clever kids' book", because books are full of really smart words and stories and plots. But books aren't smart; authors are smart.
So, next time you are impressed by an artificial intelligence, because it seems smart, lifelike or creative... remember that it's not the computer that's clever, it's the programmer. More importantly, always remember that artificial intelligence can only "act" smart. And if you try to learn from it, it really is the blind leading the blind.
I'm the Absurd Word Nerd, and until next time... I can't help but compare machine learning to the "infinite monkey theorem", since it randomly mutates, and kills off the unsuccessful attempts. Which means, when it comes to machine learning A.I., you have to ask yourself: Did we kill the right monkey?