Tuesday 24 October 2023

Turing, Tested

In pop culture, you may have heard of something called the "Turing Test", often presented as something that can prove either that a computer is particularly powerful, or that it is artificially intelligent (depending on the media you're watching).
But, what on Earth is the Turing Test? Well, it was a test devised by a man called Alan Turing.

Turing was a brilliant man. His understanding of theoretical computing and cryptanalysis was vital in World War II for creating the Bombe, a decryption machine that could decipher the Enigma codes the Nazis used to encrypt their secret military communications. He also developed the mathematical model of the "Turing Machine", which established the capabilities and limitations of even a simple computer in running complex programs.
There's a lot more to the man than that, from eccentricities like his marathon-running speed and stamina, to his propensity to ride a bike while wearing a gas mask, even to his unfortunate death by cyanide poisoning. His death came two years after he was convicted of "gross indecency" for his homosexuality (a charge to which he pleaded guilty) and given probation that included undergoing a form of chemical castration, to lessen his libido - and almost certainly this mistreatment contributed to his depression and led to his suicide.

But for today, we're looking at one of Turing's contributions to the field of artificial intelligence research, the Turing Test. In his paper "COMPUTING MACHINERY AND INTELLIGENCE", published in the psychology and philosophy journal Mind in October 1950, Turing discussed the physical, practical and philosophical possibility that a machine could think.
It's a fascinating paper, and in it he starts by saying that it is difficult to quantify what one means by "thought", since even the programmable machines of the time could be described as both "thinking" and "not thinking" depending on the particular definition. So, rather than waste time trying to find the "best" meaning of the word, he devised a simple thought experiment.

He called this "the imitation game", and in the example given, he suggested having two subjects, (A) a man and (B) a woman (hidden from view, and communicating with nothing but written responses, to keep the game fair); they are each interrogated by a third subject, (C) the interrogator, who doesn't know which of the two subjects is which (they're labelled [X] & [Y] at random).
It is the interrogator's goal to ask probing questions to determine which of [X] & [Y] is (A) the man and which is (B) the woman. However, this is complicated by the fact that (A) is trying to convince the interrogator that he is (B), whereas the actual (B) is trying her best to help the interrogator by giving honest answers.
This is a silly little game... but Turing then reveals its true purpose...
"We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’" - A. M. Turing
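To make the setup concrete, here's a toy sketch of the protocol in Python (entirely my own illustration - the players and interrogator here are trivial stand-ins, not anything from Turing's paper):

```python
import random

def imitation_game(player_a, player_b, interrogate):
    """One round of the imitation game.

    player_a tries to be mistaken for player_b; player_b answers
    honestly. The interrogator only ever sees the labels X and Y,
    which are assigned at random.
    """
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}

    # The interrogator asks written questions via ask(label, question)
    # and must finally name the label it believes hides player B.
    def ask(label, question):
        return labels[label](question)

    guess = interrogate(ask)
    return labels[guess] is player_b  # True if the interrogator wins

# Toy players: B answers honestly; A is a perfect mimic of B's answers.
honest_b = lambda q: "yes" if "woman" in q else "no"
imitator_a = lambda q: "yes" if "woman" in q else "no"

# Against a perfect mimic, this interrogator can only guess at random.
naive_interrogator = lambda ask: random.choice(["X", "Y"])
wins = sum(imitation_game(imitator_a, honest_b, naive_interrogator)
           for _ in range(10_000))
print(wins)  # roughly 5,000 of 10,000: a coin flip, which is the point
```

If the imitator is perfect, the interrogator's success rate collapses to chance - and Turing's question is simply whether a machine in seat (A) can push the interrogator toward that same coin flip.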

I find it somewhat ironic that, since (A) is the man in this thought experiment, in the course of explaining how the imitation game works Turing is effectively asking "can a machine pretend to be a woman?", and I wonder if that's in any way related to the fact that the majority of robots and virtual assistants are designed and presented as female. But that's wild speculation.
But, as I said in my earlier post, it's not mere speculation to say that this test is directly responsible for the prevalence of chatbots. Since the test focuses on a machine's ability to hold a conversation, it's easy to see how it influenced programmers to focus their attention on the skills that would help their machines conquer the "Imitation Game", or Turing Test.

But, the thing is, the suggestion of "the imitation game" was merely the opening of Turing's article; the rest of the paper delves into the issue further, discussing which "machines" would be best suited for this test, talking about how these machines fundamentally function, and responding to potential critiques of both the test and its possible conclusions. Also, I have to include this...

"I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." - A. M. Turing
Remember, this article was written in October 1950, so we are running about 23 years behind his fifty-year prediction here, but if ChatGPT is as good as advertised, perhaps we have conquered this philosophical test.
But Turing was also investigating whether this proposed test was a valid one, by considering and responding to possible critiques of it. And this is where I take some issue, because whilst it is a very interesting paper (you can find it online, and I highly suggest you read it), I think this is where Turing fails.
Turing considers 9 possible objections to the Turing Test:

1. The Theological Objection
Basically, "Robots don't have Souls". Turing, like myself, doesn't appear to believe in souls, but his response is rather charitable: if souls do exist and are required for thought, then humans who make A.I. would not be usurping God's power to create souls, but rather "providing mansions for the souls that He creates".

2. The 'Heads in the Sand' Objection
Basically, "the whole idea of thinking machines is terrifying, so let's ignore it and hope the problem goes away", which Turing dismisses out of hand, mentioning it specifically to point out that it underlies some other critiques, such as the previous objection.

3. The Mathematical Objection
Basically, "digital machines have basic, inherent mathematical and logical limitations (as proven by results such as Gödel's theorem), so their 'thinking' must be limited too". Turing's response delves into some of these in detail, but ultimately it is that human brains are also flawed and limited, so whilst this is a valid point of discussion, it doesn't adequately refute the Turing Test.

4. The Argument from Consciousness
Basically, "computers cannot think, because thought requires conscious awareness, and we cannot prove that machines have conscious awareness". Here, Turing rightly points out that we also cannot prove that other humans are aware - hence the philosophy of solipsism - and he argues that unless you are a solipsist, believing yourself to be the only conscious agent, it is hypocritical to doubt the consciousness of a non-human.

5. Arguments from Various Disabilities
Basically, "a computer may perform some impressive feats, but it cannot... (do this human ability)", and Turing provides a selection of proposed human abilities that computers lack, including: be kind, have friends, make jokes, feel love, make mistakes, have emotions, enjoy life... and many more.
Turing speaks about some of these in further detail, but ultimately he regards this as a failure of scientific induction, since proponents only have empirical evidence that machines "have not" performed these abilities, not that they "will not".

6. Lady Lovelace's Objection
Basically, "an analytical engine cannot create, or originate, anything new, as it can only perform as humans are able to program it to perform". Lovelace was writing about Babbage's Analytical Engine, but the premise - that computers only do as they're told - is a valid one. Turing's response is that nothing is truly original; everything comes from some inspiration or prior knowledge. And if the real problem is that computers do as they're told, so their actions are never unexpected or surprising, Turing dismisses this by explaining that machines surprise him all the time.

7. Argument from Continuity in the Nervous System
Basically, a computer is a "discrete-state" machine, meaning it clicks from one quantifiable arrangement to another, whereas the human brain is a continuous, flowing system that cannot be quantified into discrete "states". Turing points out that although this makes a discrete-state machine function differently, that doesn't mean that - given the same question - it cannot arrive at the same answer.
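For a sense of what "discrete-state" means, here's a tiny machine in the spirit of the wheel-and-lever example Turing gives in the paper (the transition table below is my own illustration, not a transcription of his):

```python
# A minimal discrete-state machine: three states, a two-position
# lever as input, and a light as the only observable output.
TRANSITIONS = {
    # (current state, lever input) -> next state
    ("q1", "i0"): "q2",
    ("q2", "i0"): "q3",
    ("q3", "i0"): "q1",
    ("q1", "i1"): "q1",  # with the lever pressed, the wheel holds still
    ("q2", "i1"): "q2",
    ("q3", "i1"): "q3",
}
LIGHT_ON = {"q3"}  # the light shines only in state q3

def run(inputs, state="q1"):
    """Click through exactly one discrete state per input -
    there is nothing 'in between' states, unlike a brain."""
    for i in inputs:
        state = TRANSITIONS[(state, i)]
    return state, state in LIGHT_ON

print(run(["i0", "i0"]))  # ('q3', True): two clicks and the light is on
```

The machine's entire behaviour is that lookup table - which is exactly why, as I discuss later, we can pull such machines apart at the seams in a way we can't with brains.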

8. The Argument from Informality of Behaviour
Basically, "a robot can never think like a human because robots are coded with 'rules' (such as if-then type statements), but human behaviour is often informal, meaning humans break rules all the time". Turing (as I see it) refutes this as an appeal to ambiguity, confusing "rules of conduct" for "rules of behaviour" - a human may defy societal expectation or rules, but they will always abide by their own set of proper behaviours, even if these are harder for us to quantify.

9. The Argument from Extra-Sensory Perception
Basically, Turing found arguments for psychic phenomena, such as telepathy, rather convincing (although he wished for them to be debunked), and he argued that since computers have no psychic ability, they would fail any Turing Test that includes it. Ironically, whilst he considers this a strong argument, and suggests it can only be resolved by disallowing telepathy (or using, and I quote, a "telepathy-proof room" to stop psychics from cheating), I do not find it at all convincing, so I will ignore it.

Finally, after dismissing the contrary views of the opposition, Turing puts forward his own affirmative view of the proposition.

One of my biggest issues with the criticisms is that he doesn't properly explore Lady Lovelace's Objection, ignoring the claim that robots "do as they are told to do" and focussing instead on two related claims: first, "robots can't do anything original", and second, "robots can't do anything unexpected".
I was prepared to write out an explanation as to the error here, but the paper beat me to the punch.
In his section on "Learning Machines", he first explains that the position is essentially that robots are reactive, not active. They may respond, but afterwards will fall quiet - like a piano when you press a key - doing nothing unless we do something to them first. But, he also suggests that it may be like an atomic pile, reacting when you introduce a stray neutron, then fizzling out. However, if there is enough fissile material in the pile, a single neutron can trigger a self-sustaining chain reaction, and Turing asks the question: is there some analogous "mass" by which a reacting machine can be made to go critical?
It's a slightly clunky metaphor, but what he's supposing is that a powerful enough computer could have the storage and speed to react "to itself", and think without a need for external direction.

It's a curious idea, not the most convincing, but Turing doesn't propose it to be "convincing", so much as "recitations tending to produce belief".
He also then goes on to explain the "digital" capacity of the human brain, how one might simulate a child's mind and the methods to educate it to think in a human-like manner; and the importance of randomization.
He finally concludes the paper by saying that, whilst he is unsure of the exact means by which we could achieve a machine that can think, there's a lot more work that needs to be done.

- - -

This is a fantastic paper, well worth reading, and it serves its purpose well. However, there is one major flaw, which Turing addresses but does not in fact refute... The Argument from Consciousness.
Yes, it's true that we can't necessarily prove that a human has consciousness without observing their "awareness" directly; but I think it's dismissive to say that this makes all attempts at investigating consciousness impossible.
After all, whilst it's philosophically difficult to prove that a human's actions are a product of consciousness, it's comparatively easy to prove that a computer's are not. We don't know how human consciousness works because we're still not entirely sure how the brain works, but we can (and do) know exactly how a computer's "brain" works. We can pull it apart at the seams, we can investigate it thoroughly, we can identify what code does what.

And before any of you come in and bring up the "black box problem" with neural networks - that we don't know how some A.I. programs think because machine learning adjusts their inner workings in unusual ways - well, no. It's true that it's incredibly difficult to figure out why a pre-trained A.I. program does what it does. If you wanted an explanation of exactly how a program came up with every single answer it ever provides, that would be impractical, since every word produced by these programs goes through several thousand iterations of the program - it would take an unreasonable amount of time to answer that question. However, unreasonable is not impossible. If you so desired, you could go through the code, line by line, and see how each aspect interacts with the others - it is explicable, it's just messy.
And we also know the basic "input-to-output" means by which this was achieved. Turing may argue that this is analogous to the human mind, but I submit that since we've known exactly how computers work for a long time, yet we still don't know how consciousness works, it's clear that our discoveries in computer science aren't equivalent to discoveries in neurology.

I firmly believe that John Searle's Chinese Room, which is an accurate model of the way that our computer programs currently function, debunks the validity of Alan Turing's Imitation Game for proving thought. So long as computers operate this way, even a computer that can pass the Turing test is merely behaving like something that thinks, not in fact thinking.
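For anyone unfamiliar: Searle's Chinese Room imagines a person who speaks no Chinese sitting in a room with a rule book, mechanically matching incoming Chinese symbols to outgoing ones well enough to convince outsiders that the room "understands" Chinese. As a toy sketch (my own illustration, with a deliberately tiny made-up rule book):

```python
# The Chinese Room as code: a purely mechanical lookup from input
# symbols to output symbols. There is no understanding anywhere in
# this process - only rule-following.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然会。",     # "Can you think?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # To the operator, the characters are meaningless shapes; the
    # rule book alone determines the reply.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会思考吗？"))  # prints 当然会。
```

A real chatbot's "rule book" is vastly larger and learned rather than hand-written, but the structure of the argument is the same: convincing output, mechanical process, and no obvious place where understanding enters.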

However, I'm not sure if that's a problem for Turing. I don't believe in the soul, I don't think there's some ghost that runs your body. I also don't believe in magic, I think that human experience is ultimately material, that the human mind is a system that arises due to the brain. I do believe in consciousness, and will and subjectivity. But, based on some of Turing's arguments, I wonder if he does.
It seems as though, to Turing, computer programs and consciousness are effectively interchangeable. Yes, the human brain is a continuous machine, not a discrete-state machine, but who is to say it's not a machine? There is a realm of philosophy called Mechanism, which states that the universe is reducible to completely mechanical principles - matter and motion, with physics being the interaction of the two. It's a form of Materialism, and if it is accurate, then the human mind too is nothing but matter and motion.
I can't say that Turing was a Mechanist (especially since the meaning of the word seems to have changed in philosophy, a few times), but it does seem that he feels that human consciousness is not in any way inexplicable - give a machine a bit more storage and make it run faster, and it can do what brains do. It's kind of a radical view.

However, and I'll leave you with this, we don't know how consciousness works, we haven't identified the consciousness "spark", the mechanism of awareness, the principle of thought... and until we do, there's no reason to believe that Turing is wrong. And, if he is in fact right, it entirely breaks down the subject/object dichotomy. We already know that the definition of "life" gets fuzzy when you go back far enough. Thought appears to be a function of life, but who is to say that the definition of "thought" won't become a lot fuzzier once we make machines that can think? Honestly, I don't know. I'm going to have to think about it.
