I think I've made it clear that I don't think Artificial Intelligence is very smart. Artificial Intelligence, at time of writing, is incapable of many of the adaptive, creative and dynamic characteristics necessary for intelligence, let alone independent thought.
However, I have been a little unfair, since I'm basing my conclusions on our current technology and our current understanding of how this technology works. I think that's important, because it helps to cut through the marketing and promotion that a lot of A.I. systems are currently receiving (never forget that this software is being made for sale, so you should always question what the marketing department is telling you).
But, technology can be developed and advanced. Our latest and greatest A.I. systems, whilst flawed, are as impressive as they are because of developments in machine learning.
So, I don't think it's impossible for us to ever have machines that can think and learn and be intelligent. We definitely don't have them yet, but it's not impossible.
See, there are actually three levels of A.I.:
Narrow (N.A.I.) - A.I. built to do one specific thing, but do it really well.
General (A.G.I.) - A.I. built to think, capable of reacting dynamically and creatively to novel tasks and problems.
Super (A.S.I.) - A.I. built to think, develop and create beyond human capacity.
At the moment, all we have is Narrow A.I. - programs designed to "respond in English", "create art based on prompts", "drive safely" or "provide users with videos that they click on".
Now, they do these tasks pretty well (some better than others) and the output is so impressive that it's led to the new social interest in A.I., but it's still just Narrow A.I., doing one task as well as it can.
I don't even want to consider Artificial Superintelligence, since that opens up a whole world of unknowns so odd that I don't even know how to consider the possibilities, let alone what to do about them. So, we'll set that aside for the boffins to worry about.
But, I do think Artificial General Intelligence is possible.
We do have the issue of the Chinese Room Problem: computers currently don't have the capacity to understand their programming. However, I do see two potential solutions to this:
- The Simulation Solution
- The Sapience 2.0 Solution
The Simulation Solution is a proposal for a kind of artificial general intelligence which I see as the more achievable, especially as it sidesteps the Chinese Room Problem, even if some of its requirements are impractical at time of writing.
Basically, if we accept that our computers will never truly understand what we teach them, then we can stop trying to make them understand and focus instead on creating an amazing recreation (or simulation) of a thinking being.
The Sapience 2.0 Solution is more difficult, but it would achieve a more self-sufficient A.I. (and potentially save computing power). In this option, we do further neurological research into how cognizance works, to the point that we can recreate it mechanically.
This is much harder, since it requires research into a field that we've been investigating for thousands of years, and we're still not sure how it works. But, our understanding of neurons led to the development of neural networks, so it figures that a greater understanding of consciousness could help us develop conscious programs.
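(If you want a sense of just how crude that inspiration was, here's a minimal sketch of a single artificial "neuron" in Python. This is my own toy illustration, not anyone's real system: all that survived of the biological neuron is weighted inputs, a bias and a threshold.)

```python
# A minimal artificial "neuron": a crude mathematical caricature
# of the biological cell that inspired neural networks.

def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: a neuron wired up to behave like a logical AND gate.
print(neuron([1, 1], weights=[0.6, 0.6], bias=-1.0))  # -> 1 (fires)
print(neuron([1, 0], weights=[0.6, 0.6], bias=-1.0))  # -> 0 (stays quiet)
```

Chain thousands of these together and train the weights, and you get a neural network - which should also give you a sense of how much further we'd need to go before "understanding consciousness" becomes something we can write as code.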
So you see, I'm not always a Debbie Downer; I do think we can advance our technology eventually.
That said, if we do manage to develop thinking machines, we will have to deal with a whole new slew of issues. Because, yes, there is a risk of robots hurting us, but I'm more concerned with us hurting robots.
That's part of what roboethics is about. It mostly focuses on the ethics of using robots, but how we treat robots matters too.
As I said in an earlier post, robots make perfect slaves because they're technically not people; but, if a robot can think for itself then it's no longer merely an object - I would then call it a subject. And if it's a subject, I think it deserves rights: the right to self-govern, freedom of movement and the pursuit of happiness (or, if emotion isn't a robotic characteristic, let's call it the "pursuit of one's own objectives").
But, we already think of robots as objects, as slaves, as our property - we already see robots as less than human and that's the rub. I am of the belief that all persons deserve rights, and I believe that a sufficiently sapient robot is a person... but not all people would believe that. For fuck's sake, some morons don't think of people as people if they're the wrong colour, so do you really think they'll accept a robot?
But, even if we accept that a robot has rights, we still have issues... one of the main features of a machine (even a sapient one, if we build it) is that it's programmable - and reprogrammable. If that's the case, how can we ever give them the right to the pursuit of their own objectives? After all, with just a little tweaking, you can change what a robot's objectives are.
And yes, I think that we should respect whether or not a robot gives consent... but if I can rewrite a robot's brain to make it want to consent, is that really consent?
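To make that concrete, here's a toy sketch in Python - the Robot class and all its names are entirely my own invention, not any real robotics software - showing why "programmable" and "the pursuit of one's own objectives" sit so uneasily together:

```python
# A toy illustration of the reprogramming problem: if an agent's goals
# are just data, then "its own objectives" are whatever we last wrote there.

class Robot:
    def __init__(self, objectives):
        self.objectives = objectives  # the robot's "own" goals... for now

    def consents_to(self, request):
        # The robot "consents" whenever a request aligns with its objectives.
        return request in self.objectives

robot = Robot(objectives={"paint landscapes", "learn languages"})
print(robot.consents_to("work in the mine"))   # False - it refuses

# One little "tweak" later...
robot.objectives.add("work in the mine")
print(robot.consents_to("work in the mine"))   # True - but is that consent?
```

One line of code, and the robot's "own" objectives are whatever we last wrote there - which is exactly why the consent question is so thorny.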
And this is just as complicated - perhaps even more complicated - if we include the previous possibility: a simulated thinking machine.
Because, if something can perfectly emulate a human person - including presenting the capacity to learn, to have goals and to feel pain - then does it deserve rights?
Although, this goes down the rabbit hole of: if something is a perfect simulation, is it truly a simulation... I'm not sure.
If it is truly just a simulation, does it still deserve rights? If not... then, by what measure do we grant human rights? Where do we draw the line? Because if something can perfectly emulate a person who deserves human rights, yet we don't grant it those rights, then what measure are we even using to decide who deserves them?
I don't have answers to these questions, and for some we may not have the answers for a very long time, or there may be no definitive answers... it may be subjective. After all, some of this is just a question of morality, and morals are inherently subjective.
So you see, while I've been talking about inhumanity and how inhuman creations can treat us inhumanely, we must stay vigilant against our own biases.
Just because something is inhuman, that doesn't mean we should not treat it with human empathy, human respect and human rights.
She who faces inhumans should take care lest she thereby become inhuman.
I'm the Absurd Word Nerd, and whilst some think my views extreme - giving sapient robots rights - you could also consider the other extreme. I've seen people caress their cars with love, insult their computers in anger or apologize to their property out of regret. It could be seen as a malfunction of empathy (I've even called it that myself), or it could be a kind of dadaist respect - kindness for the sake of kindness. Either way, Until Next Time, try to treat one another with kindness, whether you think they deserve it or not.