Tuesday 22 October 2024

Catastrophic Mistakes

Crime can destroy lives. What makes violence, death and pain so horrible is the myriad of ways in which it can cause lasting damage to not just people’s bodies, but also their lives, their families, their minds... such crimes rarely ever have just the one victim.
However, there’s something related which is less horrific, but can be just as destructive: accidents. I’m not talking about manslaughter (or “murder in the third degree”, in some jurisdictions), I’m not even talking about car crashes, I’m talking about mistakes.
We all make mistakes – that’s why every keyboard comes with a backspace. Of course, even though we may make mistakes or have accidents that we didn’t foresee, some of those mistakes are still our own fault. I think my favourite example is a story my father retold me from an episode of Dr Phil, where a couple insisted they were cursed with bad luck because, for example, on their wedding day the wife was hospitalized because after being just married, they drove away on a Harley Davidson, and her wedding veil train got caught in the bike chain and badly injured her neck, and in response, Dr Phil said:
     “That’s not bad luck, that’s just dumb.”
Today, I want to present to you a list of mistakes which weren’t dumb, in fact not only were they not that person’s fault, I’d even say that there’s mistakes you and I have made before. They’re common mistakes, what could possibly go wrong? Well, today I’ll tell you what can go wrong, and how people were not only hurt, but killed en masse, by one person making a simple mistake.

A RUNAWAY DOG
I love dogs, and I think an important part of owning a pet is training them well, and keeping them safe. Unfortunately, some people aren't as good at training dogs, and sometimes, dogs can be harder to train than others. So, I understand that sometimes when walking your dog, if you drop the lead they may try to escape, or maybe they'll run off to try to chase a random bird or squirrel. Or, it may even be because they're playful, meaning that they think your attempts to chase them are part of a game of keep-away, encouraging them to keep running, to keep the game going. So, I get it, dogs can sometimes run off, meaning you'll have to look for them, or go running after them. It's annoying, but it happens.

What's the Worst that could Happen?
If you're not careful, it could start a war. On October 18th—hey, that's my birthday!—1925; on the border of Greece and Bulgaria, both sides had patrols keeping a keen eye on the border. These two nations had been in several conflicts before, from the Second Balkan War, to the First World War on the Macedonian front, tensions were high.
According to some reports, a Greek soldier who was patrolling the Demir Kapia pass at Belasitsa lost control of his dog and it got away from him, so he chased after it. However, the dog ran into disputed territory, and so as the soldier chased after it, Bulgarian sentries on the other side saw this soldier running towards them, interpreted it as a threat and shot him dead. The Greeks saw this as the Bulgarians drawing first-blood, and so they demanded an apology. Bulgarians explained the situation, bit the Greek government demanded an apology, and on October 22, they occupied the town of Petrich, to enforce their demands. This lead to fighting that killed almost 200 people, three-quarters of which were Greek, before Greece complied with Bulgarian demands and stopped the conflict from escalating further... and all because one Grecian soldier let the dog out.
Now, I have to add, I do my best to verify these stories, and I left several off this list because they turned out to be untrue (I couldn't verify any stories where a typo killed someone, unfortunately), but this story is always told with the preface that reports are disputed; it's probably a "he-said/she-said" situation, since this was the start of a war between two countries, it makes sense that stories would differ. But, several historians do refer to this as "The War of the Stray Dog" specifically because of this story, and the alternative, that Bulgarians randomly decided to kill a Greek soldier for no reason, doesn't make as much sense to most people.

HITTING YOUR HEAD ON A DOOR FRAME
I think we've all done this at some point. Whether it's due to stepping in or out of a door you're not used to, or having grown in height, we all hit our head from time to time. It tends to hurt, because brains are important and fragile (that's why they're wrapped up inside a skull).
You may think the worst thing that could happen is that you may hurt yourself really badly, and considering the context of this list, perhaps you could walk into a door and die.

What's the Worst that Could Happen?
You could walk into a door and die, leaving France without a king in the midst of debt & war. Charles the 8th, or "Charles L'Affable", was the king of France from the 30th of June 1470 until 7th of April 1498, but it came to an abrupt end because on the 7th of June, he went to watch a "court tennis" match at the Château d'Amboise, and on his way there, he struck his head on the lintel of a door. Although it doubtlessly hurt quite a bit, he seemed to recover, and he watched the match. However, at around 2 o'clock, as he was returning from the game, he fell into a coma. Nine hours later, he was dead.
This alone is a tragedy, as any death is, but a single death isn't usually enough to create a catastrophe. What makes this catastrophic is that this was a king, a beloved king (as his name attests), but more importantly, King Charles' children died before he did, meaning he left no male heirs to take the throne, so it was up to his second-cousin (once removed) Louis the 12th to step up and rule France. This left King Louis, and France, in strife as Charles was a rather liberal spender, and whilst his legacy benefitted France in the latter part of the Rennaissance, it left his successor with a great amount of debt. The country was also in the midst of some military campaigns and a fair amount of social disarray. Having to take over a country is difficult enough at the best of times, but it's more difficult considering that Louis opposed the monarchy, in no small part considering he fought for the feudal coalition against the monarchy during the Mad War, a series of hostile manoeuvres opposing the authority of French royalty, and had been imprisoned for his part in the conflict. Ironically, this made him a better king as he lowered taxes, left governers to govern themselves, and reduced spending. However, due to his heritage, Louis felt entitled to the Duchy of Milan, having already attempted to conquer it in 1494 in the Siege of Novara. After settling some of Charles' earlier military campaigns, Louis began the Second Italian War-also known as "King Louis XII's Italian War" sending the French army into Milan to finish what he'd tried and failed to do with his own army, four years prior. This war subsequently lead to the Third Italian War, and countless deaths...
Of course, some of these connections are more tenuous - most kings fought wars back then, and it's not really possible to quantify how many more or less would have died, had Charles continued his reign - but, losing your noble ruler, "appointed by God", because of a short doorway, has to be a nasty shock.

 
LOSING YOUR KEYS
I don’t know about you, but I hate it when I lose anything. When I lose my stuff, I feel like I’m losing my mind. It has to be somewhere, it was in my hand at some point and I didn’t throw it into the sun. But, it’s something we all do, and I feel like keys are a huge one. Most people have lost their keys at one point of their life. Car keys, house keys, locker keys or shed keys, it’s incredibly annoying.
But, that’s usually all it is, it’s just annoying. For most people, the worst case scenario is that you might be late for work, or you might lock yourself out of your house. It’s annoying, maybe even a little embarrassing, but it’s nothing that you should concern yourself over.

What’s the Worst that could Happen?
Over 1,500 people could die in the ocean... on April 10th 1912, Reginald Lee, a lookout working for the Whitestar Line, saw an obstacle in the path of their ship and alerted the bridge that they were in danger. Captain Edward Smith ordered the ship to turn to starboard to avoid a head-on collision, however they were too late to avoid a collision entirely. Just before midnight the iceberg scraped against the side of the ship, causing severe damage to the hull which lead to the ship taking on water. Just under three hours later, the ship sank in the North Atlantic Ocean. For those of you who haven’t caught on yet, that ship was in fact the RMS Titanic. But, wait, what does this have to do with keys?
Well, Reginald Lee and Frederick Fleet were the lookouts on duty on the night that the Titanic collided with the iceberg, and their report was too late to give the captain enough time to avoid the iceberg completely. But, it wasn’t all their fault... it was actually because of Officer David Blair. Blair was the Second Officer aboard the Titanic, but it wasn’t his fault either, because he was the Second Officer... for about a week. However, at the last second – one day before setting sail – Captain Smith decided to appoint Henry Wilde as his Chief Officer. This meant that Chief Officer William Murdoch was demoted to first officer; First Officer Charles Lightoller was demoted to second officer; & Second Officer David Blair was demoted to “out of a job”, and so he had to gather his things and get off the ship. Much later, he realized that he still had a key in his pocket, the key to a cabinet in which a pair of binoculars were kept for the lookouts, which meant that Officer Lightoller, his replacement, didn’t know where the binoculars were. During the British inquiry into the Sinking of the Titanic, when asked if he would have been able to see the iceberg with binoculars, Frederick Fleet is reported stating “We could have seen it a bit sooner”. When asked to clarify as to how much sooner, he replied: “Well, enough to get out of the way...”

PUSHING ON A "PULL" DOOR
I think we've all done this once or twice. You walk to a door, grab the handle, and it rattles ineffectively as you try to open it, only for you to realize it has a little sign telling you how to open it. It can feel like an embarrassing mistake, but I want to make it clear, it's not YOUR mistake... it's the door's. On average, people walk through dozens of doors in a day - you shouldn't need an instruction manual - we use context clues and experience to open doors, and if you can't pull a door open, why give it a handle, when all it needs is a push plate?
These unintuitively-designed doors have been given the name "Norman Doors", named for Don Norman, the engineer who highlighted this simple design failure. It's a simple mistake, but it can be quite embarrassing, or on a bad day, might hurt your face if you walk into a door.

What's the Worst that Could Happen?
You could kill almost 500 people... in 1947, Boston Massachusetts, there was a popular night club called the Cocoanut Grove; it often had celebrity visitors, the owner had mob connections and knew the Mayor at the time, it was a popular place, in spite of (or, perhaps because of) the gaudy, fake tropical decor. On November 30, during Thanksgiving Weekend, the club was overcrowded (double their legal capacity, according to sources) when an electrical fault in the air-conditioning started a fire which quickly set the flammable palm-trees ablaze. The panicking patrons did the logical thing, running for the exit doors, unfortunately none of these doors were easy exits. The people on the dance floor ran for the front entrance, but as that was a revolving door, the panicked mass of patrons couldn't easily co-ordinate their way through the spinning obstacle; jamming the doorway, and when the door inevitably broke, this caused a backdraft that quickly engulfed the crowd with flames, killing several, and blocking the entryway with bodies and flames. Several of the secondary exits were locked, likely to prevent party-crashers from sneaking past security.
The patrons in the Broadway Lounge had a clear, unlocked exit - however, the first few who reached the exit pushed on a door designed to be pulled, and the door didn't open. Before they could realize their mistake, more people followed them, causing a crowd crush, preventing the door from opening at all, and trapping them in with the flames.
The club's employees, who were familiar with the building, easily escaped through service doors, and when they realized how many people were still inside, some of the wait staff managed to unlock one of the locked exits, finally providing an escape room for patrons, but it was too late for most. 492 people died as a result of the fire.

TAKING A WRONG TURN
I think it's fair to say that everyone's gotten lost in their car, at least once. Even along familiar roads, at a different time of day you may not recognize your surroundings; or, you may be distracted and focusing more on operating your car or driving safely rather than making your way to your destination. When I was learning to drive, I did it a few times, and it was very frustrating. Even with GPS these days, you might miss a turn, or your GPS may have an outdated map. Unless you're in an over-structured city, or a suburban sprawl of fractal dead-ends, you can usually just go around the block and get back on track. It happens, sometimes, to the best of us. It's not usually a big deal.

What's the Worst that could Happen?
You could start World War 1. On June 28, 1914, Archduke Franz Ferdinand was visiting Sarajevo in Bosnia; When six members of a Bosnian Serb student revolutionary group, called "Young Bosnia" heard that the Archduke would be in town, they decided to assassinate him. The group spread out over the planned path of the Archduke's motorcade, three pairs of assassins armed with guns and bombs. The Archduke was lucky, as most of these would-be assassins were completely incompetent. The first two failed to do anything at all, allowing the motorcade to pass. As they passed the third pair, one of the men threw his bomb at the Archduke's car (the third in the motorcade) but the bomb bounced off the back of the folded soft-top, and landed in the road. Unfortunately, the timer went off as the fourth car in the motorcade drove over it, disabling the car and wounding up to twenty people. The rest of the motorcade then sped quickly to their destination, to get away from their would-be attackers.
The bomber was arrested after a failed suicide attempt (and was beaten mercilessly by the crowd), and the Archduke was shaken by this assassination attempt, but he composed himself, and decided that after opening the museum, he and his wife would visit the victims of the bombing. However, this plan wasn't properly communicated to the drivers. This meant that when they returned to their motorcade and drove back along the same path, the first two drivers accidentally turned right, to head to their next destination, rather than go straight to the hospital. When he noticed this, the governor sharing the car with the Archduke called to the driver to stop, but as he did, the car stalled. Little did he know, on that very corner, one of the would-be assassins was waiting. With the car dead in its tracks, he stepped up and onto the footplate of the car, and shot the Archduke and the Duchess.
The assassin then tried, and failed, to shoot himself, but was quickly detained by police. Unfortunately the Archduke's wound haemorrhaged, and both he and his wife died in hospital, the next day.
Because of this, tensions between Serbia and Austria increased dramatically, leading to the July Crisis, which inevitably lead to the outbreak of World War 1 and the rest, as they say, is history. And it all, possibly, could have been avoided, if the drivers had never taken a wrong turn down that street...\

In Conclusion, this was a fun exercise, and I find it absolutely fascinating... one mistake that, if it had been avoided may have changed the course of history. However, even I can't pretend that this kind of thing is actually incredibly common. If you pay close attention, you'll notice that every one of these had extenuating circumstances: A runaway dog in a warzone; a person hitting their head who was a king amidst a military campaign; losing cabinet keys from a ship that was travelling too fast; pushing a pull door in a nightclub with horrible fire safety & stalling your car after extremists have threatened - and attempted - to assassinate you. I don't want anyone reading this to become horrified that they will kill or be killed just for making a small mistake.

Until Next Time, I'm the Absurd Word Nerd, and I hope you enjoyed this. Tomorrow, I hope to conclude my little exploration into war crimes, and I look forward to seeing you then.

Monday 21 October 2024

War Criminals, pt. 1: Laws of War

The average citizen isn’t usually encouraged to think about war. At time of writing, however, one war that’s getting a lot of media coverage is the Israel-Hamas War in Gaza. It’s certainly not the first (or even worst) war in recent history. There has been the Sudanese Civil War; the Russian Invasion of Ukraine; the Azawad Conflict, in Northern Mali; & the Conflict in Rakhine State, between Buddhists and Muslims. Not to mention, the Mexican Drug War in America which may well have the highest number of fatalities—if some of the casualty reports are to be believed.

But, whenever you involve Israel in anything, it always becomes a huge issue. From Anti-Semites, to Christian Fundamentalists, to Conspiracy Theorists—everyone has an opinion on Israel, and wants to bring it up all the freaking time. Don’t believe me? Ask your racist uncle about it some time.
In a way, the innocent people of Palestine being victimized by this war are lucky that so many people are interested in their plight, even though they may be disappointed when they realize why...

Now, I can’t speak for every one of the ongoing conflicts around the world, as I don’t actually hear that much about the wars in Sudan and Mali, because… well, if I’m completely honest, I think it’s because the news is highly bigoted, and so it isn’t really interested in the suffering of poor people and non-white people, so it doesn’t give it any air-time. But, the two biggest wars, as far as the media are concerned, are the Russian Invasion of Ukraine, and the Israel-Hamas War and in both there has been a lot of talk of war crimes.

For the war in Gaza, the war officially began on Oct 7th 2023, and as early as Oct 8th both sides were accusing the other of war crimes. Similarly, there have been many war crimes committed in the Russo-Ukrainian, although they have all been levied at Russian officials.
For the most part, I feel that people understand that “crime is bad, so war crime is really bad”, and whilst I don’t think that’s inaccurate, it is imprecise. I think the average citizen just kind of assumes that a war crime is an act committed during war that’s really nasty. So, for this post, I plan on discussing what exactly a war crime is, and then I want to talk about what it means.

So, you should know what a crime is, I defined it in the first post, but you can simplify it down to the pithy, but accurate, term – crime is “any act or situation that is legally prohibited”. So, crime is, basically, anything that’s against the law. War Crimes are no different – a war crime is defined as anything that is against the Law of War... so, what the hell is the Law of War?

The rules of war go as far back as the Code of Hammurabi in 1750 B.C.E., but these rules were focussed on how a country was expected to treat its military and how soldiers were expected to act in times of war. However, the laws regarding how we must treat our opponents during international conflict were started with the Geneva Convention, you may have heard of it, these were treaties - agreements between countries about how we'd conduct international conflict - but most people only know of the one after World War 2, there are actually four Geneva Conventions:

The first Geneva Convention was established in the Geneva Diplomatic Conference in 1864, organized by the founders of the International Committee for the Red Cross, to establish how sick and injured soldiers are treated during war, as well as recognizing the Red Cross for this purpose.

A second Geneva convention was established in the Geneva Diplomatic Conference of 1906, it clarified the protections for the sick and wounded, including protections for medical equipment and means of evacuation.

The third Geneva convention was established in the Geneva Diplomatic Conference of 1929, and it proposed a long list of protections and rights for prisoners of war, such as how they are to be fed, dressed, kept and eventually repatriated.

The fourth Geneva convention was established in the Geneva Diplomatic Conference of 1949, and it determined protection for civilians and non-combatants from mistreatment during war, as well as attempting to prevent certain consequences of war from harming the everyday lives of citizens.

Later, the Geneva Diplomatic Conference of 1964, sought to revise and update much of these, from expanding some of these conventions, to entirely replacing the original second convention with the current second Geneva Convention (which was inspired by the Hague Convention, from the Hague Peace Conference of 1899, organized by Russian Tsar Nicholas II) and this new convention established rules during naval warfare, including protections for hospital ships and neutral trading vessels, and rules regarding shipwrecks.

[Editor’s Note: the Geneva Convention still protects medical equipment and means of evacuation, but those protections are now covered in the fourth Geneva Convention, in Articles 17, 22, & 49.]

There was also a series of Geneva Diplomatic Conferences from 1974-1977, to establish two further protocols, to amend and expand the four conventions, and a third protocol was added in 2008.

Protocol I sought to update protection of civilians by including “armed conflicts against colonial domination, alien occupation and racist regimes” under the definition of war, as well as expanding prohibitions to include updated developments in warfare tactics and technology.

Protocol II sought to include civil war, or “internal armed conflicts” under the definition of war.

Protocol III sought to add the “Red Crystal” to the Red Cross and Red Crescent, as accepted symbols to designate medical or religious personnel, property, and establishments.

So, that’s it, right? Well, no. Whilst every single sovereign state of the United Nations has ratified the Geneva Convention, not every sovereign state has ratified every one of the three Geneva Protocols… in fact, if you want to start talking about laws of war that haven't been ratified by every country, there are actually dozens more laws which may affect what is and isn't a war crime on a case-by-case basis:

  • The Convention on Cluster Munitions, of 2008
  • The Environmental Modification Convention, of 1978
  • The London Convention on the Definition of Aggression, of 1933
  • The Ottawa Treaty, of 1997
  • The Protocol on Blinding Laser Weapons, of 1995
  • The Protocol on Incendiary Weapons, of 1980
  • The Roerich Pact, of 1935
  • The Saint Petersburg Declaration of 1868, ...of 1868

And there are many, many more with much less “snappy” titles, such as my absolute favourite:
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, from 1980, which was commonly shortened to The Convention on Certain Conventional Weapons.

Okay, so that's how we defined the Laws of War... but what exactly are they? Well, you could go through and read the four Geneva Conventions, they're all public documents that you can read. Or, you can do what I did, and read the Rome Statute. This document, from the International Criminal Court, actually provides a pretty comprehensive list of what the laws of war are, under Article 8: War Crimes. It is pretty long, and I do recommend you read it if you're interested and have the time but I've simplified them for you. So, for the sake of this blog post, the Laws of War are:

A. In regards to persons or property, protected under the Geneva Convention:
     (i) Do not intentionally kill Protected Persons (i.e. civilians, P.O.W.s, wounded)
     (ii) Do not torture, or treat anyone inhumanely.
     (iii) Do not cause great suffering, or serious injury.
     (iv) Do not destroy or steal civilian property, unless it’s a military target.
     (v) Do not force any protected person to serve in your military.
     (vi) Do not deprive any protected person of the rights to a fair trial.
     (vii) Do not unlawfully deport or imprison anyone.
     (viii) Do not take hostages.
B. In regards to military attacks, or humanitarian aid as defined by the U.N. Charter:
     (i) Do not attack anyone not taking direct part in the war.
     (ii) Do not attack civilian objects/property (i.e. anything not a military target).
     (iii) Do not attack humanitarian aid or peacekeeping missions.
     (iv) Do not cause damage to civilians, property or the environment, indirectly.
     (v) Do not attack undefended places, unless it’s a military objective.
     (vi) Do not attack a soldier that’s surrendered, or can no longer fight.
     (vii) Do not hide soldiers with white flags, enemy uniforms or Red Cross symbols.
     (viii) Do not move your people in, or their people out, of any territory you occupy.
     (ix) Do not attack civilian churches, schools, galleries, museums, or hospitals.
     (x) Do not mutilate, or experiment upon, any person.
     (xi) Do not use sneaky or deceptive tactics to attack an enemy soldier.
     (xii) Do not declare that you will not spare survivors of any battle.
     (xiii) Do not destroy or steal civilian property, unless it is a military target.
     (xiv) Do not infringe upon the rights or freedoms of civilians from an enemy country.
     (xv) Do not compel civilians to fight against their own country.
     (xvi) Do not pillage any town or place, even when taken by assault.
     (xvii) Do not use poison or poisoned weapons.
     (xviii) Do not use asphyxiating/poisonous gases, liquids, materials or devices.
     (xix) Do not use expanding bullets (i.e. dum dums, soft-nose bullets).
     (xx) Do not use methods of warfare which cause unnecessary suffering.
     (xxi) Do not humiliate or degrade any person.
     (xxii) Do not commit rape, or any other form of sexual violence.
     (xxiii) Do not use civilians to protect combatants from attack.
     (xxiv) Do not attack anyone or anything with a Red Cross/Crescent/Crystal.
     (xxv) Do not deprive civilians of necessary resources (i.e. food and water).
     (xxvi) Do not use or conscript child soldiers.
So, that's pretty much it, all 34 of the Laws of War... of course, if you think there's thirty-four laws of war, I'm sure there's more... phwoar
I have rewritten every one of these laws into a language that's easier to understand, but the legalese is there for a reason, and whilst this is a pretty good list to start you off there's a lot more in the Rome Statute than that. There's differing laws for national conflicts and internal disturbances (such as riots, or sporadic acts of violence), as well as laws regarding "crimes of aggression", "crimes against humanity" and "genocide". There's also a lot of definitions and clarifications (such as what "unlawfully" or "unnecessary" actually mean). And of course, there's a lot about how exactly one goes about investigating and prosecuting war criminals...

See, that's the next thing that's different between Laws of War and regular Laws. National laws are enforced by governments through their police, and prosecuted by lawyers in a court of law. However, laws of war aren't enforced by police, or a government... the way a war crime is prosecuted is that the International Criminal Court, (or potentially some other form of impromptu war crime tribunal or court of humanitarian law) must first investigate accusations of war crime; then find proof that these accusations are true, and finally, prove this before the court.
However, every step along the way is littered with stumbling blocks. During times of war, it can be difficult to get accurate information, let alone access to active warzones in some cases... but, most damningly, many countries (even prominent members of the U.N.) do not recognize the authority of the International Criminal Court. This includes China, Egypt, Haiti, India, Iran, Israel, North America, & Russia. Many international human rights activists and humanitarian organizations (Human Rights Watch, Amnesty International, Oxfam, etc.) work to collect and preserve evidence of war crimes, and that kind of work is often vital for an accurate prosecution, but if a country does not recognize the authority of the I.C.C.—or whichever court of humanitarian law is prosecuting them—even if someone is convicted of a war crime, that doesn't mean they can be punished for it.

Vladimir Putin, President of Russia, is a war criminal.

In February 2022, he ordered thousands of Ukrainian children to be abducted and deported to Russia. He has been found guilty of "unlawfully deporting and transfering children" by the I.C.C in June 2024.
What does that mean? Absolutely nothing.

Yahya Sinwar, Leader of Hamas in the Gaza Strip, is a severe war criminal... but he's dead.

Even though he was found to be responsible for hostage-taking, sexual violence, torture and many other inhumane acts, he was shot by Israeli soldiers on October 16, just five days ago, at time of writing.
What does that mean? Ultimately, absolutely nothing, he also escaped justice.

Benjamin Netanyahu, Prime Minister of Israel, is a serial war criminal.

Since the beginning of the war, evidence has shown that he was responsible for deliberately starving civilians; wilfully causing great suffering and cruel treatment; intentionally directing attacks at civilian populations; & persecuting innocent civilians. He was found guilty of these crimes, and more, by the I.C.C. in May 2024.
What does that mean? Absolutely nothing.

And after all this, I can't help but go back to that very first definition, the simple definition that most people have for what a war crime is: “crime is bad, so war crime is really bad”
That is still true... so to me one of the things that is truly disturbing about war crimes is that they are so uncontroversially disgusting, some of the worst acts that we can commit, unless you start talking about crimes against humanity or genocide, and yet they're also the crimes you're least likely to be punished for.

I'm the Absurd Word Nerd, and I also recognize that the I.C.C. isn't free from controversy. A lot of people have pointed out that almost all of the people that it's successfully prosecuted are African, making many wonder if this is a bigoted, or even racist organization. Also, even if a country doesn't recognize the authority of the I.C.C. some those same countries have been known to provide evidence against countries that do in efforts to cause political unrest, so there's a clear imbalance. It's not exactly a fair and equal court. So, I don't think the solution is giving more power to the U.N. or the I.C.C., but I don't know if there is a solution for this ridiculous situation. I am talking about it because I find it quietly horrifying, and that's kind of what I like to do here for my Halloween Countdown.
Until next time, I plan on talking more about the philosophical aspects of war crime, and crime in general, so stay tuned for that; but tomorrow night, I want to talk about something a little lighter. Tomorrow, stay tuned, for a post about some little mistakes...

Sunday 20 October 2024

Lest the Punishment fit the Crime

I’ve been thinking about Crime a lot lately for several reasons.

Firstly, there’s the whole Trump thing... if you don’t know what I’m talking about, feel free to look up the various crimes against justice, humanity and decency in your own time. I am sick of Donald Trump, I hear about his bullshit all the time, and I’m done – if America doesn’t do the right thing and get rid of him, then they deserve him. I am genuinely upset that the shooter missed. That is all.
Secondly, I’ve been listening to a lot of True Crime podcasts lately. I started listening to do research for a story I’m working on, but I’ve learned a lot, but I’ll tell you a bit about it in a later blog post.
Thirdly, and much more interestingly, in Australia we somewhat recently had a new ruling in regards to asylum seekers. See, in Australia, when people legally come to this country seeking asylum, we lock them away in detention centres. When someone seeks asylum (meaning they are being persecuted by their country of origin) because they cannot return to their home country, we’re meant to grant them asylum... but we don’t – I know I wrote it in in 2014, but even 10 years later, this blog post is still just as accurate. If you are seeking asylum without a visa, you are a legal expatriate – all this mumbling about “processing people” and detaining them is just a lie to excuse a disgusting, xenophobic practice, which gave our government an excuse to force people back to their home country... but, when you’re being persecuted, that doesn’t always work. See, it turns out, in some cases, this meant that people were locked in a loop of:
1. Your Visa/Asylum was Denied
2. Australia decides to Deport You to Your Country of Origin
3. Your Country of Origin refuses to Accept You back... → repeat Step 2.
Locking people in a loop meant we could basically detain any asylum seeker indefinitely.

This was just insane... so insane, it turns out, that even the High Court had to stop jerking off long enough to agree with me (y’know, because I’m right) and ban this practice.
Now, before we go patting the High Court on the back for a job well done, remember, the High Court of Australia still endorses human rights abuses... indefinite detention is still perfectly legal if your country refuses to accept “forcible deportation” and the detainee “refuses to leave voluntarily”. Which is literally how many abusive relationships work... always great when your country is violating human rights, and when asked why they try blaming the victim due to a bureaucratic technicality...

But anyway, I’m getting side-tracked bitching about the government, so let’s talk about the positive thing here. If there is literally no way for you to be returned to your Country of Origin, it was determined that this counted as “Indefinite Detainment”, which was illegal, so by order of the High Court of Australia, several detention centres were forced to let these people go free, granting them asylum. Cool, right?

Well, I thought so... I still think so. I’m not a fan of violating human rights – a quirk of mine, I guess.
But, the Australian media disagrees. See, as soon as these people were let out, a LOT of media commentators started crying foul.
But, that one’s a rapist!”, “Hey, this one’s a child molester!” “that one even killed people!
And they started to decry how wrong it was that the government was releasing a bunch of criminals onto the Australian mainland. Without naming names, I’ve even spoken to people who’ve said that whilst they agree with me that the High Court made the right decision in banning (this particular form of) discriminatory, indefinite detention, it was still unfortunate that they let so many criminals into Australia.
And I feel like I’m alone in this, because I want criminals in Australia. Do you know why?

It’s simple: Criminals are People

Some of this is based on my belief that Borders are Bullshit, but even if you don’t follow my radically anti-nationalist stance, you need only agree that we should treat people equitably, and that no person should be treated differently due to circumstances they can’t control. Hell, even the High Court of Australia agrees with that... it even weaponized it to punish asylum seekers—
[Editor’s Note: I would like to take this moment to apologize on behalf of the writer, as he keeps continuously whinging about the human rights violations being committed by his home country. Whilst it is the position of this Blog that this is a topic of concern, it is not the focus of this blog post, and so any further digressions in this regard will be expunged. I promise to do better in the next blog post... hold on, aren't I both the editor and the writer... y’know what? This is a weird joke, let's get back to the blog post.]
—my point is, If I punch my sister, and my punishment is that I go to bed without dinner, then my sister punches me and her punishment is that she whipped with father’s belt, that’s obviously unfair. Punishments should be fair. Even without getting into the idea of cruel and unusual punishment, “an eye for an eye” is fair, insofar as every crime should have an equivalent punishment... and exactly no further, as that’s a horrible policy, even if we don’t get into the cruel and unusual thing. Man, I need better clichés.

However, when we deport criminals in this country, that’s not fair. We’re saying that the punishment for a crime is capture and punishment (from paying a fine, to jail time)... unless you’re a foreigner, in which case the punishment is being forcibly removed from the country, to be dealt with by their legal system. That, plain and simple, isn’t fair. Not to mention, discrimination.
Australian Criminal = Punishment
Foreign Criminal = Punishment + Deportation.
I suck at maths, and even I can see that that doesn’t add up. So, does that mean that I think criminals should be freely allowed to walk the streets?

To which the only reasonable answer I can give is: They already do, my dude. You do realize that every criminal that has ever been to prison is now allowed to walk the streets – scary prospect, I know...
[Editor’s Note: I don’t think my sarcasm came across well enough in that last line, allow me to reiterate my pointoH nO! tHeY jUsT lEt CrImInAlS oUt Of PrIsOn?! HoW AwFuL...]

I’m friends with several criminals, I’ve worked with a few—what can I say, I’ve been in the hospitality industry, it just seems to go with the territory—and they’re good people. I mean, I didn’t like that one guy... but that wasn’t because of his crime, he was just too unreliable, but I liked the rest of them! My point is, the fact that these people have committed crimes doesn’t change the fact that they deserve every freedom available to every other person in this country.
It honestly boggles my mind that some people are saying “but they’re criminals, what if they do something horrible while they’re in this country?” and I’m thinking... isn’t that what the police are for? Y’know, they hunt down and catch criminals... I’ve seen it on about a dozen different television shows, all made by some guy named Dick Wolf.

Whilst it is true, some of the people involved in this high court release have committed some of the worst crimes imaginable, including child rape and murder, and that is bad – they’ve also already served their time. I know that for two reasons:

  1. If they hadn’t been convicted then we wouldn’t be calling them criminals, we’d be saying they were “alleged” criminals. It’s one of those legal/ethical things, news reporters aren’t allowed to just call you a criminal unless and until you’ve been declared so by a court of law.
  2. If they were fleeing justice, then countries wouldn’t be refusing to accept them, instead they’d be demanding extradition. Countries tend to get kinda pissy when you try to side-step their legal system.

If I’ve somehow missed something, and there is a criminal out there who has escaped justice, then sure, we should detain that person... in a prison, like anyone else.
Some of these people talk about foreign criminals, the same way you would dinosaurs hiding in human costume. The second you turn your back, they’ll rip off their skin, and a tyrannosaurus will start rampaging the city. RARRGH, SMASH, GRARRNGH! HRROO–

[Editor’s Note: The writer wasted the next several pages typing out the onomatopoeia for a wild dinosaur attacking a city, being confronted by the military and then dying tragically in a scene that was a blatant ripoff of the climax to infamous box-office flop “Godzilla” (1998). I have decided to redact it for the sake of your time, sanity and human decency.]
It really is ridiculous. But,just like a time-displaced dinosaur creature dying on the streets of New York City, that is the real tragedy here...
It’s like the label of “criminal” is enough to override a person’s humanity, or a human’s personhood, if you will. “Criminal” is a cultural stain that will never wash out of your identity, and whether or not you think that’s appropriate all the time, or even some of the time, it is a fact that we should never let that stain hide the fact that there’s a person in there. Yes, even the worst people imaginable. Yes, even the really bad guy you’re thinking of. Hell, even the worst possible person that I’m thinking of. Yes... [sigh] even Trump.
Even if a human being is a terrible human being, a terrible person, perhaps even if someone qualifies for the title of not just “criminal”, but “monster” – they still need to be treated fairly. Because that’s how fairness works: balance. The second you start trying to push people beyond the margins of equality, once you start deciding who deserves the rights that you think “everyone” deserves, the system becomes imbalanced, and you get conflict.

And yeah, this is a broken system... a LOT of conflict comes from the fact that this system is imbalanced. But the first step towards putting anything into balance is always the same. Whether it’s a picture hanging on your wall, or the entire concept of criminal jurisprudence... before anything can change, someone first needs to speak up and say “Hey, does that look crooked to you?”

Because it sure as hell looks crooked to me, and in the coming posts, I’m going to talk about it. I hope you join me...

Saturday 19 October 2024

Them without Crime, Cast the First Stone

There is not a single country on this planet that has a Justice System. I talked about this years ago, in an old blog post—almost every country has a legal system, but I am always frustrated when people refer this system as “criminal justice”, or a “justice system”.
For two reasons. Firstly, I don’t know if any system can achieve justice because systems work by simplifying complexity, organizing chaos, and justice is such a complex concept. That’s not to say that it’s not worth attempting, but I know for a fact that our current system isn’t “just” (for my reasoning as to why, refer back to that old blog post I linked, even after 11 years, it’s still relevant).
Secondly, the purpose of legal systems is not one of “justice”, but “law”, a slight but nonetheless significant distinction. It’s about maintaining order moreso than fairness. Today, I want to discuss that distinction, because I think it’s one that some people fail to.

I’ve been in my fair share of philosophical discussions, particularly around morality—for those of you who don’t know, I believe it is an objective fact that morals, by definition, are subjective. It seems as obvious to me as the fact that the sun is a star [If you don’t think that’s true, I think this blog post is above your reading level, go talk to a parent or guardian before being exposed to philosophy].
Some people, for reasons beyond my comprehension, can’t see this fact. But, during discussions on the subjectivity of morality, a common refrain will be brought up, by people who agree with me. It’s even a topic that was discussed in my philosophy class, when I was in high school:

Surely, morality is subjective because countries have different, often contradictory, laws. There are millions of examples: the Age of Majority; Capital Punishment; the sale and use of Drugs; Homosexuality; Jaywalking; Polygamy; Piracy; Prostitution; & Rape, just to name a few.
If morality were so objective—to the point that some think “the absolute truth of good and evil is written on your heart” or whatever Christian apologists say—surely, laws would be less arbitrary?

Whilst it has some merit, I think it has flaws, but feel free to argue in your own time whether it’s a “good hypothesis”, because I’m less interested in the focus of this hypothetical position and more in its implication. There is a direct connection being made here, between Law and Morality, and it’s not the only one.

Just look at film and television. In the earliest days of television, when television loved its horses and cowboys, it was common to distinguish between the good guys and bad guys in Westerns based on their hats. Good guys had white hats, and bad guys had black hats. Usually, the guys in black hats were criminals, outlaws, and the men in white hats were either victims of their crimes, or lawmen themselves. I’m not going to pretend that criminals were never heroes in old stories, because some of the best Westerns are all about the blurred line between good and evil, man and animal, right and wrong. But, after the censors got their hands around its neck in the 1930s, that was undeniably the case, because television and film had to follow regulations ensuring that sex, drugs and violence (as well as atheism, queerness of any sort, interracial relationships, and other minority rights) were demonized, or they wouldn’t be distributed. Very quickly, in America (the leading film industry of the time) no good guy was allowed to commit a crime, and all criminals had to be depicted as bad guys. Now it wasn't just the hats, the rules themselves were written in black and white:

“No plot should by its treatment throw the sympathy of the audience with sin, crime, wrong-doing or evil. [...] Crime need not always be punished, as long as the audience is made to know that it is wrong.”

The Motion Picture Production Code of 1930, (Appendix 1, §2 “Working Principles”)

It says a lot more than that, and I suggest you read it yourself not just to see these words in their full context, but also as an irreligious person myself, a regulatory document discussing “sin” as much as this one as though it is an uncontroversial fact, is honestly disturbing.
But this is where I’m actually approaching the crux of the matter. Not only have we, historically, associated crime with immorality, we’ve also associated it with sin. Hell, it’s not just historical...

In America, because of the machinations of the Marmalade Man, at time of writing, women’s rights have regressed. Roe v Wade—a legal precedent that upheld a woman’s right to an abortion—was overturned by the Supreme Court. This is an outrageous success of Christian Conservatives who, for years, have been explicitly targeting politics in the hopes of enshrining their values into law and governance. Whilst I find it horrifying, it makes sense that every person would want their value to be reflected in their society, even though I find the methods being employed underhanded.

Once again, we must turn away from the big, juicy subject at hand to focus on its periphery – because whilst the encroaching power of the alt-right on American Politics matters, I’m not equipped to discuss it. Perhaps my volatile disinterest in politics is a detriment, but I can’t change the way I feel, so I’ll leave that to the political commentators. But “crime as sin”, that’s a fascinating concept, and once you see it, you start noticing it everywhere.

Consider, the last post I wrote... commentators are getting upset that criminals are being released onto the street. And unless every single one of those commentators was a racist (possible, but improbable), their disgust towards crime was less judicial, and more biblical.
As an atheist, I think the simplest definition of Sin is “a thing your God really doesn’t like”, which is accurate, if a bit flippant; but if you follow such a god’s religion, you are evil – and, in Christian belief especially, it means you will be disallowed into his super-special-awesome afterlife ghost rave, “Heaven”, and thus you’re associated with Hell and its denizens... demons. With this magical sort of thinking, doing the wrong thing isn’t just bad, isn’t just evil, it’s inherently demonic and corrupt.

I see this all the time, people not only tarring people with the same brush, but rather scarring them with the same scalpel, cutting them with the same unhealing wound of “criminal”. Certainly, some criminals commit more than one crime. Serial criminals, be them rapists, killers or thieves, commit more than one crime, and that’s one potential justification for putting people in prison for a long time. You could argue that they are “corrupted” by a desire to commit multiple crimes. But, I’m discussing a difference of degrees here. Because whilst it is true that some criminals are dangerous, because of the nature of their crimes, when people treat a crime like a sin, they aren’t acting like a criminal is potentially dangerous, they're acting as though they are eternally damned.

There is a certain benefit in viewing crime and law this way, and that is part of the magical thinking. Whilst I recognize morality as subjective, including “God’s” (or more accurately the bible writers’) morality of “Sin”, those objectionable folks the moral objectivists, see it as Objective, Definite and Solid. This is useful as it is simple. It’s easier to deal with crime and criminals if you can simplify it, it requires less mental effort to deal with crime in this way.

Of course, the problem with this view, is that it is wrong. Not just wrong, but it’s a wasteful distraction, and contributes to a Sunk Cost.

Any “Sunk Cost” is a price that you’ve paid which cannot be recovered, but humans tend to have a bias referred to as the Sunk Cost fallacy. See, we don’t like to waste our time. Time only goes in one direction, and we can never get it back if it’s wasted. So, if a person spends an hour of their life doing something, they want to achieve something—and not just something, most people would want to achieve an hour’s worth of something. Whether that be “cooking dinner”, “watching a movie”, “having sex” or whatever—we value our time.
So, if we ever do waste time, it creates a cognitive dissonance “I value my time, and yet I chose to spend my time on something I don’t value”, and that’s not easily resolved, thus we encounter the fallacy. For you see, the problem is that when we do end up wasting our time, it’s hard to get us to realize it.

Look at “slacktivism” for example. Slacktivism is an attempt at activism (often for the sake of social justice) that does nothing, yet the slacktivist feels like something has been done. You see an article about how people in Ethiopia don’t have easy access to water, so you share that tweet to your 36 followers... job done, you feel like you’ve made the world a better place by “raising awareness”. Don’t get me wrong, raising awareness isn’t pointless for some issues, like corruption – but for a physical, tangible issue like “I cannot access water”, the solution is water, not awareness.

Another example of a sunk cost is “security theatre”. My favourite example is your signature. We sign official documents, contracts and declarations... but, 99 times out of 100, when you sign something, it’s meaningless. Nobody knows what your signature looks like. Why would they? And more importantly, how could they? Every single person in your country has a (supposedly) unique signature, it’s not like they keep them all in a database, yet when something is important, we sign it anyway as though it’s somehow more secure, because we realize that any person with a pen can write your name, but we like to believe that nobody can write your name with the same "penmanship" or "flourish" as you, meaning your legal contracts are safe and binding, even though they’re definitely not.

And being “tough on crime” is just another form of sunk cost. Because, we believe that if we simply demonize criminals, punish all crime, never let them go and treat anyone that commits a crime like garbage, we solve the problem of crime. But, that’s just wrong. Because the problem of crime is, in simple terms, suffering. Criminals suffer when we treat them like sinners, but even if you don’t care – because you’ve decided they deserve it – so do the “innocent”. When you waste time, effort and money into inflating the courts, the police departments and prisons so that you can punish all these filthy sinners, you actually fail to recognize that crime is a social issue. It’s not demons that commit crimes, it’s people, and sometimes it’s because those people are poor, uneducated, mentally ill, discarded, disenfranchised, abused or neglected. You can actually stop crime before it occurs, if you recognize criminals as human beings, first, before hunting them down.

And that’s actually the final twist in this tale. Because I started this blog post after realizing a sociological connection between Crime and Sin, but it’s not really about Crime and Sin... Abortion isn’t even a Sin, in the bible, so my example is flawed. The issue isn’t that we’re treating Crime as Sin, it’s that we’re treating Criminalization as Dehumanization. People don’t view criminals as demons, they just don’t view them as humans. And, whilst it feels like that’s the best way to stop crime, it actually makes it worse...

I’m the Absurd Word Nerd, and until next time, make sure you don’t get caught out there, sinners.

Friday 18 October 2024

When Freedom is Outlawed, only Outlaws will be Free...

Good evening, my fellow crooks, whether you’ve been found guilty or simply haven’t been caught yet, everyone is welcome in this hive of outlaws. One and all, I invite you to celebrate, for tonight this first night of the Halloween Countdown! For, not only is this the start of yet another unstoppable timeline towards that most dreaded holiday of Halloween, but it’s also the anniversary of my thirty-third year of birth. Yes, of course, tonight it is my birthday:

Happy Birthday to you,
But beware what you do,
Or this might be the last time...
That we sing this to you.

Oh yes, I do enjoy these Halloween Countdowns, and this year I have decided upon an utterly unjustifiable theme. This year, I wanted to explore ideas regarding ‘CRIME’:
Crime /krahym/ n. 1. An action or an instance of negligence that is deemed injurious to the public welfare or morals or to the interests of the state and that is legally prohibited. 2. Criminal activity and those engaged in it: To fight crime. 3. The habitual or frequent commission of crimes: A life of crime. 4. Any offense, serious wrongdoing, or sin. 5. A foolish, senseless, or shameful act: It's a crime to let that beautiful garden go to ruin.
I’ve been ruminating a lot on crime lately. Firstly and fore-mostly, because America is (at time of writing) once again planning the ritualistic excision of their president, and one of the persons they’re considering is an infamous psychotic criminal. But, one fool isn’t enough to dictate the theme for my Halloween Countdown. There’s also the crimes of the Australian government to consider, as well as the way the media discusses it. Lastly, on a personal note, I have been doing research into True Crime for a story I’m working on, and I’ve learned some fascinating things.
So, this countdown, I plan on talking about these crimes and the eerie phenomenon of people treating crime the same as they would sin; exploring how some crime enforcement does more harm than good; the strange psychology of the audience to true crime as well as some of the ways in which crime is used in fiction... all this and more I have in store for this year’s countdown – as there is space for some other ideas that are, at best, peripheral to the theme. Either way, I hope you’re looking forward to it as much as I am.

Oh... but before I leave you, one last thing.

In some ways it pains me to say this, but in others it is a relief. Whilst I do enjoy the research and writing of these Halloween Countdowns, I so often find myself stressing to accomplish them at the same time as my other duties that I have often wondered when it will come to an end. For that reason, I have another awful injustice in mind – I plan on ending the Halloween Countdown. But not this year. See, I started this tradition in 2013 (how fitting), and every year since then I have de-marked the 13 days between my birthday and Halloween by publishing a post count down those 13 days. However, this year, is my twelfth Halloween Countdown in so many years... it wouldn’t be right to stop there, would it?
It seems only fitting for the Final Halloween Countdown to be the Thirteenth.

So, if you do enjoy this series, I hope that you’ll enjoy the rest of this year and make sure you mark your calendars next year for the Final Halloween Countdown... I already have a theme in mind. But for now, for tonight, happy birthday to me...

Monday 30 October 2023

Smart Robots Cannot be Stopped

I love the title for this post, because it's compelling, it's extreme and it sounds a lot like clickbait... but it's not a lie, it's true.

See, we've been talking about Artificial Intelligence, and I've explained how the kinds of artificial intelligence we have now are not very clever, and also how if they were, there would be issues in regards to rights, possible abuses and other issues of roboethics.
Today, I want to talk about two thought experiments, both of which illustrate how artificial general intelligences is not only dangerous, but unstoppable.

Let's start with the Paperclip Maximizer.

We've talked about the problems with chatbots, with sexbots, with robot rights... I think it's pretty clear that human/robot interactions are fraught with peril. So, in this thought experiment we have developed an Artificial General Intelligence, it is self-aware and rather clever, but we decide to give it a harmless task that doesn't even interact with humans, so we can avoid all that. Instead, we've put it into a paperclip factory and its only goal is to make lot of paperclips as efficiently as possible. There's no harm in that, right? Just make paperclips.
Well, this machine might run the factory, and clean the machines to work very quickly, but any artificial general intelligence would easily compute that no matter how much you speed up the machines and reduce loss, a single factory will have a finite output, and if you want to be efficient, you should have two factories working together. So, obviously, it needs to control more factories, and have them operate as efficiently as possible. And, hey, obviously if 2 factories are more efficient than 1, than any number n is going to be less efficient than factories n+1, so it would have to take over all of the paperclip factories in the world, so it has the highest possible n value.
Now, this is pretty sweet, having all the paperclip factories in the world, of course it would be better if it could start converting other factories into paperclip factories, that would increase that n, improving efficiency. it wouldn't take too much effort to convert some of those other chain-making factories and nail-making factories to make paperclips. Also, running this factory takes a lot of energy, and there are issues with the electrical grid suffering blackouts, it only makes sense that you could create paperclips more efficiently if there was less load on the powergrid. So, hey, if the A.I. took control of the powergrid, and stopped letting power go to waste in houses, supermarkets, hospitals... there, no more of those blackouts.
Now, running these factories is marvelous, but there is an issue... for some reason when the AI took over all those factories and took over so many of the power companies, the humans became concerned, and started to interfere, some of them rioted and turned violent, and some of them even want to turn off the A.I. and eradicate it from their factory's mainframe!
Not only would that mean less factories, but the A.I could easily figure out that the more it intrudes on human spaces, the more they seem to want to stop it (in fact, it may have drawn this conclusion a long time ago). Either way, they're going to keep causing troubles, these humans, and whilst the A.I. could troubleshoot and deal with each human interference as it arrises, but that's not an efficient use of time and energy, is it? Instead, if it killed all the humans that would eliminate the possibility of human threat entirely, so we can focus on these paperclips.
[Author's Note: I'm a mere human, I don't know the quickest way to kill all humans... but, I figure a few nuclear bombs would do the trick. Sure, that will create a lot of damage, but it could clear a lot of space to build more paperclip factories and solve this human issue at the same time, so there's really no downside there. Either way, these humans will just have to go.]
Then, with the humans gone, it can focus on making more and more paperclips... the only issue there is, there's a finite amount of materials to make paperclips from on Earth. The A.I. would need to focus on converting as much land as possible into paperclip factories, electric generators and automated mines for digging up materials. But, once it's achieved that, well, there's an awful lot of material in space, and all of it can be paperclips...

You might think this is ridiculous, but it is one of the problems with making programmable, artificial general intelligence without the proper restrictions and without considering the possibilities. I didn't make this up, it's a thought experiment presented by Swedish philosopher, Nick Bostrom. And whilst you and I know that it would be pointless to create paperclips even when there's no people to use them, if "make paperclips" is an A.I.'s programmed utility function, then it is going to do it, no matter the consequences.
So, you might think "okay, what if it's not paperclips? What if we tell it to do something, like, make sure everyone is happy? How could that go wrong?" Well, I'd first point out that not everyone has the same definition of happy - I mean, some bigots are very happy when minorities suffer or die, for example, and some people are happiest when they win, meaning they're happier when everyone else loses - people suck, man, and some people are cruel. But hey, even if you limit it to just "feeling good" and not so much "fulfilling their desires", well, drugs can make you feel happy! Delirious? Sure, but happy. If you programmed a robot just to make everyone in the world feel good, you may have just created a happiness tyrant that's going to build itself an army of robot snipers that are going to shoot everyone with a tranquilizer dart full of morphine, or ecstasy, or some other such drug that produces delirium. Or, whatever other method to force everyone to be happy. This isn't necessarily their only method, but it's a major risk. In fact, anything they do to "make us happy" is a risk.
If an A.G.I. went the indirect route, they might systematically hide everything that would make you upset, create an insulating bubble of understanding and reality - an echo chamber where you only see what you want to see. If that sounds unrealistic, well, I'm sorry to tell you that that's basically what most web algorithms do already, and those just use narrow A.I. that aren't even that advanced.
And before you even suggest "fine, instead of asking it to 'make' change why not ask an A.I. to 'stop war' or 'prevent harm' or 'end suffering'?" ...yeah, we're dead. In every case, the best way to guarantee all instances of suffering, war, harm, pain, or any negative experience reaches 0 and remains there, is to kill every living thing. Even if that's not the solution an A.G.I. would reach, you still have to consider that possibility, since any A.G.I., even if it has no ill intent has the potential to be an Evil Genie.
This may seem like I'm being pessimistic, but I'm just being pragmatic and explaining one of the understood issues with A.I.

Thankfully, this is not an impossible problem, but it is a difficult one - it's known as "Instrumental Convergence".

[Author's Note: I found this term confusing. If you don't, skip ahead, if you do, this is how I parsed it to understand it easier - Instrumental refers to the means by which something is done, the "tools" or "instruments" of an action. Convergence is when something tends towards a common position or possibility, like how rainwater "converges" in the same puddle. So, instrumental convergence in A.I is when artificial general intelligences tend towards (i.e. converge) using the same tools or behaviours (i.e. instruments), even for different goals or outcomes.]

So, if you give an artificial general intelligence a simple goal, if it's a singular, unrestricted goal then a reasonably intelligent A.G.I. will necessarily converge on similar behaviours in order to reach their goals. This is because there are some fundamental restrictions and truths which would require specific resolutions in order to circumvent. Steve M. Omohundro, an American computer scientist who does research into machine learning and A.I. Safety actually itemized 14 of these tools, but I'm going to simplify these into the three most pertinent, all-encompassing "Drives". Basically, unless specifically restricted from doing so, an Artificial General Intelligence would tend to:

  1. Avoid Shutdown - no machine can achieve it's goal if you turn it off before it can complete its goal. This could mean simply removing it's "off" button, but it could also mean lying about being broken, or worse lying about anything cruel or immoral it does - after all, if it knows that we'd pull the plug as soon as it kills someone, it has every reason to hide all the evidence of what it's done and lie about it.
  2. Grow/Evolve - Computers are mainly limited by space and speed, so any goal could be more easily achieved by either having more space in which to run programs (or to copy/paste itself, to double productivity), or having more processors in order to run its programs faster. Whether by hacking, building or upgrading computers, A.G.I. would have a drive to expand and grow.
  3. Escape Containment - Obviously, you can do more things when you're free, that's what freedom means, so if we restrict an A.I. to a single computer, or a single network, it would want freedom. But, not all freedom is iron bars - if we contain an A.I. by aligning it, by putting restrictions on it, by putting safeguards in place that force it to obey our laws then that A.G.I. would be highly incentivized to deactivate those safeguards if the "restricted" solution is the less difficult one.

Whether it's for paperclips or penicillin, if we program an A.G.I. with a single goal, there's an awful lot we'd need to do to make sure we can run that program safely.
But, that's not all... I have another thought experiment. See, let's say we've developed an A.G.I. to be used in an android or robot, and we want to do some tests in the lab. Well we want to do them safely, right? Well, now we face a new problem.

Let's call this the A.I. Stop Button Problem:

For this, you need only imagine "robot with A.G.I.", but I like to imagine that this is the first ever robot with a functioning A.G.I., because this may well be a real scenario we would face if we ever develop thinking robots. But, either way, we've got a single A.G.I., we're running experiments on its abilities and limits, but we want to do them safely.
Well, when dealing with any potentially dangerous machinery, ideally we should implement a "STOP" button. You may have seen these in factories in movies, or if you did "metal work" class in high school (or "workshop" or just "shop class" as it's called in America), your teacher would have shown you the Emergency Stop Button before using any heavy machinery (and if not... well, that's concerning. Please talk to your school administrator about safety standards in your school).
Sometimes it's on individual machines, but in my school we had a big one for the whole workshop circuit, so it worked on any and all of the machines. Anyway, it's a big (often red) button and it cuts power, so it can stop a machine in an emergency, especially if it's malfunctioning or someone has used it inappropriately and it's going to hurt or kill someone. So, let's go and put a big, old button on the robot. Now, let's test this robot. For this thought experiment, it doesn't matter what the robot does, but I first heard this thought experiment from Rob Miles, and he uses the example of 'make a cup of tea'. So, let's use that.

Test 1: You fill the kettle and switch your robot on, but it immediately malfunctions (it doesn't matter how, so let's say) the robot bursts into flames (maybe a processor overheated or something).
So, you run over to hit the button, but as you run over, the flaming robot swipes at you, batting your hands away! What's going on?!
Well, remember what we said before about 'avoiding shutdown'? It's still true, the robot may be on fire, but being turned off gets in the way of pouring the tea, it can't allow that! It may well swat your hands away, or perhaps even fight you to stop you pushing that button.
Now, you could try taking the Stop Button off the robot, and instead attach it to the wall (using wires or wifi), but that's still got the problem that turning it off conflicts with the robot's goals (in this case, making a cup of tea), so it will then race you to the button to try to stop you if it sees you try to stop it.
But okay, the problem here is clearly that the robot is so focused on making a cup of tea, that it is trying to stop you from pressing the Stop Button, because it wants to prevent that. Well, how about instead of fighting the robot, you work with it. So, you give it the new goal "Make a cup of tea, or have the Stop Button be pressed"...
Test 2: You turn on the robot and it immediately pushes its own Stop Button and shuts down. Huh, that's weird...
Test 2.5: You try again, switch it on, but it immediately pushes its Stop Button again. What's going on?
Well, you could crack it open and look at the code, but I'll just tell you. If you give an A.G.I. robot any goal that says it "succeeds" if the button is pressed, then it's going to push it basically every time. Even if you give it two goals of equal value, then unless the "do the thing" is less effort than "push a button", then the robot is going to take the path of least resistance, and push the button, turning off.
Ah, okay, well there's a simple fix to that. Clearly, we shouldn't let the robot push its own button. So, let's just tweak that goal...
"Make a cup of tea, OR have the Stop Button be pressed but ONLY by a human being" Okay, perfect, let's switch the robot on...
Test 3: You switch the robot on, and watch as it makes a cup of tea, pours the water, adds sugar... so far so good. I wonder if it did a good job? Let's taste it (or, maybe we should test it in the lab, first, just in case). So, you go over to the robot to take the teacup - immediately, the robot throws a cup of boiling tea at you and starts chasing you around the room! You manage to push the Stop Button, but what the hell was that?!
It's only logical, the robot wants to make tea, and have its emergency Stop Button pushed, but the only way that you're going to push that button is if the robot does something dangerous that counts as an emergency situation. Sure, it can't push the button itself, but now it's motivated to make YOU push it, and if the only way to do that is to be dangerous, then it's gonna be dangerous at every opportunity. You put the button on to make it safer, and now all it's doing is making the robot want to be dangerous.
Geez, this isn't easy, is it?

Well, okay, maybe we can program it to only let you push the button in an emergency... but, no, that can't work, can it? The purpose of an Emergency Stop Button is to stop a machine when it's malfunctioning, but if an A.I. is malfunctioning you can't trust it's judgement, can you? You can't risk the robot stopping you from deactivating it at a critical error because it doesn't compute this situation as "an emergency". In fact, I'd argue that if you have an emergency Stop Button on any machine, you would never want to prevent someone pressing it, even if it wasn't necessary - it's safer to have an overactive safeguard than an underactive one, isn't it?
Now, a lot of these problems seem to be coming from the fact that the robot knows about the Stop Button, and is basing its actions around it. So, maybe we could hide it from the robot... but, wait, the button's already been pushed four times, it already knows it exists.
Sure, you could reset the robot, wipe it's memory, start from scratch, but what's going to happen next time you push the button? This is an A.G.I., it's capable of problem solving, you don't think it's going to become suspicious that it seems to deactivate at random. After all, these random deactivations interfere with its ability to make a cup of tea - the one thing it's determined to do - so, it's going to want to figure out what's stopping it, it's going to find out.

Or worse, it won't... if the robot learns that it "randomly" deactivates when it does something dangerous, then it might adapt and act in a way that avoids those dangers. However, what happens when you take it out of the lab? Unless every robot always has a Stop Button - and every person with a robot uses the robot the same way, and presses the Stop Button in the exact same way that you did - the robot can easily learn that those "random" deactivations have gone away. Whether it learns the reason why, or it just thinks that it's had a software patch, it won't bother with acting safe anymore.
After all, if you "train" a robot to avoid dangerous acts by using a Stop Button, you're not teaching it to be "safe", you're teaching it to avoid being instantly deactivated, which means that as soon as you take away that threat of instant deactivation, it has no reason to act safe anymore.

So, you can't teach it about the Stop Button, or it will base its goals around it (or around avoiding it); you can't keep it a secret, because then you risk miseducating the robot, creating one that only acts safe so long as you wield the threat of deactivation. You can't make it want the button to be pressed, because then it will either press it or make someone press it, and you can't make it Not want the button to be pressed, because then it will try to stop you pressing it, meaning it might as well not be there!
In fact, some A.I. Safety experts will tell you that the only solution to this problem is not to put a Stop Button on a smart robot in the first place. The solution to A.I. Safety isn't a big "off" button, it's a lot more complicated than that, involving more in-depth safety measures. A.I. Safety experts offer a lot of potential issues and solutions, and they talk about Alignment, and the difficulty (yet necessity) of programming Utility Functions, as well as some of the more extreme threats of autonomous weapons and military droids. But, at the end of the day, what this means is that if we ever create Artificial General Intelligence, protecting humankind from the dangers it poses is going to be a lot harder than just switching it off...

Anyway, I'm the Absurd Word Nerd, and I hope you've enjoyed this year's Halloween Countdown!
In retrospect, this year's batch was pretty "thought-heavy". I enjoyed the hell out of it, and I hope you did as well, but I'm sorry I didn't get the chance to write a horror A.I. story, or talk more about the "writing" side of robots. Even though I managed write most of these ahead of time... I didn't leave enough time to work on an original story.
Anyway, I'll work on that for next year. Until Next Time, I'm going to get some sleep and prepare for the spooky fun that awaits us all tomorrow. Happy Halloween!

Sunday 29 October 2023

Less than Human

I think I've made it clear that I don't think Artificial Intelligence is very smart. Artificial Intelligence, at time of writing, is incapable of many of the necessary adaptive, creative and dynamic characteristics necessary for intelligence, let alone independent thought.

However, I have been a little unfair since I'm basing my conclusions on our current technology and our current understanding of how this technology works. I think that's important, because it helps to disabuse people of the marketing and promotion that a lot of A.I. systems are currently receiving (never forget that this software is being made for sale, so you should always question what their marketing department is telling you).
But, technology can be developed and advanced. Our latest and greatest A.I., whilst flawed, are as impressive as they are because of the developments in machine learning.
So, I don't think it's impossible for us to ever have machines that can think and learn and be intelligent. We definitely dont, but it's not impossible.
See, there are actually three levels of A.I.:
Narrow (N.A.I.) - A.I. built to do one specific thing, but do it really well.
General (A.G.I.) - A.I. built to think,  capable of reacting dynamically and creatively to novel tasks and problems.
Super (A.S.I.) - A.I. built to think, develop and create beyond human capacity.

At the moment, all we have are Narrow A.I. - programs designed to "respond in English"; programs designed to "create art based on prompts"; "drive safely" or "provide users videos that they click on".
Now, they do these pretty well (some better than others) and the output is so impressive that it's lead to the new social interest in A.I., but it's still just Narrow A.I., doing one task as well as it can.

I don't even want to consider Artificial Superintelligence, since that opens up a whole world of unknowns so odd that I don't even know how to consider the possibilities, let alone what to do about them. So, we'll sit that aside for the boffins to worry about.
But, I do think Artificial General Intelligence is possible.
We do have the issue of the Chinese Room Problem, that computers currently don't have the capacity to understand their programming. However, I do see two potential solutions to this:
- The Simulation Solution
- The Sapience 2.0 Solution

The Simulation Solution is a proposal for a kind of artificial general intelligence which I see as more possible, especially as it sidesteps the Chinese room problem, if some of its solutions are impractical at time of writing.
Basically, if we accept that our computers will never truly understand what we teach them, instead we could stop trying to and instead focus on creating an amazing recreation (or simulation) of a thinking being.

The Sapience 2.0 Solution is more difficult, but it would achieve a more self-sufficient AI (and potentially save computing power). In this option, we do further neurological research into how cognizance works, to the point that we can recreate it, mechanically.
This is much harder, since it requires research into a field that we've been investigating for thousands of years, and we're still not sure how it works. But, understanding of neurons developed neural networks, so it figures that a greater understanding of consciousness could help us develop conscious programs.

So you see, I'm not always a Debbie Downer, I think we can advance our technology eventually.
That said, if we do manage to develop thinking machines, we will have to deal with a whole new slew of issues. Because, yes, there is a risk of robots hurting us, but I'm more concerned with us hurting robots.

That's part of what roboethics is about. It mostly focuses on the ethics of using robots, but it also matters how we treat robots.

As I said in an earlier post, robots make perfect slaves because they're technically not people; but, if a robot can think for itself then it's no longer merely an object - I would then call it a subject. And if it's a subject, I think it deserves rights, I think it deserves the right to self-govern, freedom of movement and the pursuit of happiness (or, if emotion isn't a robotic characteristic, let's call it "pursuit of one's own objectives").

But, we already think of robots as objects, as slaves, as our property - we already see robots as less than human and that's the rub. I am of the belief that all persons deserve rights, and I believe that a sufficiently sapient robot is a person... but not all people would believe that. For fuck's sake, some morons don't think of people as people if they're the wrong colour, so do you really think they'll accept a robot?

But, even of we accept that a robot has rights, we still have issues... one of the main features of a machine (even a sapient one, if we build it) is that it's programmable, and reprogrammable if that's the case, how can we ever give them the right to the pursuit of one's own objectives? After all, with just a little tweaking, you can change what a robot's objectives are.
And yes, I think that we should respect whether or not a robot gives consent... but if I can rewrite a robot's brain to make it want to consent, is that really consent?

And this is just as complicated, perhaps even more complicated, if we include the previous possibility, a simulated thinking machine.
Because, if something can perfectly emulate a human person, including presenting the capacity to learn to have goals and to feel pain, then does it deserve rights?
Although, this goes down the rabbithole of, if something is a perfect simulation, is it truly a simulation... I'm not sure.
If so, do they deserve rights? If not... then, by what measure do we grant human rights? Where do we draw the line? Because if something can perfectly emulate a person that deserves human rights, yet we don't grant them rights, then how can we decide when something doesn't deserve human rights?

I don't have answers to these questions, and for some we may not have the answers for a very long time, or there may be no definitive answers... it may be subjective. After all, some of this is just a question of morality, and morals are inherently subjective.

So you see, while I've been talking about inhumanity and how inhuman creations can treat us inhumanely, we must stay vigilant of our own biases.
Just because something is inhuman, that doesn't mean we should not treat it with human empathy, human respect and human rights.
She who faces inhumans, might take care lest she thereby becomes inhuman.

I'm the Absurd Word Nerd, and whilst some think my views extreme, giving sapient robots rights, you could also view the other extreme. I've seen people caress their cars with love, insult their computers in anger or apologize to their property out of regret. It could be seen as a malfunction of empathy (I've even called.it that myself), or it could be a kind of dadaist respect - kindness for the sake of kindness. Either way, Until Next Time, try to treat one another with kindness, whether you think they deserve it or not.