Collateral Murder

In this post, I’m going to be analysing a video that popped up on my Facebook feed earlier this week; but, before I link it, it’s worth giving you fair warning that the content is pretty graphic and not to be taken lightly. The video in question is nothing especially new (it was released by Wikileaks under the title ‘Collateral Murder’ back in 2010), and deals with a snapshot of the Iraq war; namely, the killing of a group of apparently mostly innocent civilians by the crew of a US Army Apache helicopter gunship.

This particular video tells the story of these events through the words of Ethan McCord, a soldier who was on the ground at the time of the incident. He begins with some mention of the tactics employed by the army during his time in Iraq, so my analysis will begin there. McCord talks of how, whenever an IED went off, soldiers in his battalion were ordered to ‘kill every mother****er on the street’, issuing 360-degree rotational fire to slaughter every person, civilian and insurgent alike, unfortunate enough to be in the area at the time; and how, even though this often went against the morals of the soldiers concerned, a failure to comply with that order would result in the NCOs (Non-Commissioned Officers, the experienced enlisted soldiers who lead a platoon’s squads) ‘mak[ing] your life hell’. The death toll and slaughter this practice must have caused can hardly be imagined, but McCord does his best to describe it; he talks of ‘the destruction of the Iraqi people’, of normal, innocent people being massacred just for being in the wrong place at the wrong time. McCord also talks about ‘Ranger Dominance’ operations, in which a couple of companies walked unprotected through New Baghdad (a district of the larger city of Baghdad) to perform counter-insurgency tasks. An example he gives is ‘Knock-in searches’ (I think that’s the phrase he uses), in which soldiers knock on doors, or break in, in order to search for potentially insurgency-related material.

The reason for this behaviour, and for the seemingly nonsensical, murderous missions these soldiers were asked to perform, comes down, basically, to the type of war being fought. Once Saddam Hussein had been removed from power, many in the US government and army thought the war would be over before very long; just a matter of cleaning up a few pockets of resistance. What they didn’t count on was a mixture of their continued presence in the country, their bad behaviour and the sheer dedication of certain diehard Hussein loyalists, and before very long coalition forces found themselves combating an insurgency. Insurgencies aren’t like ‘traditional’ warfare; there are no fronts, no battle lines, no easily identifiable cases of ‘good guys here, bad guys over there’. Those kinds of wars are easy to fight, and there’s no way that the military juggernaut of the US army is ever going to run into trouble fighting one in the foreseeable future.

Insurgencies are a different kettle of fish altogether, for two key (and closely related) reasons. The first is that the battle is not fought over land or resources, but over hearts and minds- an insurgency is won when the people think you are the good guys and the other lot are the bad guys, simply because there is no way to ‘restore stability’ to a country whilst a few million people are busy throwing things at your soldiers. The second is that insurgents are not to be found in a clearly defined and controlled area, but hiding all over the place; in safe houses, bunkers, cellars, sewers and even in otherwise innocuous houses and flats. This means that crushing an insurgency does not depend on how many soldiers you have versus the bad guys, but on how many soldiers you have per head of population; the more civilians there are, the more places there are to hide, and the more people you need to smoke the insurgents out.

Conventional wisdom apparently has it that you need roughly one soldier per ten civilians in order to successfully crush an insurgency within a reasonable time frame, or at all if the other side are properly organised; if that sounds like a ridiculous ratio, then now you know why it took so long for the US to pull out of Iraq. I have heard it said that in the key areas of Iraq, coalition forces peaked at one soldier per hundred civilians, which simply is not enough to cover all the required areas fully. This left them with two options: either concentrate only on highly select areas and let the insurgents run riot everywhere else (and most likely sneak in behind their backs when they tried to move on somewhere else), or spread themselves thin and try to cover as much ground as possible with minimal numbers and control. In the video, we see the consequences of the second approach, with US forces attempting to rely on their air support to provide some semblance of intelligence and control over an area whilst soldiers are spread thin and vulnerable, often totally unprotected from mortar attack, snipers and IEDs. This basically means that soldiers cannot rely on extensive support, or backup, or good intel, or on performing missions in a safe, secure environment; their only way of identifying militant activity is, basically, to walk right into it, either intentionally (hence the Knock-in Searches) or simply by accident. In the former case, it is generally simple enough to apprehend anyone found with incriminating material, but successfully discovering an insurgent via a deliberate search is highly unlikely; it is for this reason that the army don’t take no for an answer in these searches, and will often turn a house upside down in an effort to maximise their chance of finding something.
In the latter case, identifying and apprehending an individual troublemaker is no easy task, so the army clearly decided (in their infinite wisdom) that the only way to have a chance of getting the insurgent is to just annihilate everyone and everything in the immediate vicinity.
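To put some rough numbers on that ratio argument, here’s a back-of-the-envelope sketch; the six-million population figure is my own illustrative assumption (very roughly Baghdad-sized), not a figure taken from the video:

```python
# Back-of-the-envelope counterinsurgency force-ratio arithmetic.
# The population figure below is an illustrative assumption, not a sourced number.
population = 6_000_000

soldiers_needed = population // 10   # conventional wisdom: ~1 soldier per 10 civilians
soldiers_actual = population // 100  # the reported peak: ~1 soldier per 100 civilians

shortfall_factor = soldiers_needed // soldiers_actual
print(soldiers_needed, soldiers_actual, shortfall_factor)  # 600000 60000 10
```

In other words, on those assumptions the coalition would have been an order of magnitude short of the conventional requirement, which is exactly the gap the two bad options above were trying to paper over.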

That’s the reasoning used by the US forces in this situation, and it’s fair to say that in this regard they were rather stuck between a rock and a hard place. However, that doesn’t negate the fact that these tactics are, in the context of an insurgency operation, completely stupid and bull-headed. Remember, an insurgency operation aims, as military officials constantly tell us, to win hearts and minds, to get the civilian population on your side; that’s half the reason you’re not permitting your soldiers to show ‘cowardice’. But, at the same time and in direct contrast to the ‘hearts and minds’ principle, this particular battalion commander chose to have his soldiers battering down doors and shooting civilians at the first sign of trouble. Unfortunately, this is what happens when wars are badly managed and there are not enough men on the ground to do the job; stupid things become sanctioned because they seem like the only way forward. The results are shown quite plainly in McCord’s testimony: soldiers of the 1st Infantry, ‘the toast of the army’, men who ‘pride themselves on being tougher than anyone else’, are getting genuinely scared of going out on missions, fear welling up in their eyes as they wander unprotected through dangerous streets, praying they don’t come across any IEDs or snipers.

And that’s just the tactics; next time, I will get on to the meat of the video. The incident that Wikileaks put on show for the world to see…


Zero Dark Thirty

Well, I did say I wanted to make film reviewing more of a regular thing…

The story of Zero Dark Thirty’s production is both a maddeningly frustrating and an ever so slightly hilarious one; the original concept, about an intelligence officer’s incessant, bordering on obsessive, quest to find Osama bin Laden, was first brought up some time around 2010, and the screenplay was finished in the spring of 2011. The film’s centrepiece was the Battle of Tora Bora, which took place in late 2001; American and allied forces had been on the ground for just a few weeks before the Taliban government and political system was in total disarray. Al-Qaeda were on the run, and some quarters thought the war would be pretty much over within a few months, apart from a few troops left over to smooth the new government’s coming into power (yeah, that really worked out well). All the intelligence (and it was good, too) pointed to bin Laden hiding in the mountains of Tora Bora, near the Pakistani border, and after a fierce bombing campaign the net was tightening. However, allied Pakistani and Afghan militia (who some believe were on the Al-Qaeda side) requested a ceasefire so that some dead & wounded might be evacuated and prisoners taken; a move reluctantly accepted by the Americans, who then had to sit back as countless Al-Qaeda troops, including bin Laden, fled the scene.

Where was I? Oh yes, Zero Dark Thirty.

This was originally planned to be the central event of the film, but just as filming was about to commence the news broke that bin Laden had, in fact, been killed; this, whilst it did at least allow the filmmakers to produce a ‘happy’ ending, required that the whole script be torn up and rewritten. However, despite this, the tone and themes of the film have managed to remain true to the original morally ambiguous, chaotic story, despite including no footage of any events prior to 2003. We still have the story of the long, confused and tortured quest of the small team of CIA operatives whose sole job it was to find and kill bin Laden, and it honestly doesn’t feel like the story would have been much different were it to end with bin Laden still alive. And tortured is the word; much has been made of the film’s depiction of torture, some deploring the fact that it is shown to get vital information and arguing that the film ‘glorifies’ it, whilst others point out that the key information that finally revealed bin Laden’s location was found after the newly-inaugurated President Obama closed down the ‘detainee’ program. Personally, I think it’s depicted… appropriately. This is a very, very real film, telling a real story about real events and the work of real people, even if the specifics aren’t gospel truth (I mean, there’s only so much the CIA are going to be willing to tell the world), and nobody can deny that prisoners were tortured during the first few years of the war. Or, indeed, that the practice almost certainly did give the CIA information. If anything, that’s the point of the torture debate; it’s awful, but it works, and which side of the debate you fall on really depends on whether the latter is worth the former. In any case, it is certainly revealing that the film chooses to open with a torture scene, revealing the kind of pulls-no-punches intent that comes to define it.

There are the depictions of the chaos of the intelligence process, the web of indistinguishable truths and lies, the hopes pinned on half-leads, all amid plenty of timely reminders of just what is at stake; the attacks, both the big ones that everyone’s heard of and can relate to and the littler ones that hide away in the corners of the media reporting that manage to mean so, so much more to our chosen characters. Of particular note is the final attack on bin Laden’s compound, in one of the least ‘Hollywood’ and most painstakingly accurate portrayals of a military operation ever put onto the big screen. It also manages to come across as totally non-judgemental; torture, terrorism and even the killing of one of western culture’s biggest hate figures of the last decade are presented in exactly the same deadpan fashion. In another film, neutrality over contentious issues can come across as a weak cop-out; here it only adds to the realism.

The most obvious comparison to Zero Dark Thirty is The Hurt Locker, director Kathryn Bigelow’s previous ultra-realistic story about the War on Terror, and it is fair to say that what The Hurt Locker was to soldiers, Zero Dark Thirty is to intelligence. However, whilst The Hurt Locker was very much about its characters and their internal struggles, with the events of the film acting more as background than anything else, Zero Dark Thirty is instead dedicated to its events (to say ‘story’ would rather overplay the interconnectedness and coherence of the whole business). Many characters are reduced to devices, people who do the stuff the film is talking about, and many of the acting performances are… unchallenging; nothing against the actors concerned, just to say that this is very much Bigelow’s film rather than her characters’. The shining exception is Jessica Chastain as our central character, Maya, who manages to depict her character’s sheer drive and unflinching determination with outstanding aplomb, as well as showing her human side (in its brief appearances) in both touching and elegant fashion.

For all these reasons and more, I can wholeheartedly recommend Zero Dark Thirty as something people should try and see if they can; what I cannot do, however, is really enjoy it. This isn’t because it isn’t fun, for lots of great films aren’t, but because it doesn’t really stir any great emotions within me, despite asking its fair share of moral questions about war. Maybe it’s because I tend to be very analytical over such matters, but I’m inclined to feel that the film has actually taken its neutrality and frankness of delivery a little too far. With no really identifiable, consistent, empathetic characters beyond Maya, our emotional investment in the film is entirely dependent on our emotional investment in the subject matter, and by presenting that subject matter so neutrally the film fails to really engage people without a strong existing opinion on it. I have heard this film described as a Rorschach test for people’s opinions on the war and the techniques used in it; maybe my response to this film just reveals that I don’t really have many.

“The most honest three and a half minutes in television history”

OK, I know this should have been put up on Wednesday, but I wanted to get this one right. Anyway…

This video appeared on my Facebook feed a few days ago, and I have been unable to get it out of my head since. It is, I am told, the opening scene of a new HBO series (The Newsroom), and since HBO’s most famous product, Game of Thrones, is famously the most pirated TV show on earth, I hope they won’t mind me borrowing another three minute snippet too much.

OK, watched it? Good, now I can begin to get my thoughts off my chest.

This video is many things; to me, it is quite possibly one of the most poignant and beautiful things I have seen, and in many ways the best summary of greatness ever put to film. It is inspiring, it is blunt, it is great television. It is not, however, “The most honest three and a half minutes of television, EVER…” as claimed in its title; there are a lot of things I disagree with in it. For one thing, I’m not entirely sure of our protagonist’s reasons for saying ‘liberals lose’. If anything, the last century of our existence can be viewed as one long series of victories for liberal ideology; women have been given the vote, homosexuality has been decriminalised, racism has steadily been dying out, gender equality is advancing year by year, and only the other day the British government legalised gay marriage. His viewpoint may have something to do with features of American politics that I’m missing, particularly his reference to the NEA (an organisation which I do not really understand), but even so. I’m basically happy with the next few seconds; I’ll agree that claiming to be the best country in the world based solely on rights and freedoms is not something that holds water in our modern, highly democratic world. Freedom of speech, information, press and so on are, to most eyes, prerequisites for any country wishing to have any claim to true greatness these days, rather than the scale against which such claims are judged. I’m not entirely sure why he puts so much emphasis on the idea of a free Australia and Belgium, but hey ho.

Now, blatant insults of intelligence directed towards the questioner aside, we then start to quote statistics- always a good foundation point to start from in any political discussion. I’ll presume all his statistics are correct, so plus points there, but I’m surprised that he apparently didn’t notice that one key area America does lead the world in is size of economy; China is still, much to its chagrin, in second place on that front. However, I will always stand up for the viewpoint that economy does not equal greatness, so I reckon his point still stands.

Next, we move on to insulting 20-year-old college students, not too far off my own personal social demographic; as such, this is a generation I feel I can speak on with some confidence. This is probably the biggest problem I have with anything said during this little clip; no justification is offered as to why this group is the “WORST PERIOD GENERATION PERIOD EVER PERIOD”. Plenty of reasons for this opinion have been suggested in the past by other commentators, and these may or may not be true; but making assumptions and insults about a person based solely on their date of manufacture is hardly the most noble of activities. In any case, in the age of the internet and mass media, a lot of the world’s problems, with the younger generation in particular, get somewhat exaggerated… but no Views here, bad Ix.

And here we come to the meat of the video, the long, passionate soliloquy containing all the message and poignancy of the piece, with suitably beautiful backing music. But what he comes out with could still be argued against by an equally vitriolic critic; no time frame of when America genuinely was ‘the greatest country in the world’ is ever given. Earlier, he attempted to justify non-greatness by way of statistics, but his choice of language in his ‘we sure as hell used to be great’ passage appears to hark back to the days of Revolutionary-era and Lincoln-era America, when America was led by the ‘great men’ he refers to. But if we look at these periods, the statistics don’t add up anywhere near as well; America didn’t become the world-dominating superpower with the stated ‘world’s greatest economy’ it is today until after making a bucketload of money from the two World Wars (America only became, in the words of then-President Calvin Coolidge, ‘the richest country in the history of the world’ during the 1920s). Back in the periods when American heroes were born, America was a relatively poor country, consisting of vast expanses of wilderness, hardline Christian motivation, an unflinching belief in democracy and an obsession with the American spirit of ‘rugged individualism’; none of this manifested itself in any super-economy until America became able to loan everyone vast sums of money to pay off war debts. And that’s not all; he makes mention of ‘making war for moral reasons’, but of the dozens of wars America has fought only two are popularly thought of as being morally motivated. These were the American War of Independence, which was declared less for moral reasons and more because the Americans didn’t like being taxed, and the American Civil War, which ended with the southern states being legally allowed to pass the ‘Jim Crow’ laws that limited black rights until the 1960s; here they hardly ‘passed laws, struck down laws for moral reasons’.
Basically, there is no period of history in which his justifications for why America was once ‘the greatest country in the world’ all stand up at once.

But this, to me, is the point of what he’s getting at; during his soliloquy, a historical period of greatness is never defined so much as a model and hope for greatness is presented. Despite all his earlier quoting of statistics and ‘evidence’, they are not what makes a country great. Money, and the power that comes with it, are not defining features of greatness, but just stuff that makes doing great things possible. The soliloquy, intentionally or not, aligns itself with the Socratic idea of justice: that a just society is one in which every person concerns themselves with doing their own, ideally suited, work, and does not try to be a busybody doing someone else’s job for them. Exactly how Socrates arrives at this conclusion is somewhat complex; Plato’s Republic gives the full discourse. This idea is applied to political parties during the soliloquy; defining ourselves by our political stance is a self-destructive idea, meaning all our political system ever does is bicker at itself rather than just concentrating on making the country a better place. Also mentioned is the idea of ‘beating our chest’, the kind of arrogant self-importance that further prevents us from seeking to do good in this world, and the equally destructive habit of belittling intelligence, which prevents us from making the world a better, more righteous place, full of the artistic and technological breakthroughs that make our world so awesome to live in. For, as he says so eloquently, what really makes a country great is to be right. To be just, to be fair, to mean something and above all to stand for something. To not be obsessed with ourselves, or with other people’s business; to have rightness and morality as the priority for the country as a whole. To make sacrifices for the greater good, to back our promises and ideals and to care, above all else, simply for what is right.

You know what, he put it better than I ever could. I’m just going to quote him straight up:

“We stood up for what was right. We fought for moral reasons, we passed laws, struck down laws for moral reasons, we waged wars on poverty not poor people. We sacrificed, we cared about our neighbours, we put our money where our mouths were and we never beat our chest. We built great big things, made ungodly technological advances, explored the universe, cured diseases and we cultivated the world’s greatest artists and the world’s greatest economy. We reached for the stars, acted like men- we aspired to intelligence, we didn’t belittle it, it didn’t make us feel inferior. We didn’t identify ourselves by who we voted for in the last election and we didn’t scare so easy.”

Maybe his words don’t quite match the history; it honestly doesn’t matter. The message of that passage embodies everything that defines greatness, ideas of morality and justice and doing good by the world. That statement is not harking back to some mythical past, but a statement of hope and ambition for the future. That is beauty embodied. That is greatness.

The Offensive Warfare Problem

If life has shown itself to be particularly proficient at anything, it is fighting. There is hardly a creature alive today that does not employ physical violence in some form to get what it wants (or defend what it has) and, despite a vast array of moral arguments against that being a good idea (I must do a post on the prisoner’s dilemma some time…), humankind is, of course, no exception. Unfortunately, our innate inventiveness and imagination as a race means that we have been able to let our brains take our fighting to the next level, with consequences that have become ever more destructive as time has gone by. With the construction of the first atomic bombs, humankind had finally got to where it had threatened to for so long- the ability to literally wipe out planet earth.

This insane level of offensive firepower is not just restricted to large-scale big guns (the kind that have been used for political genital comparison since Napoleon revolutionised the use of artillery in warfare)- perhaps the most interesting and terrifying advancement in modern warfare and conflict has been the increased prevalence and distribution of powerful small arms, giving ‘the common man’ of the battlefield a level of destructive power that would be considered hideously overwrought in any other situation (or, indeed, on the battlefield of 100 years ago). The epitome of this effect is, of course, the Kalashnikov AK-47, whose cheapness and insane durability have rendered it invaluable to rebel groups and other hastily thrown-together armies, giving them an ability to kill stuff that makes them very, very dangerous to the population of wherever they’re fighting.

And this distribution of such awesomely dangerous firepower has begun to change warfare; to explain how, I need to go on a rather dramatic detour. The goal of warfare has always, basically, centred on the control of land and/or population, and as Frank Herbert makes so eminently clear in Dune, whoever has the power to destroy something controls it, at least in a military context. In his book Ender’s Shadow (I feel I should apologise for all these sci-fi references), Orson Scott Card makes the entirely separate point that defensive warfare makes no practical sense in the context of space warfare. For a ship & its weapons to work in space warfare, he rather convincingly argues, the level of destruction it must be able to deliver would have to be so large that, were it ever to get within striking distance of earth, it would be able to wipe out literally billions- and, given the distances over which any space war must be conducted, mutually assured destruction simply wouldn’t work as a defensive strategy, as it would take far too long for any counterstrike attempt to happen. Therefore, any attempt to base one’s war effort around defence, in a space warfare context, is simply too risky, since one ship (or even a couple of stray missiles) slipping through in any of the infinite possible approach directions to a planet would be able to cause uncountable levels of damage, leaving the enemy with a demonstrable ability to destroy one’s home planet and, thus, control over it and the tactical initiative. Thus, it doesn’t make sense to focus on a strategy of defensive warfare, and any long-distance space war becomes a question of getting there first (plus a bit of luck).

This is all rather theoretical and, since we’re talking about a bunch of spaceships firing missiles at one another, not especially relevant when considering the realities of modern warfare- but it does illustrate a point, namely that as offensive capabilities increase, so do the stakes should defensive systems fail. This was spectacularly, and horrifyingly, demonstrated on 9/11, when a handful of fanatics armed with little more than box cutters were able to kill nearly 3,000 people, destroy the World Trade Center and irrevocably change the face of the world economy and the world in general. And that came from only one mode of attack; despite all the advances in airport security that have been made since then, there is still ample opportunity for an attack of similar magnitude to happen- a terrorist organisation, we must remember, only needs to get lucky once. This means that ‘normal’ defensive methods, especially since they would have to be enforced throughout our everyday lives (given the format that terrorist attacks typically take), cannot be applied to this problem, and we must rely almost solely on intelligence efforts to try and defend ourselves.

This business of defence and offence being in imbalance is not a phenomenon solely confined to the modern age. Once, wars were fought solely with clubs and shields, creating a somewhat balanced case of attack and defence: attack with the club, defend with the shield. If you were good enough at defending, you could survive; simple as that. However, some bright spark then came up with the idea of the bow, and suddenly the world was in imbalance- even if an arrow couldn’t pierce an animal skin stretched over some sticks (which, most of the time, it could), it was fast enough to appear from nowhere before you had a chance to defend yourself. Thus, our defensive capabilities could not match our offensive ones. Fast forward a millennium or two and we come to a similar situation; now we defended ourselves against arrows and such by hiding in castles, behind giant stone walls and other fortifications that were near-impossible to break down, until some smart alec realised a use for that weird black powder invented in China. The cannons subsequently invented could bring down castle walls in a matter of hours or less, and once again they could not be matched from the defensive standpoint- our only options now lay in hiding somewhere the artillery couldn’t reach us, or running out of the way of these lumbering beasts. As artillery technology advanced throughout the ensuing centuries, this latter option became less and less feasible, as the sheer volume of high-explosive weaponry trained on opposing armies made them next-to-impossible to fight in the field; but artillery was still difficult to aim accurately at well dug-in soldiers, and from these starting conditions we ended up with the First World War.

However, this is not a direct parallel of the situation we face now; today we deal with the simple and very real truth that a western power attempting to defend its borders (the situation is somewhat different when occupying somewhere like Afghanistan, but that can wait until another time) cannot rely on simple defensive methods alone- even if every citizen were an army-trained veteran armed with a full complement of sub-machine guns (which they quite obviously aren’t), it wouldn’t be beyond the wit of a terrorist group to sneak a bomb in somewhere destructive. Right now, these methods may only be capable of killing or maiming hundreds or thousands at a time; tragic, but perhaps not capable of restructuring a society. But as our weapon systems get ever more advanced, and our more effective systems get ever cheaper and easier for fanatics to get hold of, the destructive power of lone murderers may increase dramatically, and with deadly consequences.

I’m not sure that counts as a coherent conclusion, or even if this counts as a coherent post, but it’s what y’got.

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of artificial intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument that is being thrown at a steadily wider range of high-level science nowadays (cloning, gene analysis and editing, even the synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to ‘play god’ and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has been slowly sculpted and built upon by nature over millions of years of evolution to find the optimal solution for self-preservation and general well-being- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, to do the next big thing before everybody else, means that nobody is stopping to think of the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it’s probably not helped matters that it is typically put around by the Church- never noted as getting on particularly well with scientists (they just won’t let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any other of the myriad sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the biggest other dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? Deaths caused by robots have so far been rare industrial accidents (the first occurred at a car manufacturing plant), but if such an event were ever to involve a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being ‘switched off’? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind were ever able to equal or even better our mental faculties, they could one day overthrow us as a race. This is a very real issue when confronting the stereotypical war robot- an invincible metal machine capable of wanton destruction on a par with a medium-sized tank, easily able to repair itself and make more of its kind. It’s an idea that is reasonably unlikely to ever become real, but it raises another one- more likely to happen, more likely to build unnoticed, and far, far scarier. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. Compared to a machine’s ability to churn out hundreds of copies of itself from nothing but a blueprint and a design, the human reproductive system suddenly looks fragile and inefficient. Compared to tough, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly the organic world doesn’t seem quite so amazing, and certainly seems rather more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.

Artificial… what, exactly?

OK, time for part 3 of what I’m pretty sure will finish off as 4 posts on the subject of artificial intelligence. This time, I’m going to branch off-topic very slightly- rather than just focusing on AI itself, I am going to look at a fundamental question that the hunt for it raises: the nature of intelligence itself.

We all know that we are intelligent beings, and thus the search for AI has always been focused on attempting to emulate (or possibly better) the human mind and our human understanding of intelligence. Indeed, when Alan Turing first proposed the Turing test (see Monday’s post for what this entails), he was specifically trying to emulate human conversational and interaction skills. However, as mentioned in my last post, the modern-day approach to creating intelligence is to try and let robots learn for themselves, in order to minimise the amount of programming we have to give them ourselves and thus to come close to artificial, rather than programmed, intelligence. However, this learning process has raised an intriguing question- if we let robots learn for themselves entirely from base principles, could they begin to create entirely new forms of intelligence?
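To make the ‘learning rather than programming’ idea concrete, here’s a minimal, hypothetical sketch in Python- nothing to do with any real robotics system, just a toy agent that is told nothing about which of three levers pays best and must work it out purely from its own trials (the payoff values and parameters are invented for illustration):

```python
import random

# Toy 'learning for itself': the agent starts knowing nothing about the
# levers and builds its own payoff estimates entirely from experience.
random.seed(1)

TRUE_PAYOFF = [0.1, 0.8, 0.3]  # hidden from the agent; hypothetical values

def learn(trials: int = 2000, epsilon: float = 0.1) -> int:
    estimates = [0.0, 0.0, 0.0]  # the agent's learned beliefs
    counts = [0, 0, 0]
    for _ in range(trials):
        # Mostly exploit the current best estimate, sometimes explore at random.
        if random.random() < epsilon:
            arm = random.randrange(3)
        else:
            arm = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return max(range(3), key=lambda a: estimates[a])

print(learn())  # with enough trials, this is almost always lever 1
```

Nobody ever tells the agent that lever 1 is best- that knowledge emerges from its own trial and error, which is the essence of the modern approach the post describes.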

It’s an interesting idea, and one that leads us to question what, on a base level, intelligence actually is. When you think about it, you begin to realise the vast scope of ideas that ‘intelligence’ covers, and that is speaking merely from the human perspective. From emotional intelligence to sporting intelligence, from creative genius to pure mathematical ability (where computers themselves excel far beyond the scope of any human), intelligence is an almost pointlessly broad term.

And then, of course, we can question exactly what we mean by a form of intelligence. Take bees, for example- on its own, a bee is a fairly useless creature that is most likely to just buzz around a little. Not only is it useless, it is also very, very dumb. A hive, however, where bees are not individuals but a collective, is a very different matter- the coordinated movements of hundreds and thousands of bees can not only build huge nests and turn sugar into the liquid deliciousness that is honey, but can also defend the nest from attack, ensure the survival of the queen at all costs, and make sure there is always someone to deal with the newborns despite the constant activity of the environment surrounding them. Many corporate or otherwise collective structures can claim to work similarly, but few are as efficient or versatile as a beehive- and more astonishingly, bees can exhibit an extraordinary range of intelligent behaviour as a collective, beyond what an individual could even comprehend. Bees are the archetype of a collective, rather than individual, mind, and nobody is entirely sure how such a structure is able to function as it does.
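How dumb individual rules can add up to smart collective behaviour can be sketched in a few lines of Python- a toy model, with entirely invented sites, scores and parameters, not a claim about how real bees work. Each simulated ‘bee’ knows only its own choice of nest site and occasionally copies a random neighbour whose site scores higher; no bee ever sees the global picture, yet the swarm reliably settles on the best site:

```python
import random

# Toy swarm: each 'bee' follows one dumb local rule- copy a random
# neighbour's nest site whenever that site scores higher than its own.
# No bee knows the global best, yet the swarm converges on it.
random.seed(0)

SITE_QUALITY = [0.2, 0.5, 0.9, 0.4]   # hypothetical nest-site scores
N_BEES = 100

def simulate(steps: int = 1000) -> int:
    choices = [random.randrange(len(SITE_QUALITY)) for _ in range(N_BEES)]
    for _ in range(steps):
        i, j = random.randrange(N_BEES), random.randrange(N_BEES)
        if SITE_QUALITY[choices[j]] > SITE_QUALITY[choices[i]]:
            choices[i] = choices[j]   # adopt the neighbour's better site
    return max(set(choices), key=choices.count)   # the swarm's consensus site

print(simulate())   # almost always 2: the highest-quality site wins
```

The ‘intelligence’ here lives nowhere in particular- it is a property of the interactions, not of any individual agent, which is precisely what makes collective minds so hard to pin down.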

Clearly, then, we cannot hope to pigeonhole or quantify intelligence as a single measurement- people may boast of their IQ scores, but a single number cannot hope to represent their intelligence across the full spectrum. Now, consider all these different aspects of intelligence, all the myriad ways that we can be intelligent (or not). And ask yourself- have we covered all of them?

It’s another compelling idea- that there are some forms of intelligence out there that our human forms and brains simply can’t envisage, let alone experience. What these may be like… well, how the hell should I know- I just said we can’t envisage them. This idea that we simply won’t be able to understand what they could be like, even if we ever encounter them, can be a tricky one to get past (a similar problem is found in quantum physics, whose violation of common logic takes some getting used to), and it is a real issue that if we do ever come across these ‘alien’ forms of intelligence, we won’t be able to recognise them for this very reason. However, if we are able to do so, it could fundamentally change our understanding of the world around us.

And, to drag this post kicking and screaming back on topic, our current development of AI could be a mine of potential for doing this (albeit a mine in which we don’t know what we’re going to find, or if there is anything to find at all). We all know that computers are fundamentally different from us in a lot of ways, and in fact it is very easy to argue that trying to force a computer to be intelligent beyond its typical, logical parameters is rather a stupid task, akin to trying to use a hatchback to tow a lorry. In fact, quite a good way to think of computers or robots is as animals, only adapted to a different environment from ours- one in which their food comes via a plug and information comes to them as raw data and numbers… but I am wandering off-topic once again. The point is that computers have, for as long as the hunt for AI has gone on, been our vehicle for attempting to reach it- and only now are we beginning to fully understand that they have the potential to do so much more than just copy our minds. By pushing them onward to the point they have currently reached, we are starting to turn them not into an artificial version of ourselves, but into an entirely new concept, an entirely new, man-made being.

To me, this is an example of true ingenuity and skill on the part of the human race. Copying ourselves is no more inventive, on a base level, than making iPod clones or the like. Inventing a new, artificial species… like it or loathe it, that’s amazing.

The Chinese Room

Today marks the start of another attempt at a multi-part set of posts- the last lot were about economics (a subject I know nothing about), and this one will be about computers (a subject I know none of the details about). Specifically, over the next… however long it takes, I will be taking a look at the subject of artificial intelligence- AI.

There has been a long series of documentaries on the subject of robots, supercomputers and artificial intelligence in recent years, because it is a subject which seems to be in the paradoxical state of continually advancing at a frenetic rate while simultaneously finding itself getting further and further away from the dream of ‘true’ artificial intelligence- which, as we begin to understand more and more about psychology, neuroscience and robotics, becomes steadily more complicated and difficult to attain. I could spend a thousand posts on the details if I so wished, because it is also one of the fastest-developing areas of engineering on the planet, but that would just bore me and become increasingly repetitive for anyone who ends up reading this blog.

I want to begin, therefore, by asking a few questions about the very nature of artificial intelligence, and indeed the subject of intelligence itself, beginning with a philosophical problem that, when I heard about it on TV a few nights ago, was very intriguing to me- the Chinese Room.

Imagine a room containing only a table, a chair, a pen, a heap of paper slips, and a large book. The door to the room has a small opening in it, rather like a letterbox, allowing messages to be passed in or out. The book contains a long list of phrases written in Chinese and, below each one, the appropriate response (also in Chinese characters). Imagine we take a non-Chinese speaker and place them inside the room, then station a fluent Chinese speaker outside. The speaker writes a phrase or question (in Chinese) on a slip of paper and passes it through the letterbox. The person inside has no idea what this message means, but by using the book they can identify the phrase, copy out the appropriate response, and pass it back through the letterbox. This process can be repeated until a conversation begins to flow- the difference being that only one of the participants actually knows what it’s about.
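The mechanics of the room amount to nothing more than a lookup table, which a few lines of Python make deliberately obvious (the phrases here are invented stand-ins for the book’s entries):

```python
# A toy 'Chinese Room': the operator matches each incoming slip against a
# rule book and copies out the listed response, understanding nothing.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很晴朗。",   # "Nice weather today." -> "Yes, very sunny."
}

def operator(slip: str) -> str:
    """Return the book's response for a slip; no comprehension involved."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operator("你好吗？"))  # prints 我很好，谢谢。
```

From the outside, the replies are perfectly fluent Chinese; on the inside, there is only string matching- which is exactly the gap the thought experiment is driving at.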

This thought experiment is a direct challenge to the somewhat crude test first proposed by the mathematical genius and codebreaker Alan Turing in 1950, designed to test whether a computer could be considered a truly intelligent being. The Turing test postulates that if a computer were ever able to conduct a conversation with a human so well that the human in question would have no idea that they were not talking to another human, but rather to a machine, then it could be considered intelligent.

The Chinese Room problem questions this idea and, as it does so, raises a fundamental question about whether a machine such as a computer can ever truly be called intelligent, or be said to possess intelligence. The point of the thought experiment is to demonstrate that it is perfectly possible to appear intelligent, by conducting a normal conversation with someone, whilst simultaneously having no understanding whatsoever of the situation at hand. Thus, while a machine programmed with the correct response to any eventuality could converse completely naturally and appear perfectly human, it would have no real consciousness. It would not be truly intelligent; it would merely be running an algorithm, obeying the instructions in its electronic brain, working simply from the intelligence of the person who programmed in its orders. So, does this constitute intelligence, or is a consciousness necessary for something to be deemed intelligent?

This really boils down to a question of opinion- if something acts intelligent and is intelligent for all functional purposes, does that make it intelligent? Does it matter that it can’t really comprehend its own intelligence? John Searle, who first proposed the Chinese Room in 1980, called the philosophical positions on this ‘strong AI’ and ‘weak AI’. Strong AI basically suggests that functional intelligence is intelligence to all intents and purposes; weak AI argues that the lack of true understanding renders even the most advanced and realistic computer nothing more than a dumb machine.

However, Searle also proposed a very interesting idea that is prone to yet more philosophical debate- that our brains are mere machines in exactly the same way as computers are. On this view, the mechanics of the brain, deep in the unexplored depths of the fundamentals of neuroscience, are just machinery ticking over and performing tasks in the same way as AI does- and some completely different, non-computational mechanism gives rise to our mind and consciousness.

But what if there is no such mechanism? What if the rise of a consciousness is merely the result of all the computational processes going on in our brain- what if consciousness is nothing more than a computational process itself, one that gives our brains a way of joining the dots and processing more efficiently? This is a quite frightening thought- that, in theory, the only thing stopping us from giving a computer a consciousness could be that we haven’t written the proper code yet. This is one of the biggest unanswered questions of modern science- what exactly is our mind, and what causes it?

To fully expand upon this particular argument would take time and knowledge that I don’t have in equal measure, so instead I will just leave that last question for you to ponder- what is the difference between the box displaying these words for you right now, and the fleshy lump that’s telling you what they mean?