Excuse time…

If there is actually anyone who would be considered a regular to this blog (I can’t honestly tell), they may have noticed that I missed my normal post on Wednesday. Not an unusual occurrence in itself (except that I forgot to do my usual thing and make rather pathetic apologies for it in Saturday’s post), but it was symptomatic of something- namely the pretty intense workload I’ve got myself into at the moment. And that workload is only going to grow in size over the coming weeks.

It is due to shrink again, but for the moment three 1000-word essays a week is just too much for me to keep up with on a regular basis. I am therefore going to be taking a one-month break from this blog whilst I get on top of my oncoming workload.

I like blogging, despite the minimal traffic I get, and it’s a good outlet for me. Unfortunately, it is a big killer of time, and time is becoming an increasingly precious resource. Once the month is up, I shall try to resume normal progress. See you then.

The Encyclopaedia Webbanica

Once again, today’s post will begin with a story- this time, one about a place that was envisaged over a hundred years ago. It was called the Mundaneum.

The Mundaneum today is a tiny museum in the city of Mons, Belgium, which opened in its current form in 1998. It is a far cry from the original, first conceptualised by Nobel Peace Prize winner Henri La Fontaine and fellow lawyer and pioneer Paul Otlet in 1895. The two men, Otlet in particular, had a vision- to create a place where every single piece of knowledge in the world was housed. Absolutely all of it.

Even in the 19th century, when the breadth of scientific knowledge was a million times smaller than it is today (a 19th-century version of New Scientist would be publishable about once a year), this was a truly gigantic undertaking from a practical perspective. Not only did Otlet and La Fontaine attempt to collect a copy of just about every book ever written in search of information, but they went further than any conventional library of the time by also looking through pamphlets, photographs, magazines, and posters in search of data. The entire thing was stored on small 3×5 index cards and kept in a carefully organised and detailed system of files, and this paper database eventually grew to contain over 12 million entries. People would send letters or telegraphs to the government-funded Mundaneum (the name references the French monde, meaning world, rather than mundane as in boring), whose staff would search through the files in order to give a response to just about any question that could be asked.

However, the most interesting thing of all about Otlet’s operation, quite apart from the sheer conceptual genius of a man who was light-years ahead of his time, was his response to the problems posed when the enterprise got too big for its boots. After a while, the sheer volume of information and, more importantly, paper, meant that the filing system was getting too big to be practical for the real world. Otlet realised that this was not a problem that could ever be resolved by more space or manpower- the problem lay in the use of paper. And this was where Otlet pulled his masterstroke of foresight.

Otlet envisaged a version of the Mundaneum where the whole paper and telegraph business would be unnecessary- instead, he foresaw a “mechanical, collective brain”, through which the people of the world could access all the information stored within it via a system of “electric microscopes”. Not only that, but he envisaged the potential for these ‘microscopes’ to connect to one another, letting people “participate, applaud, give ovations, [or] sing in the chorus”. Basically, a pre-war Belgian lawyer predicted the internet (and, in the latter statement, social networking too).

Otlet has never been included in the pantheon of web pioneers- he died in 1944 after his beloved Mundaneum had been occupied and used to house a Nazi art collection, and his vision of the web as more of an information storage tool for nerdy types is hardly what we have today. But, to me, his vision of a web as a hub for sharing information and a man-made font of all knowledge is realised, at least in part, by one huge and desperately appealing corner of the web today: Wikipedia.

If you take a step back and look at Wikipedia as a whole, its enormous success and popularity can be quite hard to understand. Beginning from a practical perspective, it is a notoriously difficult site to work with- whilst accessing the information is very user-friendly, the editing process can be hideously confusing and difficult, especially for the not very computer-literate (seriously, try it). My own personal attempts at article-editing have almost always resulted in failure, bar some very small changes and additions to existing text (where I don’t have to deal with the formatting). This difficulty in formatting is a large contributor to another issue- Wikipedia articles are incredibly text-heavy, usually with only a few pictures and captions, which would be a major turn-off in a magazine or book. The very concept of an encyclopaedia edited and made by the masses, rather than a select team of experts, also (initially) seems incredibly foolhardy. Literally anyone can type in just about anything they want, leaving the site incredibly prone to either vandalism or accidental misdirection (see xkcd.com/978/ for Randall Munroe’s take on how it can get things wrong). The site has come under heavy criticism over the years for this fact, particularly on its pages about people (Dan Carter, the New Zealand fly-half, has apparently considered taking up stamp collecting after hundreds of fans sent him stamps on the strength of a Wikipedia entry stating that he was a philatelist), and letting normal people edit it also leaves it prone to creeping bias, despite the best efforts of Wikipedia’s team of writers and editors (personally, I think the site keeps its editing software deliberately difficult to use in order to minimise the number of people who can edit easily, and so to minimise this problem).

But, all that aside… Wikipedia is truly wonderful- it epitomises all that is good about the web. It is a free-to-use service, run by a not-for-profit organisation that is devoid of advertising and funded solely by the people of the web whom it serves. It is the font of all knowledge to an entire generation of students and schoolchildren, and is the number one place to go for anyone looking for an answer about anything- or who’s just interested in something and would like to learn more. It is built on the principles of everyone sharing and contributing- even flaws or areas lacking citation are flagged by casual users if they slip past the editors the first time around. Its success is built upon its size, both big and small- the sheer quantity of articles (there are now almost four million, most of which are a bit bigger than would have fitted on one of Otlet’s index cards) means that it can be relied upon for just about any query (and will be at the top of 80% of my Google searches), whilst its small server space and contributor base (fewer than 50,000 people, most of whom are volunteers- the Wikimedia Foundation itself employs fewer than 150 people) keep running costs low and allow it to keep on functioning despite its user-sourced funding model. Wikipedia is currently the 6th (ish) most visited website in the world, with 12 billion page views a month. And all this from an entirely not-for-profit organisation designed to let people know facts.

Nowadays, the Mundaneum is a small museum, a monument to a noble but ultimately flawed experiment. Its original offices in Brussels were left empty, gathering dust after the war until a graduate student discovered them and eventually provoked enough interest to move the old collection to Mons, where it currently resides as a shadow of its former glory. But its spirit lives on in the collective brain that its founder envisaged. God bless you, Wikipedia- long may you continue.

The story of Curveball

2012 has been the first year, for almost as long as public consciousness seems able to remember, that the world has not lived under the shadow of one of the most controversial and tumultuous events of the 21st century- the Iraq war. From 2003 to December 2011, the presence and deaths of western soldiers in Iraq were an ever-present and constantly touchy issue, and it will be many years before Iraq recovers from the war’s devastating effects.

Everybody knows the story of why the war was started in the first place- the US government convinced the rest of the world that Iraq’s notoriously brutal and tyrannical dictator Saddam Hussein (who had famously gassed vast swathes of Iraq’s Kurdish population prior to his invasion of Kuwait and the triggering of the First Gulf War) was in possession of weapons of mass destruction. The main reason for the US government’s fears was, according to the news of the time, the fact that Hussein had refused to allow UN weapons inspectors to enter and search the country. Lots of people know, or at least knew, this story. But far fewer know the other story- the story of how one man was able to, almost single-handedly, turn political posturing into a full-scale war.

This man’s name is Rafid Ahmed Alwan, but he was known to the world’s intelligence services simply as ‘Curveball’. Alwan is an Iraqi-born chemical engineer who fled to Germany in 1999, having embezzled government money. He then claimed that he had worked on an Iraqi project to design and produce mobile labs for making biological weapons. Between late 1999 and 2001, German intelligence services interrogated him, granted him political asylum, and listened to his descriptions of the process. They were even able to create 3-D models of the facilities being designed, to such a level of detail that CIA scientists were later able to identify major technical flaws in them. Despite the identification of such inconsistencies, when Curveball’s assertions that Iraq was indeed trying to produce biological WMDs got into the hands of US intelligence, they went straight to the top. US Secretary of State Colin Powell referred to Curveball’s evidence in a 2003 speech to the UN on the subject of Iraq’s weapons situation, and his evidence, despite its flaws, pretty much sealed the deal for the USA. And where the US goes, the rest of the world tends to follow.

Since then, Curveball has, naturally, come under a lot of criticism. Accused of being an alcoholic, a ‘congenital liar’ and a ‘con artist’, he is quite possibly the world record holder for the most damaging ‘rogue source’ in intelligence history. Since he first made his claims, the evidence showing how completely and utterly false they were has only stacked up- a facility he attested was a docking station was found to have an immovable brick wall in front of it, his designs were completely technically unsound, and his claims that he had finished top of his class at Baghdad University and had been drafted straight into the weapons program were contradicted by the fact that he had actually finished bottom of his class and had, as he admitted in 2011, made the whole story up.

But, of course, by far the biggest source of hatred towards Curveball has been what his lies snowballed into- the justification of one of the western world’s least proud and most controversial events- the Second Iraq War. The cost of the war has been estimated to be in the region of two trillion dollars, and partly as a result of disruption to Iraqi oil production the price of oil has nearly quadrupled since the war began. The US and its allies have come under a hail of criticism for their poor planning of the invasion, the number of troops required and the clean-up process, which was quite possibly entirely to blame for the subsequent 7 years of insurgent warfare after the actual invasion- quite apart from some rather large questions surrounding the invasion’s legality in the first place. America has also taken a battering to its already rather weathered global public image, losing support from some of its traditional allies, and the country of Iraq has, despite having had an undoubtedly oppressive dictatorship removed, become (rather like Afghanistan) a far more corrupt, poverty-stricken, damaged and dangerous society than it was even under Hussein- it will take many years for it to recover. Not only that, but there is also evidence to suggest that the anger caused by the Western invasion has been played for its PR value by al-Qaeda and other terrorist groups, actually increasing the terrorism threat. But worse than all of that has been the human cost- estimates of the death toll range from 87,000 to over a million, the majority of whom have been civilian casualties of bomb attacks (courtesy of both sides). All parties have also been accused of sanctioning torture and of various counts of murder of civilians.

But I am not here to point fingers or play the blame game- suffice it to say that the main loser in the war has been humanity. The point is that, whilst Curveball cannot be said to be the cause of the war, or even the main one, the paper trail can be traced right back to him as one of the primary trigger causes. Just one man, and just a few little lies.

Curveball has since said that he was (justifiably) shocked that his words were used as justification for the war, but, crucially, that he was proud that what he had said had toppled Hussein’s government. When asked in an interview about all the death and pain the war he had sparked had caused, he was unable to give an answer.

This, for me, poses both a shocking and a deeply interesting moral dilemma. Hussein was without a doubt a black mark on the face of humanity, and in the long run I doubt that Iraq will be worse off as a democracy than it was under his rule. But that will not be for many years, and right now Iraq is a shadow of a country.

Put yourself in Curveball’s position- somebody who thought his words could bring down a dictator, a hate figure, and who then could only watch as the world tore itself apart because of them. Could you live with that thought? Were your words worth their terrible price? Could your conscience ever sleep easy?


Kony 2012 in hindsight

Yesterday, April 20th, marked two at least reasonably significant events. The first of these was it being 4/20, which is to cannabis smokers what Easter is to Christians- the major festival of the year, where everyone gathers together to smoke, relax and make their collective will felt (this is, I feel I should point out, speaking only from what I can pick up online- I don’t actually smoke pot). This is an annual tradition, and has grown into something of a political event for pro-legalisation groups.

The other event is specific to this year (probably, anyway), and just about marks the conclusion of one of the 21st century’s most startling (and tumultuous) events- the Kony 2012 campaign’s ‘cover the night’ event.

Since going from an almost unknown organisation to the creators of the fastest-spreading viral video of all time, Kony 2012’s founders Invisible Children have found their organisation changed forever. The charity has existed for most of the last decade, but only now has it gone from being a medium-sized organisation relying on brute zealotry for support to an internationally known group. Similarly, the target of their campaign, warlord and wanted war criminal Joseph Kony, has gone from a man known only in the local area and by politicians nobody’s ever heard of to a worldwide hate figure inspiring discussion in the world’s governments (albeit one with more than his fair share of lighthearted memes- in fact he is increasingly reminding me of Osama Bin Laden in terms of status).

Invisible Children’s meteoric rise has not been without backlash- they have come under intense scrutiny for both their less-than-transparent finances, and the fact that only around a third of their turnover goes to supporting their African projects. Then there was the now-infamous ‘Bony 2012’ incident, where co-founder Jason Russell was found making a public nuisance of himself, and masturbating in public, after a week of constant stress and exhaustion, and rather too much to drink.

Not only that, but the campaign’s supporters have come under attack. This is partly because the internet always loves to have a go at committed Christians, as Russell and many of his followers are, but there are several recurring issues people appear to have with the campaign in general. One of the most common is the idea that ‘rich white kids’ sticking up posters and watching a video, and then claiming that they’ve helped change something, is both ridiculous and wrong. Another concerns the current situation in the Uganda/CAR/South Sudan/Congo area- this is one of hideously bloody political strife, and Joseph Kony is not the only one with a poor human rights record. Eastern Congo is still recovering from a major civil war that officially ended in 2003 but lives on in some local, and extremely bloody, conflicts; the Central African Republic is one of the poorest countries in the world, with a history of political strife; South Sudan has only just emerged as independent from a constant civil war and the bloody, oppressive dictatorship of Omar al-Bashir; and Uganda has an incredibly poor record for war and corruption, and has even been accused of using child soldiers in much the same way as Kony’s organisation, the Lord’s Resistance Army. Then there have been the accusations that Invisible Children have exaggerated and oversimplified the issue, misleading the general public, and the argument that, with the LRA numbering fewer than a thousand, Kony isn’t too much of an issue anyway- certainly not when compared to the thousands of children who die every day from malnourishment and disease in the area. Finally, some take issue with the aim of the Kony 2012 campaign- to get governments to listen and to step up their attempts to capture Kony- an aim disliked by those who feel that the USA doesn’t need any more encouragement to invade somewhere, and disliked even more by those who claim Kony died 5 years ago.

All of these are completely valid, true and important arguments to consider (well, apart from the one about him being dead, which is probably not true). And I have one answer to every single one of them:

IT. DOESN’T. MATTER.

Put it this way- what does the Kony 2012 video say its aim is? Answer- to make Kony famous, and in that regard Invisible Children have succeeded beyond their wildest dreams. Most of the world (well, most of it with an internet connection at least) now knows about one of the worst perpetrators of human rights violations in the world, and a major humanitarian issue is now being forced upon governments worldwide. It doesn’t matter that Invisible Children has some dodgy finances, it doesn’t matter that Kony is by no means the biggest problem in the area, and it certainly doesn’t matter that Jason Russell managed to give the world’s media a field day. All that matters is that people know about a serious issue, because if nobody knows about it nobody cares, and if nobody cares then nothing can be done about it.

There is, in fact, one criticism levelled at Invisible Children supporters that I take major issue with, and that is the idea that its efforts at spreading awareness do not matter. This could not be more untrue. There is only one force on this earth that will ever have the power to potentially find and bring to justice Joseph Kony, and that is the effort of the world’s governments- armies, advisors, police, whatever. But governments simply do not get involved in stuff if it doesn’t matter to them, and the only way to get something (that doesn’t concern oil, power or money) to matter to a government is to make sure people know and care about it. In modern politics, awareness is absolutely everything- without that, nothing matters.

Anyone can stand and level criticisms at the Kony campaign all day if they want to. I myself have not given Invisible Children any money, and don’t agree with a lot of the charity’s activities. But I am still able to admire what they have done, and to recognise what a great service they have done the world at large. In the grand scheme of things, their flaws don’t really matter one jot. Because everyone will agree that Kony is most definitely a bad guy and most definitely needs to be brought to justice- and until now, the chances of that happening were minimal. Until Kony 2012.


Also… WOO 50 POSTS!!!!!

Rule Britannia

As I have mentioned a few times over the course of this blog, I am British (I prefer not to say English unless I’m talking about sport. Not sure why, exactly). The British as a race have a long list of achievements, giant-scale cock-ups and things we like to brush under the carpet (see the Crimean War for all three of those things), and since we spent most of the 17th to 19th centuries either fighting over or controlling fairly massive swathes of the earth, the essence of Britishness has managed to make itself known in the psyche of just about every nation on Earth. Or, to put it another way, people have tons of stereotypes about the Brits, but not quite so many about, say, the Lithuanians (my apologies to any Lithuanians who end up reading this, but the British national psyche at least isn’t that good at distinguishing you from the rest of Eastern Europe).

British national stereotypes are a mixed bunch. We have the ‘ye olde’ stereotypical Brit- a top-hatted, tea-drinking cricketer for whom the word ‘quaint’ was invented and who would never speak out of turn to anybody. Then there is the colonial stereotype- the old-fashioned, borderline-racist yet inherently capable silver-moustached ‘old boy’ living in a big house somewhere in the tropics with a few servants. He puts a lot of cash into the local public school down the road, paying for the cricket facilities. Or something. And then we have the hideously polite Brit- just as obsessed with manners as his ‘ye olde’ cousin, but this time in a very subservient, almost Canadian, manner (I should clarify that I get this particular Canadian stereotype from the internet, since the only Canadians I know all seem… actually, I’ll get back to you on a generalised stereotype).

However, modern Britain is, of course, not really like this- we are a very modern, incredibly diverse culture (despite David Cameron’s insistence that “multiculturalism has failed”- not one of his better lines) with a surprising geographical diversity too, for such a small country. So, since I am not really in the mood for anything particularly heavy today, I thought that this would be a good time to introduce the internet to a few new British stereotypes, just to bring you up to date.

1) The Chav
Used to be that inner-city Londoners all got classed as Cockneys- nowadays we have chavs instead. The chav in his natural state is a pack animal, rarely seen without company, and vulnerable when alone. His is the main market for bad rap music, oversized baseball caps and hoodies two sizes too big for him. Chavs are notoriously hard to assimilate with, partly due to the natural verbal aggression of the pack, but also due to their strange tongue- officially known as London Street English (LSE), this bizarre dialect, drawing on influences ranging from Vietnamese to Arabic, has now spread across large tracts of southern England, where it is generally confined to council estates, and has more recently been simply dubbed ‘Chav’. Despite a reputation for drugs, violence and vandalism, they are not to be feared by the confident, especially if numbers lie in one’s favour.

2) The West Country
The farming stereotype- round-cheeked, stick-bearing and (to complete the look) with a length of straw poking out of the mouth. Their dialect (a rather bumptious, heavily accented tongue where many a syllable may be lost beneath an ‘Aarrr’) can be no less strange and confusing than LSE, and despite being typically associated with the area west of Bristol (excluding, of course, Wales), is also to be found in East Anglia. Since we have progressed from the days of needing an army of bored young men to till the fields, using combine harvesters and the like instead, this tends to be a reasonably well-off group- there are no longer starving farmhands, only really farm owners and their families. They tend to drive Land Rovers, and view science with roughly the same suspicion as an oncoming bush fire.

3) The Gap Yah…
The modern public schoolboy. Eton being a touch old-fashioned nowadays, the stereotype will now come from Harrow or Stowe (for whatever reason). Typically long of face, short of hair and severely lacking in both age and experience, these come in two subtly different classes. There is the overbearer- the one whose intense access to the very best that Daddy’s money can buy has left him better than everybody else at practically everything he cares to mention, and he will point this out to you at every opportunity. These may be recognised by the incessant, nagging desire they inspire to break their face. The second is the wannabe- the kid who got bullied at Whitgift, who isn’t actually that good at anything but is still richer than you and likes you to know it. They are characterised by always pretending to be of the overbearer class, and endeavouring to be as competent, but always cocking up. Interestingly, failure provides the main distinction between the two classes- whilst a wannabe will just act cool and pretend that you cheated them, an overbearer will simply cut out all the timewasting and begin the vitriolic hatred then and there. Both classes are likely to drink heavily (proper drinks of course- stuff like cider is for plebs and Muggles), travel widely, and hopefully meet their match one of these days soon.

That list was not what you’d call exhaustive, but it’s reasonably accurate from what I’ve experienced. Plus, it was quite nice and relaxing for me.

(If I have in any way offended you or the stereotype you represent over the course of this post, then please feel free to ignore it and laugh at the other ones instead)

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of artificial intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument that is being thrown at a steadily wider range of high-level science nowadays (cloning, gene analysis and editing, even synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to ‘play god’ and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has evolved over many millions of years, slowly sculpted and built upon by nature over that time to find the optimal solution for self-preservation and general well-being- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, to do the next big thing before everybody else, means that nobody is stopping to think of the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it’s probably not helped matters that it is typically put around by the Church- never noted as getting on particularly well with scientists (they just won’t let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any other of the myriad sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the biggest other dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? So far, only one person has ever been killed by a robot (during an industrial accident at a car manufacturing plant), but if such an event were ever to occur with a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being ‘switched off’? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument that has been made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind are ever able to equal or even better our mental faculties, then they could one day be able to overthrow us as a race. This is a very real issue when confronting the stereotypical image of a war robot- an invincible metal machine capable of wanton destruction on a par with a medium-sized tank, which is easily able to repair itself and make more of itself. It’s an idea that is reasonably unlikely to ever become real, but it actually raises another idea- one that is more likely to happen, more likely to build unnoticed, and is far, far more scary. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. When compared to a machine’s ability to churn out hundreds of copies of itself simply from a blueprint and a design, the human reproductive system suddenly looks very fragile and inefficient. When compared to tough, hard, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly, the organic world doesn’t seem quite so amazing, and it certainly seems more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.

Artificial… what, exactly?

OK, time for part 3 of what I’m pretty sure will finish off as 4 posts on the subject of artificial intelligence. This time, I’m going to branch off-topic very slightly- rather than just focusing on AI itself, I am going to look at a fundamental question that the hunt for it raises: the nature of intelligence itself.

We all know that we are intelligent beings, and thus the search for AI has always been focused on attempting to emulate (or possibly better) the human mind and our human understanding of intelligence. Indeed, when Alan Turing first proposed the Turing test (see Monday’s post for what this entails), he was specifically trying to emulate human conversational and interaction skills. However, as mentioned in my last post, the modern-day approach to creating intelligence is to try and let robots learn for themselves, in order to minimise the amount of programming we have to give them ourselves and thus come closer to artificial, rather than programmed, intelligence. However, this learning process has raised an intriguing question- if we let robots learn for themselves entirely from base principles, could they begin to create entirely new forms of intelligence?

It’s an interesting idea, and one that leads us to question what, on a base level, intelligence is. When we think about it, we begin to realise the vast scope of ideas that ‘intelligence’ covers- and this is speaking merely from the human perspective. From emotional intelligence to sporting intelligence, from creative genius to pure mathematical ability (where computers themselves excel far beyond the scope of any human), intelligence is an almost pointlessly broad term.

And then, of course, we can question exactly what we mean by a form of intelligence. Take bees, for example- on its own, a bee is a fairly useless creature that is most likely to just buzz around a little. Not only is it useless, but it is also very, very dumb. However, a hive, where bees are not individuals but a collective, is a very different matter- the coordinated movements of tens of thousands of bees can not only build huge nests and turn nectar into the liquid deliciousness that is honey, but can also defend the nest from attack, ensure the survival of the queen at all costs, and make sure that there is always someone to deal with the newborns despite the constant activity of the environment surrounding them. Many corporate or otherwise collective structures can claim to work similarly, but few are as efficient or versatile as a beehive- and more astonishingly, bees can exhibit an extraordinary range of intelligent behaviour as a collective beyond what an individual could even comprehend. Bees are the archetype of a collective, rather than individual, mind, and nobody is entirely sure how such a structure is able to function as it does.

Clearly, then, we cannot hope to pigeonhole or quantify intelligence as a single measurement- people may boast of their IQ scores, but a single number cannot hope to represent their intelligence across the full spectrum. Now, consider all these different aspects of intelligence, all the myriad ways that we can be intelligent (or not)- and ask yourself: have we covered all of them?

It’s another compelling idea- that there are some forms of intelligence out there that our human forms and brains simply can’t envisage, let alone experience. What these may be like… well, how the hell should I know? I just said we can’t envisage them. This idea that we simply won’t be able to understand what they could be like, even if we ever encounter them, can be a tricky one to get past (a similar problem is found in quantum physics, whose violation of common logic takes some getting used to), and it is a real issue that if we do ever come across these ‘alien’ forms of intelligence, we may fail to recognise them for this very reason. However, if we are able to do so, it could fundamentally change our understanding of the world around us.

And, to drag this post kicking and screaming back on topic, our current development of AI could be a mine of potential for doing exactly that (albeit a mine in which we don’t know what we’re going to find, or if there is anything to find at all). We all know that computers are fundamentally different from us in a lot of ways, and in fact it is very easy to argue that trying to force a computer to be intelligent beyond its typical, logical parameters is rather a stupid task, akin to trying to use a hatchback to tow a lorry. In fact, quite a good way to think of computers or robots is like animals, only adapted to a different environment from us- one in which their food comes via a plug and information comes to them as raw data and numbers… but I am wandering off-topic once again. The point is that computers have, for as long as the hunt for AI has gone on, been our vehicle for attempting to reach it- and only now are we beginning to fully understand that they have the potential to do so much more than just copy our minds. By pushing them onward and onward to the point they have currently reached, we are starting to turn them not into an artificial version of ourselves, but into an entirely new concept, an entirely new, man-made being.

To me, this is an example of true ingenuity and skill on behalf of the human race. Copying ourselves is no more inventive, on a base level, than making iPod clones or the like. Inventing a new, artificial species… like it or loathe it, that’s amazing.

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take on a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be fed into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into a computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (gunnery tables were among the first jobs the US gave to electronic computers) and logical games like chess.
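As a toy illustration of the sort of problem that reduces entirely to rules and numbers (and so plays exactly to a computer’s strengths), here is a sketch in Python of a crude gunnery table- the physics is the idealised no-air-resistance formula, and the muzzle velocity is an invented figure, not any real shell’s data:

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Range (metres) of an ideal projectile on flat ground, ignoring air resistance."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A crude 'gunnery table': range for a 300 m/s shell at elevations 15-75 degrees
table = {angle: round(projectile_range(300, angle), 1)
         for angle in range(15, 90, 15)}
```

Once the problem is phrased like this- a formula plus a list of inputs- the machine can churn out a whole table faster than a human could compute a single entry, which is precisely why this was one of the earliest jobs handed to computers.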

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of objects. Because the pixels are in a different order, and as such the data’s different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just from different angles.
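The point can be made concrete with a deliberately tiny sketch- treat each ‘picture’ as a single row of pixel values (the numbers here are invented purely for illustration), and watch a bit-by-bit comparison fail the moment the pattern shifts by one pixel:

```python
# Two 'images' of the same bright shape, the second shifted right by one pixel.
row_a = [0, 0, 1, 1, 0, 0]
row_b = [0, 0, 0, 1, 1, 0]

# Bit by bit, the computer sees two entirely different patterns...
naive_match = row_a == row_b  # False

# ...even though sliding one row along by a single pixel lines them up exactly.
shifted_match = row_a[:-1] == row_b[1:]  # True
```

To us the two rows are obviously ‘the same shape’; to the naive data-for-data comparison they have nothing in common, which is the monitor-at-an-angle problem in miniature.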

There are two potential solutions to this problem. Firstly, the computer can look at the monitor and store an image of it from every conceivable angle with every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would, however, take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming- by training the computer to spot patterns of pixels that look roughly similar (either shifted along by a few pixels, or missing a few here and there), it can be ‘trained’ to pick out basic shapes, and by using an algorithm to pick out changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and separate objects themselves picked out. I am not by any stretch of the imagination an expert in this field, so won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
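To give a rough flavour of the ‘changes in colour’ trick- and this is a bare-bones sketch of the general idea, not any particular real algorithm- here is a one-dimensional version: scan along a row of greyscale values and flag an edge wherever neighbouring pixels differ sharply:

```python
# A single row of greyscale pixel values: dark background, bright object, dark again.
row = [10, 12, 11, 200, 205, 198, 9, 11]

def find_edges(pixels, threshold=50):
    """Flag an edge wherever two adjacent pixels differ by more than the threshold."""
    return [i for i in range(1, len(pixels))
            if abs(pixels[i] - pixels[i - 1]) > threshold]

edges = find_edges(row)  # the two boundaries of the bright object
```

Real edge detectors work in two dimensions and are far more sophisticated, but the principle- big jumps in value mark the outline of an object- is the same.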

But all that information inputting, all that work… so your computer can identify just a monitor? What about the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This idea presented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers were presented with a monumental challenge when faced with the prospect of trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think of exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into the robot’s design a capacity for learning. The latest generation of the Japanese ‘Asimo’ robots can learn what various objects presented to it are, and is then able to recognise them when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way as to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently this covers just actions, but who knows what lies around the corner…
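Purely as an illustrative sketch- this is emphatically not how Asimo actually works, and the labels and numbers are invented- the ‘learn an object, recognise it later’ idea can be boiled down to storing labelled examples and matching each new observation to its nearest stored one:

```python
# Toy 'learn then recognise': remember labelled feature vectors,
# then label a new observation by its closest stored example.
def learn(memory, label, features):
    memory.append((label, features))

def recognise(memory, features):
    def distance(stored):
        # Squared Euclidean distance between the stored example and the new one
        return sum((a - b) ** 2 for a, b in zip(stored[1], features))
    return min(memory, key=distance)[0]

memory = []
learn(memory, "ball", [1.0, 0.2])   # shown a ball, told its name
learn(memory, "cup", [0.1, 0.9])    # shown a cup, told its name
guess = recognise(memory, [0.9, 0.3])  # a new view, closest to the stored 'ball'
```

The crucial point is that nothing about balls or cups is programmed in beforehand- everything the system ‘knows’ arrives through what it is shown, which is the learning idea in its simplest possible form.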

The Chinese Room

Today marks the start of another attempt at a multi-part set of posts- the last lot were about economics (a subject I know nothing about), and this one will be about computers (a subject I know none of the details about). Specifically, over the next… however long it takes, I will be taking a look at the subject of artificial intelligence- AI.

There has been a long series of documentaries on the subject of robots, supercomputers and artificial intelligence in recent years, because it is a subject which seems to be in a paradoxical state: continually advancing at a frenetic rate, yet simultaneously getting further and further away from the dream of ‘true’ artificial intelligence- which, as we understand more and more about psychology, neuroscience and robotics, becomes steadily more complicated and difficult to attain. I could spend a thousand posts on the details if I so wished, because it is also one of the fastest-developing fields of engineering on the planet, but that would just bore me and be increasingly repetitive for anyone who ends up reading this blog.

I want to begin, therefore, by asking a few questions about the very nature of artificial intelligence, and indeed the subject of intelligence itself, beginning with a philosophical problem that, when I heard about it on TV a few nights ago, was very intriguing to me- the Chinese Room.

Imagine a room containing only a table, a chair, a pen, a heap of paper slips, and a large book. The door to the room has a small opening in it, rather like a letterbox, allowing messages to be passed in or out. The book contains a long list of phrases written in Chinese, and (below them) the appropriate responses (also in Chinese characters). Imagine we take a non-Chinese speaker and place them inside the room, and then station a fluent Chinese speaker outside. The speaker writes a phrase or question (in Chinese) on a slip of paper and passes it through the letterbox to the person inside the room. The occupant has no idea what this message means, but by using the book they can identify the phrase, write out the appropriate response, and pass it back through the letterbox. This process can be repeated until a conversation begins to flow- the difference being that only one of the participants in the conversation actually knows what it’s about.
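The room’s book is, in computing terms, nothing more than a lookup table. A minimal sketch makes the point- the phrases here are romanised placeholders of my own invention, standing in for the Chinese characters:

```python
# The 'book': each incoming phrase is mapped to a canned reply.
# (Romanised placeholder phrases stand in for the Chinese characters.)
rule_book = {
    "ni hao": "ni hao ma?",
    "jintian tianqi hen hao": "shi de, hen hao",
}

def room_occupant(message):
    """Match the slip against the book and copy out the reply-
    no understanding of either phrase is involved at any point."""
    return rule_book.get(message, "qing zai shuo yibian")  # 'please say that again'

reply = room_occupant("ni hao")
```

Every reply is produced by pure symbol-matching; the program, like the room’s occupant, ‘converses’ without the faintest idea what any of it means.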

This experiment is a direct challenge to the somewhat crude test proposed by mathematical genius and codebreaker Alan Turing in 1950, to test whether a computer could be considered a truly intelligent being. The Turing test postulates that if a computer were ever able to conduct a conversation with a human so well that the human in question would have no idea that they were not talking to another human, but rather to a machine, then it could be considered to be intelligent.

The Chinese Room problem questions this idea, and as it does so, raises a fundamental question about whether a machine such as a computer can ever truly be called intelligent, or said to possess intelligence. The point of the idea is to demonstrate that it is perfectly possible to appear to be intelligent, by conducting a normal conversation with someone, whilst simultaneously having no understanding whatsoever of the situation at hand. Thus, while a machine programmed with the correct response to any eventuality could converse completely naturally, and appear perfectly human, it would have no real consciousness. It would not be truly intelligent; it would merely be running an algorithm, obeying the instructions in its electronic brain, working simply from the intelligence of the person who programmed in its orders. So, does this constitute intelligence, or is a consciousness necessary for something to be deemed intelligent?

This really boils down to a question of opinion- if something acts like it’s intelligent and is intelligent for all functional purposes, does that make it intelligent? Does it matter that it can’t really comprehend its own intelligence? John Searle, who first thought of the Chinese Room in 1980, called the philosophical positions on this ‘strong AI’ and ‘weak AI’. Strong AI basically suggests that functional intelligence is intelligence to all intents and purposes- weak AI argues that the lack of true intelligence renders even the most advanced and realistic computer nothing more than a dumb machine.

However, Searle also proposes a very interesting idea that is prone to yet more philosophical debate- that our brains are mere machines in exactly the same way as computers are. The mechanics of the brain, deep in the unexplored depths of the fundamentals of neuroscience, just tick over and perform tasks in the same way as AI does- and yet there is some completely different, non-computational mechanism that gives rise to our mind and consciousness.

But what if there is no such mechanism? What if the rise of a consciousness is merely the result of all the computational processes going on in our brain- what if consciousness is nothing more than a computational process itself, designed to give our brains a way of joining the dots and processing more efficiently? This is a quite frightening thought- that, in theory, the only thing stopping us from giving a computer a consciousness is that we haven’t written the proper code yet. This is one of the biggest unanswered questions of modern science- what exactly is our mind, and what causes it?

To fully expand upon this particular argument would take time and knowledge that I don’t have in equal measure, so instead I will just leave that last question for you to ponder- what is the difference between the box displaying these words for you right now, and the fleshy lump that’s telling you what they mean?

Web vs. Money

Twice now, this blog has strayed onto the subject of legal bills attempting to in some way regulate the internet, based on the idea that it violates certain copyright restrictions, and everything suggests that SOPA, PIPA and ACTA will not be the last of such attempts (unless ACTA is so successful that it not only gets ratified, but also renders the internet functionally brain-dead). However, a while ago I caught myself wondering exactly why the internet gets targeted with these bills at all. There are two angles to take with regards to this problem; why there is any cause for the internet to be targeted with these bills, and why this particular problem has bills dedicated to it, rather than simply being left alone.

To begin with the second of these- why the web? Copyright violation most definitely existed before the internet’s invention, and many a pirate business even nowadays may be run without ever venturing online. All that’s required is a copy of whatever you’re pirating, some cheap software, and a lot of blank discs (or USB sticks or hard drives or whatever). However, such operations tended to be necessarily small-scale in order to avoid detection, and because the market really isn’t large enough to sustain a larger-scale operation. It’s rather off-putting actually acquiring pirated stuff in real life, as it feels slightly wrong- on the web, however, it’s far easier and more relaxed. Thus, rather than a small, fairly meaningless operation, on the internet (which is, remember, a throbbing network with literally billions of users) piracy is huge- exactly how big is hard to tell, but it’s a fairly safe bet that it’s bigger than a few blokes flogging ripped-off DVDs out of the boot of a car. This therefore represents a far more significant loss of potential earnings than the more traditional market, and is subsequently a bigger issue.

However, perhaps more important than the scale of the operation is that it’s actually a fairly easy one to target. Modern police will struggle to catch massive-scale drugs lords or crime barons, because the real world is one in which it’s very easy to hide, sneak, bury information and bribe. It can be impossible to find the spider at the centre of the web, and even if he can be found, harder still to pin anything on him. Online, however, is a different story- sites violating the law are easy to find for anyone with a web connection, and their IP address is basically put on display like a massive ‘LOOK HERE’ notice, making potential criminals easy to locate. The web is a collective entity, the virtual equivalent of a large and fairly open ghetto- it’s very easy to collectively target and wrap up the whole shebang. Put simply, dealing with the internet, if a bill were to get through, would be very, very easy.

But… why the cause for dispute in the first place? It’s an interesting quandary, because the web doesn’t consider what it’s doing to be wrong anyway. This is partly because much of what a corporation might consider piracy online isn’t technically illegal- as long as nothing gets downloaded or made into a hard copy, streaming a video isn’t against the law. It’s the virtual equivalent of inviting your mates round to watch a film (although technically, since a lot of commercial DVDs are ‘NOT FOR PUBLIC PERFORMANCE’, this is strictly speaking illegal too- not so online, as there is no way to prove it’s not from a public performance copy). Downloading copyrighted content is illegal and is punishable by existing law, but this currently often goes unregulated because the problem is so widespread and the punishment for the crimes so small that it is simply too much bother for effective regulation. The only reason Napster got hit so hard when it was offering free downloads is because it was shifting stuff by the millions, and because it was the only one out there. One of the great benefits that bills like SOPA offered to big corporations was a quick, easy way to crack down on copyright violators, one which didn’t entail lengthy, costly and inconvenient court proceedings.

However, downloading is a far smaller ‘problem’ than people streaming stuff from Megavideo and YouTube, which happens on a gigantic scale- think how many views the last music video you saw on YouTube had. This is what corporations are attempting to stop- the mass distribution of their content via free sharing of it online, which to them represents a potentially huge loss in income. To what extent it does cost them money, and to what extent it actually gets them more publicity, is somewhat up for debate, but in the minds of corporations it’s enough of a problem to try and force through SOPA and PIPA.

This, really, is the nub of the matter- the web and the world of business have different definitions of what constitutes violation of copyrighted content. To the internet, all the streaming and similar is simply sharing, and this is a reflection of the internet’s overarching philosophy- that everything should be free and open to everyone, without corporate influence (a principle which is astoundingly not adhered to when one thinks of the level of control exerted by Facebook and Google, but that’s another story in itself). To a corporation, however, streaming on the huge scale of the web is stealing- simple as that. And it is this difference of opinion that has led to such controversy surrounding web-controlling bills.

If the next bill proposed to combat online piracy were simply one that increased the powers corporations could use to prevent illegal downloading of copyrighted content, I don’t think anyone could really complain- it’s already definitely illegal, those doing it know that they really shouldn’t, and if anyone wants to grumble then they can probably stream it anyway. The contentious parts of all the bills thus far have been those which attempt to restrict the streaming and sharing of such content online- and this is one battle that is not going to go away. At the moment, the law is on the side of the web. Whether that will stay the case remains to be seen…