Excuse time…

If there is actually anyone who would be considered a regular to this blog (I can’t honestly tell), they may have noticed that I missed my normal post on Wednesday. Not an unusual occurrence in itself (except that I forgot to do my usual thing and make rather pathetic apologies for it in Saturday’s post), but it was symptomatic of something- namely the pretty intense workload I’ve got myself into at the moment. And that workload is only going to grow in size over the coming weeks.

It is due to shrink again, but for the moment three 1000-word essays a week is just too much for me to keep up with on a regular basis. I am therefore going to be taking a one-month break from this blog whilst I get on top of my oncoming workload.

I like blogging, despite the minimal traffic I get, and it’s a good outlet for me. Unfortunately, it is a big killer of time, and time is becoming an increasingly precious resource. Once the month is up, I shall try to resume normal progress. See you then.

The Encyclopaedia Webbanica

Once again, today’s post will begin with a story- this time, one about a place that was envisaged over a hundred years ago. It was called the Mundaneum.

The Mundaneum today is a tiny museum in the city of Mons, Belgium, which opened in its current form in 1998. It is a far cry from the original, first conceptualised by Nobel Peace Prize winner Henri la Fontaine and fellow lawyer and pioneer Paul Otlet in 1895. The two men, Otlet in particular, had a vision- to create a place where every single piece of knowledge in the world was housed. Absolutely all of it.

Even in the 19th century, when the breadth of scientific knowledge was a million times smaller than it is today (a 19th-century version of New Scientist would have been publishable about once a year), this was a truly gigantic undertaking from a practical perspective. Not only did Otlet and la Fontaine attempt to collect a copy of just about every book ever written, but they went further than any conventional library of the time by also combing through pamphlets, photographs, magazines and posters in search of data. The entire thing was stored on small 3×5 index cards kept in a carefully organised and detailed system of files, and this paper database eventually grew to contain over 12 million entries. People would send letters or telegraphs to the government-funded Mundaneum (the name referring to the French monde, meaning world, rather than mundane as in boring), whose staff would then search through the files to give a response to just about any question that could be asked.

However, the most interesting thing of all about Otlet’s operation, quite apart from the sheer conceptual genius of a man who was light-years ahead of his time, was his response to the problems posed when the enterprise got too big for its boots. After a while, the sheer volume of information and, more importantly, paper, meant that the filing system was getting too big to be practical for the real world. Otlet realised that this was not a problem that could ever be resolved by more space or manpower- the problem lay in the use of paper. And this was where Otlet pulled his masterstroke of foresight.

Otlet envisaged a version of the Mundaneum where the whole paper and telegraph business would be unnecessary- instead, he foresaw a “mechanical, collective brain”, through which the people of the world could access all the information stored within it via a system of “electric microscopes”. Not only that, but he envisaged the potential for these ‘microscopes’ to connect to one another, letting people “participate, applaud, give ovations, [or] sing in the chorus”. Basically, a pre-war Belgian lawyer predicted the internet (and, in the latter statement, social networking too).

Otlet has never been included in the pantheon of web pioneers- he died in 1944, after his beloved Mundaneum had been occupied and used to house a Nazi art collection, and his vision of the web as more of an information storage tool for nerdy types is hardly what we have today. But, to me, his vision of a web as a hub for sharing information and a man-made font of all knowledge is embodied, at least in part, by one huge and desperately appealing corner of the web today: Wikipedia.

If you take a step back and look at Wikipedia as a whole, its enormous success and popularity can be quite hard to understand. From a practical perspective, it is a notoriously difficult site to work with- whilst accessing the information is very user-friendly, the editing process can be hideously confusing and difficult, especially for the not very computer-literate (seriously, try it). My own personal attempts at article-editing have almost always resulted in failure, bar some very small changes and additions to existing text (where I don’t have to deal with the formatting). This difficulty in formatting is a large contributor to another issue- Wikipedia articles are incredibly text-heavy, usually with only a few pictures and captions, which would be a major turn-off in a magazine or book. The very concept of an encyclopaedia edited and made by the masses, rather than by a select team of experts, also (initially) seems incredibly foolhardy. Literally anyone can type in just about anything they want, leaving the site incredibly prone to either vandalism or accidental misdirection (see xkcd.com/978/ for Randall Munroe’s take on how it can get things wrong). The site has come under heavy criticism over the years for this fact, particularly on its pages about people (Dan Carter, the New Zealand fly-half, has apparently considered taking up stamp collecting, after hundreds of fans sent him stamps on the strength of a Wikipedia entry stating that he was a philatelist), and letting anyone edit it also leaves the site prone to bias creeping in, despite the best efforts of Wikipedia’s team of writers and editors (personally, I think that the site keeps its editing software deliberately difficult to use in order to minimise the number of people who can use it easily, and so to minimise this problem).

But, all that aside… Wikipedia is truly wonderful- it epitomises all that is good about the web. It is a free-to-use service, run by a not-for-profit organisation that is devoid of advertising and is funded solely by the people of the web whom it serves. It is the font of all knowledge to an entire generation of students and schoolchildren, and is the number one place to go for anyone looking for an answer about anything- or who’s just interested in something and would like to learn more. It is built on the principle of everyone sharing and contributing- even flaws or areas lacking citation are flagged by casual users if they slip past the editors the first time around. Its success is built upon its size, both big and small- the sheer quantity of articles (there are now almost four million, most of which are a bit bigger than would have fitted on one of Otlet’s index cards) means that it can be relied upon for just about any query (and will be at the top of 80% of my Google searches), whilst its small server footprint and staff (fewer than 50,000 people, most of whom are volunteers- the Wikimedia Foundation employs fewer than 150) keep running costs low and allow it to keep on functioning despite its user-sourced funding model. Wikipedia is currently the 6th (ish) most visited website in the world, with 12 billion page views a month. And all this from an entirely not-for-profit organisation designed to let people know facts.

Nowadays, the Mundaneum is a small museum, a monument to a noble but ultimately flawed experiment. Its original offices in Brussels were left empty, gathering dust after the war, until a graduate student discovered them and eventually provoked enough interest to move the old collection to Mons, where it currently resides as a shadow of its former glory. But its spirit lives on in the collective brain that its founder envisaged. God bless you, Wikipedia- long may you continue.

The story of Curveball

2012 has been the first year, for almost as long as public consciousness seems able to remember, in which the world has not lived under the shadow of one of the most controversial and tumultuous events of the 21st century- the Iraq war. From 2003 to December 2011, the presence and deaths of western soldiers in Iraq were an ever-present and constantly touchy issue, and it will be many years before Iraq recovers from the war’s devastating effects.

Everybody knows the story of why the war was started in the first place- the US government convinced the rest of the world that Iraq’s notoriously brutal and tyrannical dictator Saddam Hussein (who had famously gassed vast swathes of Iraq’s Kurdish population prior to his invasion of Kuwait and the triggering of the First Gulf War) was in possession of weapons of mass destruction. The main reason for the US government’s fears was, according to the news of the time, the fact that Hussein had refused to allow UN weapons inspectors to enter and search the country. Lots of people know, or at least knew, this story. But far fewer know the other story- the story of how one man was able to, almost single-handedly, turn political posturing into a full-scale war.

This man’s name is Rafid Ahmed Alwan, but he was known to the world’s intelligence services simply as ‘Curveball’. Alwan is an Iraqi-born chemical engineer who fled to Germany in 1999, having embezzled government money. He then claimed that he had worked on an Iraqi project to design and produce mobile labs for manufacturing biological weapons. Between late 1999 and 2001, German intelligence services interrogated him, granted him political asylum, and listened to his descriptions of the process. They were even able to create 3-D models of the facilities being designed, in such detail that CIA scientists were later able to identify major technical flaws in them. Despite the identification of such inconsistencies, when Curveball’s assertions that Iraq was indeed trying to produce biological WMDs got into the hands of US intelligence, they went straight to the top. US Secretary of State Colin Powell referred to Curveball’s evidence in a 2003 speech to the UN on the subject of Iraq’s weapons situation, and his evidence, despite its flaws, pretty much sealed the deal for the USA. And where the US goes, the rest of the world tends to follow.

Since then, Curveball has, naturally, come under a lot of criticism. Accused of being an alcoholic, a ‘congenital liar’ and a ‘con artist’, he is quite possibly the world record holder for the most damaging ‘rogue source’ in intelligence history. Since he first made his claims, the evidence showing how completely and utterly false they were has only stacked up- a facility he attested was a docking station was found to have an immovable brick wall in front of it, his designs were completely technically unsound, and his claims that he had finished top of his class at Baghdad University and been drafted straight into the weapons programme were contradicted by the fact that he had finished bottom of his class and had, as he admitted in 2011, made the whole story up.

But, of course, by far the biggest source of hatred towards Curveball has been what his lies snowballed into- the justification of one of the western world’s least proud and most controversial events- the Second Iraq War. The cost of the war has been estimated to be in the region of two trillion dollars, and partly as a result of disruption to Iraqi oil production the price of oil has nearly quadrupled since the war began. The US and its allies have come under a hail of criticism for their poor planning of the invasion, the number of troops required and the clean-up process, which was quite possibly entirely to blame for the subsequent seven years of insurgent warfare after the actual invasion- quite apart from some rather large questions surrounding the invasion’s legality in the first place. America has also taken a battering to its already rather weathered global public image, losing support from some of its traditional allies, and the country of Iraq has, despite having had an undoubtedly oppressive dictatorship removed, become (rather like Afghanistan) a far more corrupt, poverty-stricken, damaged and dangerous society than it was even under Hussein- it will take many years for it to recover. Not only that, but there is also evidence to suggest that the anger caused by the Western invasion has been played for its PR value by al-Qaeda and other terrorist groups, actually increasing the terrorism threat. But worse than all of that has been the human cost- estimates of the death toll range from 87,000 to over a million, the majority of them civilian casualties of bomb attacks (courtesy of both sides). All parties have also been accused of sanctioning torture and of various counts of murder of civilians.

But I am not here to point fingers or play the blame game- suffice it to say that the main loser in the war has been humanity. The point is that, whilst Curveball cannot be said to be the cause of the war, or even the main one, the paper trail can be traced right back to him as one of the primary trigger causes. Just one man, and just a few little lies.

Curveball has since said that he was (justifiably) shocked that his words were used as justification for the war, but, crucially, that he was proud that what he had said had toppled Hussein’s government. When asked in an interview about all the death and pain the war he had sparked had caused, he was unable to give an answer.

This, for me, was both a shocking and a deeply interesting moral dilemma. Hussein was without a doubt a black mark on the face of humanity, and in the long run I doubt that Iraq will be worse off as a democracy than it was under his rule. But that will not be for many years, and right now Iraq is a shadow of a country.

Put yourself in Curveball’s position- somebody who thought his words could bring down a dictator, a hate figure, and who then could only watch as the world tore itself apart because of them. Could you live with that thought? Were your words worth their terrible price? Could your conscience ever sleep easy?

Kony 2012 in hindsight

Yesterday, April 20th, marked two at least reasonably significant events. The first was 4/20 itself, which is to cannabis smokers what Easter is to Christians- the major festival of the year, when everyone gathers together to smoke, relax and make their collective will felt (this is, I feel I should point out, speaking only from what I can pick up online- I don’t actually smoke pot). It is an annual tradition, and has grown into something of a political event for pro-legalisation groups.

The other event is specific to this year (probably, anyway), and just about marks the conclusion of one of the 21st century’s most startling (and tumultuous) events- the Kony 2012 campaign’s ‘cover the night’ event.

Since going from an almost unknown organisation to the creators of the fastest-spreading viral video of all time, Kony 2012’s founders Invisible Children have found their organisation changed forever. The charity has existed for most of the last decade, but only now has it gone from a medium-sized organisation relying on brute zealotry for support to an internationally known group. Similarly, the target of their campaign, warlord and wanted human rights criminal Joseph Kony, has gone from a man known only in the local area and by politicians nobody’s ever heard of to a worldwide hate figure inspiring discussion in the world’s governments (albeit one with more than his fair share of lighthearted memes- in fact he is increasingly reminding me of Osama Bin Laden in terms of status).

Invisible Children’s meteoric rise has not been without backlash- they have come under intense scrutiny for both their less-than-transparent finances, and the fact that only around a third of their turnover goes to supporting their African projects. Then there was the now-infamous ‘Bony 2012’ incident, where co-founder Jason Russell was found making a public nuisance of himself, and masturbating in public, after a week of constant stress and exhaustion, and rather too much to drink.

Not only that, but the campaign’s supporters have come under attack. This is partly because the internet always loves to have a go at committed Christians, as Russell and many of his followers are, but there are several recurring issues people appear to have with the campaign in general. One of the most common is the idea that ‘rich white kids’ sticking up posters and watching a video, and then claiming that they’ve helped change something, is both ridiculous and wrong. Another concerns the current situation in the Uganda/CAR/South Sudan/Congo area- it is one of hideously bloody political strife, and Joseph Kony is not the only one with a poor human rights record. Eastern Congo is still recovering from a major civil war that officially ended in 2003 but lives on in some local, and extremely bloody, conflicts; the Central African Republic is one of the poorest countries in the world with a history of political strife; South Sudan has only just emerged as independent from a constant civil war and the bloody, oppressive dictatorship of Omar al-Bashir; and Uganda has an incredibly poor record for war and corruption, and has even been accused of using child soldiers in much the same way as Kony’s organisation, the Lord’s Resistance Army. Then there have been the accusations that Invisible Children have exaggerated and oversimplified the issue, misleading the general public, and the argument that, with the LRA numbering fewer than a thousand, Kony isn’t too much of an issue anyway- certainly not when compared to the thousands of children who die every day from malnourishment and disease in the area. Finally, some take issue with the aim of the Kony 2012 campaign- to get governments to listen and to step up their level of involvement in the attempts to capture Kony- an aim disliked by those who feel that the USA doesn’t need any more encouragement to invade somewhere, and disliked even more by those who claim Kony died 5 years ago.

All of these are completely valid, true and important arguments to consider (well, apart from the one about him being dead, which is probably not true). And I have one answer to every single one of them:

IT. DOESN’T. MATTER.

Put it this way- what slogan does the Kony 2012 video give as its aim? Answer- to make Kony famous, and in that regard Invisible Children have succeeded beyond their wildest dreams. Most of the world (well, most of it with an internet connection at least) now knows about one of the worst perpetrators of human rights violations in the world, and a major humanitarian issue is now being forced upon governments worldwide. It doesn’t matter that Invisible Children has some dodgy finances, it doesn’t matter that Kony is by no means the biggest problem in the area, and it certainly doesn’t matter that Jason Russell managed to give the world’s media a field day. All that matters is that people know about a serious issue, because if nobody knows about it nobody cares, and if nobody cares then nothing can be done about it.

There is, in fact, one criticism levelled at Invisible Children supporters that I take major issue with, and that is the idea that its efforts at spreading awareness do not matter. This could not be more untrue. There is only one force on this earth that will ever have the power to potentially find and bring to justice Joseph Kony, and that is the effort of the world’s governments- armies, advisors, police, whatever. But governments simply do not get involved in stuff if it doesn’t matter to them, and the only way to get something (that doesn’t concern oil, power or money) to matter to a government is to make sure people know and care about it. In modern politics, awareness is absolutely everything- without that, nothing matters.

Anyone could stand and level criticisms at the Kony campaign all day if they wanted to. I myself have not given Invisible Children any money, and don’t agree with a lot of the charity’s activities. But I am still able to admire what they have done, and to realise what a great service they have done to the world at large. In the grand scheme of things, their flaws don’t really matter one jot. Because everyone will agree that Kony is most definitely a bad guy, and most definitely needs to be brought to justice- until now, the chances of that happening were minimal. Until Kony 2012.


Also… WOO 50 POSTS!!!!!

Rule Brittania

As I have mentioned a few times over the course of this blog, I am British (I prefer not to say English unless I’m talking about sport. Not sure why, exactly). The British as a race have a long list of achievements, giant-scale cock-ups and things we like to brush under the carpet (see the Crimean War for all three of those things), and since we spent most of the 17th-19th centuries either fighting over or controlling fairly massive swathes of the earth, the essence of Britishness has managed to make itself known in the psyche of just about every nation on Earth. Or, to put it another way, people have tons of stereotypes about the Brits, but not quite so many about, say, the Lithuanians (my apologies to any Lithuanians who end up reading this, but the British national psyche at least isn’t that good at distinguishing you from the rest of Eastern Europe).

British national stereotypes are a mixed bunch. We have the ‘ye olde’ stereotypical Brit- a top-hatted, tea-drinking cricketer for whom the word ‘quaint’ was invented and who would never speak out of turn to anybody. Then there is the colonial stereotype- the old-fashioned, borderline-racist yet inherently capable silver-moustached ‘old boy’ living in a big house somewhere in the tropics with a few servants. He puts a lot of cash into the local public school down the road, paying for the cricket facilities. Or something. And then we have the hideously polite- just as obsessed with manners as his ‘ye olde’ cousin, but this time in a very subservient, almost Canadian, manner (I should clarify that I get this particular Canadian stereotype from the internet, since the only Canadians I know all seem… actually, I’ll get back to you on a generalised stereotype).

However, modern Britain is, of course, not really like this- we are a very modern, incredibly diverse culture (despite David Cameron’s insistence that “multiculturalism has failed”- not one of his better lines) with a surprising geographical diversity too, for such a small country. So, since I am not really in the mood for anything particularly heavy today, I thought that this would be a good time to inform the internet as to a few new British stereotypes for you, just to bring you up to date.

1) The Chav
Used to be that inner-city Londoners all got classed as Cockneys- nowadays we have chavs instead. The chav in his natural state is a pack animal, rarely seen without company, and vulnerable when alone. His is the main market for bad rap music, oversized baseball caps and hoodies two sizes too big for him. Chavs are notoriously hard to assimilate with, partly due to the natural verbal aggression of the pack, but also due to their strange tongue- officially known as London Street English (LSE), this bizarre dialect, drawing on influences ranging from Vietnamese to Arabic, has now spread across large tracts of southern England, where it is generally confined to council estates, and has more recently been dubbed simply ‘Chav’. Despite a reputation for drugs, violence and vandalism, they are not to be feared by the confident, especially if numbers lie in one’s favour.

2) The West Country
The farming stereotype- round-cheeked, stick-bearing and (to complete the look) with a length of straw poking out of the mouth. Their dialect (a rather bumptious, heavily accented tongue in which many a syllable may be lost beneath an ‘Aarrr’) can be no less strange and confusing than LSE, and despite being typically associated with the area west of Bristol (excluding, of course, Wales), it is also to be found in East Anglia. Since we have progressed from the days of needing an army of bored young men to till the fields, using combine harvesters and such instead, this tends to be a reasonably well-off group- there are no longer starving farmhands, only farm owners and their families. They tend to drive Land Rovers, and view science with roughly the same suspicion as an oncoming bush fire.

3) The Gap Yah…
The modern public schoolboy. Eton being a touch old-fashioned nowadays, the stereotype will now come from Harrow or Stowe (for whatever reason). Typically long of face, short of hair and severely lacking in both age and experience, these come in two subtly different classes. There is the overbearer- the one whose intense access to the very best that Daddy’s money can buy has left him better than everybody else at practically everything he cares to mention, and who will point this out to you at every opportunity. These may be recognised by the incessant, nagging desire they inspire in others to break their face. The second is the wannabe- the kid who got bullied at Whitgift, who isn’t actually that good at anything but is still richer than you and likes you to know it. They are characterised by always pretending to be of the overbearer class, and endeavouring to be as competent, but always cocking up. Interestingly, failure provides the main distinction between the two classes- whilst a wannabe will just act cool and pretend that you cheated them, an overbearer will simply cut out all the timewasting and begin the vitriolic hatred then and there. Both classes are likely to drink heavily (proper drinks of course- stuff like cider is for plebs and Muggles), travel widely, and hopefully meet their match one of these days soon.

That list was not what you’d call exhaustive, but it’s reasonably accurate from what I’ve experienced. Plus, it was quite nice and relaxing for me.

(If I have in any way offended you or the stereotype you represent over the course of this post, then please feel free to ignore it and laugh at the other ones instead)

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of alternative intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument now being thrown at a steadily wider range of high-level science (cloning, gene analysis and editing, even the synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to ‘play god’ and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has evolved over millions of years, slowly sculpted and built upon by nature to find the optimal solution for self-preservation and general wellbeing- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, to do the next big thing before everybody else, means that nobody is stopping to think about the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it probably hasn’t helped matters that it is typically put around by the Church- never noted for getting on particularly well with scientists (they just won’t let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any of the myriad other sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the biggest other dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? So far, only one person has ever been killed by a robot (during an industrial accident at a car manufacturing plant), but if such an event were ever to occur with a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being ‘switched off’? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument that has been made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind are ever able to equal or even better our mental faculties, then they could one day overthrow us as a race. This is a very real issue when confronting the stereotypical war robot- an invincible metal machine capable of wanton destruction on a par with a medium-sized tank, easily able to repair itself and make more of its own kind. It’s an idea that is reasonably unlikely to ever become real, but it raises another idea- one that is more likely to happen, more likely to build unnoticed, and is far, far more scary. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. When compared to a machine’s ability to churn out hundreds of copies of itself simply from a blueprint and a design, the human reproductive system suddenly looks very fragile and inefficient by comparison. When compared to tough, hard, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly, the organic world doesn’t seem quite so amazing, and certainly more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.

Artificial… what, exactly?

OK, time for part 3 of what I’m pretty sure will finish up as four posts on the subject of artificial intelligence. This time I’m going to branch slightly off-topic- rather than just focusing on AI itself, I am going to look at a fundamental question that the hunt for it raises: the nature of intelligence itself.

We all know that we are intelligent beings, and so the search for AI has always focused on attempting to emulate (or possibly better) the human mind and our human understanding of intelligence. Indeed, when Alan Turing first proposed the Turing test (see Monday’s post for what this entails), he was specifically trying to emulate human conversational and interaction skills. However, as mentioned in my last post, the modern approach to creating intelligence is to let robots learn for themselves, minimising the amount of programming we have to give them and thus coming closer to artificial, rather than programmed, intelligence. This learning process raises an intriguing question- if we let robots learn for themselves entirely from base principles, could they begin to create entirely new forms of intelligence?
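To give a flavour of what ‘learning from base principles’ actually means in practice, here is a toy sketch (entirely my own invented example- the corridor world, rewards and parameters are not from any real project). A tabular Q-learning agent is told nothing about how to behave; it is given only a reward signal, and works out a sensible policy by trial and error:

```python
import random

# Toy world: a corridor of 5 cells, 0..4, with a reward waiting at cell 4.
# The agent starts with no knowledge at all and learns purely from rewards.
N_STATES = 5
ACTIONS = [-1, +1]            # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise act on what has been learned so far
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned policy: at every cell the agent should have discovered "go right"
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing in the code says ‘walk right’- that behaviour emerges from the reward alone, which is the whole point of letting a machine learn rather than programming it.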

It’s an interesting idea, and one that leads us to question what, on a base level, intelligence actually is. When one thinks about it, we begin to realise the vast scope of ideas that ‘intelligence’ covers, and that is speaking merely from the human perspective. From emotional intelligence to sporting intelligence, from creative genius to pure mathematical ability (where computers already excel far beyond any human), intelligence is an almost pointlessly broad term.

And then, of course, we can question exactly what we mean by a form of intelligence. Take bees, for example- on its own, a bee is a fairly useless creature, most likely to just buzz around a little. Not only is it useless, but it is also very, very dumb. A hive, however, where bees are not individuals but a collective, is a very different matter- the coordinated movements of hundreds and thousands of bees can not only build huge nests and turn nectar into the liquid deliciousness that is honey, but can also defend the nest from attack, protect the queen at all costs, and make sure there is always someone to tend the newborns despite the constant activity of the environment surrounding them. Many corporate or otherwise collective structures claim to work similarly, but few are as efficient or versatile as a beehive- and more astonishingly, as a collective bees exhibit a range of intelligent behaviour beyond what an individual could even comprehend. Bees are the archetype of a collective, rather than individual, mind, and nobody is entirely sure how such a structure is able to function as it does.
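This kind of collective cleverness has even inspired algorithms of its own. As a hypothetical sketch (my own illustrative example, not anything bees literally compute), here is a bare-bones particle swarm optimiser: each ‘bee’ follows only two dumb local rules- drift towards the best spot it has personally found, and towards the best spot anyone in the swarm has found- yet the swarm as a whole homes in on the answer to a problem no individual understands:

```python
import random

# Minimal particle swarm: each particle knows only its own best find (pbest)
# and the swarm's best find (gbest), yet the group converges on the minimum.
def swarm_minimise(f, n_particles=30, n_steps=100, lo=-10.0, hi=10.0):
    random.seed(1)
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # each particle's best position so far
    gbest = min(pbest, key=f)           # the swarm's best position so far
    for _ in range(n_steps):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])   # pull of own experience
                      + 1.5 * r2 * (gbest - pos[i]))     # pull of the collective
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

# Minimise (x - 3)^2: the swarm should settle near x = 3
best = swarm_minimise(lambda x: (x - 3) ** 2)
print(round(best, 2))
```

No single particle is intelligent in any meaningful sense- the problem-solving lives entirely in the interaction, which is exactly the puzzle the beehive poses.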

Clearly, then, we cannot hope to pigeonhole or quantify intelligence as a single measurement- people may boast of their IQ scores, but one number cannot hope to represent their intelligence across the full spectrum. Now, consider all these different aspects of intelligence, all the myriad ways in which we can be intelligent (or not). And ask yourself- have we covered all of them?

It’s another compelling idea- that there are forms of intelligence out there that our human bodies and brains simply can’t envisage, let alone experience. What these may be like… well, how the hell should I know? I just said we can’t envisage them. The idea that we won’t be able to understand what they are like even if we do experience them is a tricky one to get past (a similar problem crops up in quantum physics, whose violation of common logic takes some getting used to), and it is a real issue that if we ever do encounter these ‘alien’ forms of intelligence, we may not recognise them for this very reason. If we are able to, however, it could fundamentally change our understanding of the world around us.

And, to drag this post kicking and screaming back on topic, our current development of AI could be a mine of potential for doing exactly that (albeit a mine in which we don’t know what we’re going to find, or whether there is anything to find at all). We all know that computers are fundamentally different from us in a lot of ways, and it is easy to argue that trying to force a computer to be intelligent beyond its typical, logical parameters is a rather stupid task, akin to using a hatchback to tow a lorry. In fact, quite a good way to think of computers or robots is as animals adapted to a different environment from ours- one in which food comes via a plug and information arrives as raw data and numbers… but I am wandering off-topic once again. The point is that, for as long as the hunt for AI has gone on, computers have been our vehicle for attempting to reach it- and only now are we beginning to fully understand that they have the potential to do so much more than just copy our minds. By pushing them onward and onward to the point they have currently reached, we are starting to turn them not into an artificial version of ourselves, but into an entirely new concept, an entirely new, man-made being.

To me, this is an example of true ingenuity and skill on the part of the human race. Copying ourselves is, on a base level, no more inventive than making iPod clones or the like. Inventing a new, artificial species… like it or loathe it, that’s amazing.