Personal History

Our lives today are more tracked, recorded and interconnected than ever before, for good and ill. Our phones can track our every movement, CCTV and other forms of physical recording have reduced our opportunities for privacy whilst out in public and, as the Leveson inquiry showed, modern technology makes it easier and easier for those who want to keep tabs on all our activity. However, the aspect of this I want to discuss today concerns our online presence, something that is increasingly becoming a feature of all our lives.

On this blog, I try to be careful; I don’t mention my name, age or specific location and never put any photos of myself up. I also try, wherever possible, to be careful in other places online too; I don’t put photos on my Facebook page (since photos can be seen by anyone, regardless of whether they are your friend or not), try to keep a hold of my tongue when on forums, and try to operate a ‘look don’t touch’ policy in most other areas. But then again, I’m kinda lucky in that regard; I am not highly sociable, so rarely find myself in the position of having 100 embarrassing photos & videos put up concerning ‘that HILARIOUS thing you were doing last night’, and am not a public figure in any way. Basically, I am able to maintain a reasonable degree of privacy on the web by virtue of the fact that other people are unlikely to… contribute to my online profile.

Others are, of course, not so lucky; either that or they don’t especially care, which is, I suppose, understandable. Sharing information about ourselves is, after all, pretty much exactly what Facebook and the like are for. However, we are frequently told how damaging it is to have such a wealth of information about us so blatantly available online; a quick Google and Facebook search of a candidate is now pretty much standard procedure when it comes to job applications, and even if there aren’t any pictures of you with underwear round the ankles vomiting into a fountain, such searches can build up a negative image of a potential hire. An interviewer (well, a prospective one) might, for instance, take a look at all the pictures showing you hanging round with mates at a club and think you are a habitual drinker and partygoer, neither of which exactly says ‘productive worker who’s always going to be in on time and in top condition’. Even beyond the world of work, there is the potential for serial embarrassment if pictures that were meant to be shared between friends make it out into the big wide world, and there is even the worrying prospect of ‘cyber stalking’, made so easy thanks to the internet, entering your life.

However, perhaps most interesting are those in the public domain, both people and companies, who must somehow manage what totally uncontrollable, and usually unknown, people choose to put online about them. Not only can this be personally hurtful for the individuals concerned, but for many such figures their livelihood is dependent on their reputation. All it takes is a spree of bad press reports for a negative image to tar one’s brand for a long old time, with all of the incalculable lost revenue that comes with that. The internet has a long memory and billions of people to contribute to it, and even a few particularly vociferous bloggers can keep bad words in the Google suggestion bar for a very long time.

This has led, in the last few years, to the rise of a new industry: that of online reputation management. These companies have a simple enough remit: to disassociate their client from negative connotations online wherever possible. Unfortunately, this isn’t a matter of just shutting people up, because this is the internet and that kind of thing never ends well. No, these businesses have to be a mite more subtle. For example, let us imagine, for the sake of implausibility, that Benedict Cumberbatch is linked with a rabbit-murdering syndicate, and although nothing is ever nailed down there are enough damning news bulletins and angry blogs that this thing is going to hang around forever. A reputation management company’s initial job would be to get this off the front page of Google, so they have to create some more content to hide the bad stuff; 94% of Google searches never get off page one. However, they can’t just churn out huge numbers of spam-like articles in the vein of ‘Benedict’s a nice guy! Look, he’s cuddling a kitten! He gives money to nice charities!’, because people are smart enough to tell when that kind of thing is happening. So, a large amount of neutral or neutral-positive stuff is generated; certain sites might be paid, for example, to talk about the next film or theatre project he’s announced to be appearing in. A variety of content is key, because if it’s all just carbon copies of the same statement people will smell a rat. Once the content’s been generated, there comes the matter of getting it circulated. Just writing a program to generate hits artificially isn’t enough on its own; this is where the world of sponsored Facebook links comes in, trying to get people thinking and talking about non-rabbit-murdering stuff. This both discourages more negative content from being generated and, far more effectively, stops the existing stuff from getting traffic.
The job is, however, an extremely slow one; a news story that breaks over the course of a week can take a year or two to fix, depending on the ferocity of one’s opponents.

When the world wide web, or ‘the information super-highway’, as it was also known back then, first came into our world back in the 90s, people had high hopes. We could learn things, share things, discover stuff about one another, foster universal understanding. And, whilst we can now do all these things and more, the internet has become infamous too, scaring corporations and people alike with what billions of interconnected people can make happen. It is a strange place that many try to tame, out of necessity or out of fear. For many, it’s a battle they are doomed to lose.

PS: I feel like I should slightly apologise for not really having anything to say here. I guess I didn’t really think of a conclusion in advance


Hitting the hay

OK, so it was history last time; I’m feeling like a bit of science today. Here is your random question for the day: are the ‘leaps of faith’ in the Assassin’s Creed games survivable?

Between them, the characters of Altair, Ezio and Connor* jump off a wide variety of famous buildings and monuments across the five current games, but the jump that springs most readily to mind is Ezio’s leap from the Campanile di San Marco, in St Mark’s Square, Venice, at the end of Assassin’s Creed II. It’s not the highest jump made, but it is one of the most interesting and it occurs as part of the main story campaign, meaning everyone who’s played the game through will have made the jump and it has some significance attached to it. It’s also a well-known building with plenty of information on it.

[*Interesting fact: apparently, both Altair and Ezio translate as ‘eagle’ in some form in English, as does Connor’s Mohawk name (Ratonhnhaké:ton, according to Wikipedia) and the name of his ship, the Aquila. Connor itself translates as ‘lover of wolves’ from the original Gaelic]

The Campanile as it stands today is not the same one as in Ezio’s day; in 1902 the original building collapsed and took ten years to rebuild. However, the new Campanile was made to be cosmetically (if not quite structurally) identical to the original, so current data should still be accurate. Wikipedia again tells me the brick shaft making up the bulk of the structure accounts for (apparently only) 50m of the tower’s 98.6m total height, with Ezio’s leap (made from the belfry just above) coming in at around 55m. With this information we can calculate Ezio’s total gravitational potential energy lost during his fall; GPE lost = mgΔh, and presuming a 70kg bloke this comes to GPE lost = 37730J (Δ is, by the way, the mathematical way of expressing a change in something- in this case, Δh represents a change in height). If his fall were made with no air resistance, then all this GPE would be converted to kinetic energy, where KE = mv²/2. Solving to make v (his velocity upon hitting the ground) the subject gives v = sqrt(2*KE/m), and replacing KE with our value of the GPE lost, we get v = 32.8m/s. This tells us two things; firstly that the fall should take Ezio at least three seconds, and secondly that, without air resistance, he’d be in rather a lot of trouble.
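
For anyone who wants to check the arithmetic, here are the same sums in a few lines of Python (the 70kg mass and 55m height are the assumptions from above):

```python
import math

m = 70.0      # assumed mass of Ezio, kg
g = 9.8       # gravitational field strength, m/s^2
h = 55.0      # assumed height of the leap, m

gpe = m * g * h               # GPE lost during the fall, J
v = math.sqrt(2 * gpe / m)    # impact speed if all GPE becomes KE
t = math.sqrt(2 * h / g)      # fall time with no air resistance

print(f"GPE lost: {gpe:.0f} J")       # ~37730 J
print(f"Impact speed: {v:.1f} m/s")   # ~32.8 m/s
print(f"Fall time: {t:.1f} s")        # ~3.4 s
```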

We must, of course, factor air resistance into our calculations, and to do so we must first make another assumption: that Ezio reaches terminal velocity before reaching the ground. Whether this statement is valid or not we will find out later. The terminal velocity equation is just a rearranged form of the drag equation: Vt = sqrt(2mg/pACd), where m = Ezio’s mass (70kg, as presumed earlier), g = gravitational field strength (on Earth, 9.8m/s²), p = air density (on a warm Venetian evening at around 15 degrees Celsius, this comes out as 1.225kg/m³), A = the cross-sectional area of Ezio’s falling body (call it 0.85m², presuming he’s around the same size as me) and Cd = his body’s drag coefficient (a number evaluating how well the air flows around his body and clothing, for which I shall pick 1 at complete random). Plugging these numbers into the equation gives a terminal velocity of 36.30m/s, which is an annoying number; because it’s larger than our previous velocity value, calculated without air resistance, of 32.8m/s, this means that Ezio definitely won’t have reached terminal velocity by the time he reaches the bottom of the Campanile, so we’re going to have to look elsewhere for our numbers. Interestingly, the terminal velocity for a falling skydiver, without parachute, is apparently around 54m/s, suggesting that I’ve got numbers that are in roughly the correct ballpark but that could do with some improvement (this is probably thanks to my chosen Cd value; 1 is a very high value, selected to give Ezio the best possible chance of survival, but ho hum).
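
The terminal velocity sum can be checked the same way (again, the 0.85m² area and Cd of 1 are my guesses, not measured values):

```python
import math

m, g = 70.0, 9.8     # mass (kg) and gravitational field strength (m/s^2)
rho = 1.225          # air density at ~15 degrees Celsius, kg/m^3
A = 0.85             # assumed frontal area of a falling body, m^2
Cd = 1.0             # drag coefficient, picked at complete random

v_t = math.sqrt(2 * m * g / (rho * A * Cd))  # rearranged drag equation
print(f"Terminal velocity: {v_t:.2f} m/s")   # ~36.30 m/s
```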

Here, I could attempt to derive an equation for how velocity varies with distance travelled, but such things are complicated, time consuming and do not translate well into being typed out. Instead, I am going to take on blind faith a statement attached to my ‘falling skydiver’ number quoted above; that it takes about 3 seconds to achieve half the skydiver’s terminal velocity. We said that Ezio’s fall from the Campanile would take him at least three seconds (just trust me on that one), and in fact it would probably be closer to four, but no matter; let’s just presume he has jumped off some unidentified building such that it takes him precisely three seconds to hit the ground, at which point his velocity will be taken as 27m/s.
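
Rather than trusting that skydiver statistic entirely blindly, the fall can also be stepped through numerically: the sketch below integrates dv/dt = g − (pACd/2m)v², using the same guessed drag numbers as before, until 55m have been fallen. Reassuringly, it lands within spitting distance of the 27m/s figure used above.

```python
m, g = 70.0, 9.8                 # mass (kg), gravity (m/s^2)
rho, A, Cd = 1.225, 0.85, 1.0    # same assumed drag numbers as before
k = rho * A * Cd / (2 * m)       # lumped drag constant, per metre

v, x, t, dt = 0.0, 0.0, 0.0, 0.001
while x < 55.0:                  # step until 55m have been fallen
    a = g - k * v * v            # net downward acceleration under drag
    v += a * dt
    x += v * dt
    t += dt

print(f"Impact after {t:.2f} s at {v:.1f} m/s")  # ~3.6 s, ~27 m/s
```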

Except he won’t hit the ground; assuming he hits his target, anyway. The Assassin’s Creed universe is littered with indiscriminate piles/carts of hay and flower petals that have been conveniently left around for no obvious reason, and when performing a leap of faith our protagonists always aim for them (the AC wiki tells me that these were in fact programmed into the memories that the games consist of in order to aid navigation, but this doesn’t matter). Let us presume that the hay is 1m deep where Ezio lands, and that the whole hay-and-cart structure is entirely successful in its task, in that it manages to reduce Ezio’s velocity from 27m/s to nought across this 1m distance, without any energy being lost through the hard floor (highly unlikely, but let’s be generous). At 27m/s, the 70kg Ezio has a momentum of 1890kgm/s, all of which must be dissipated through the hay across this 1m distance. This means an impulse of 1890Ns, and thus a force, will act upon him; Impulse = Force × ΔTime. This force will cause him to decelerate. If this deceleration is uniform (it wouldn’t be in real life, but modelling this is tricky business and it will do as an approximation), then his average velocity during his ‘slowing’ period will be 13.5m/s, meaning that the deceleration will take 0.074s. Given that we now know the impulse acting on Ezio and the time for which it acts, we can now work out the force upon him: 1890 / 0.074 ≈ 25500N. This corresponds to a 364.5m/s² deceleration, or around 37g to put it in G-force terms. Given that 5g has been known to break bones in stunt aircraft, I think it’s safe to say that, without quite a lot more hay, Ezio’s not getting up any time soon. So remember; next time you’re thinking of jumping off a tall building, I would recommend a parachute over a haystack.
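
And the final deceleration sums, in the same vein (the uniform-deceleration assumption is baked in via the average-velocity trick):

```python
m = 70.0          # assumed mass, kg
v = 27.0          # impact speed, m/s
d = 1.0           # assumed depth of hay, m

p = m * v                 # momentum to dissipate, kg*m/s
t = d / (v / 2)           # stopping time, assuming uniform deceleration
F = p / t                 # force on Ezio (impulse / time), N
a = F / m                 # deceleration, m/s^2

print(f"Stopping time: {t:.3f} s")        # ~0.074 s
print(f"Force: {F:.0f} N")                # ~25500 N
print(f"Deceleration: {a / 9.8:.0f} g")   # ~37 g
```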

N.B.: The resulting deceleration calculated in the last bit seems a bit massive, suggesting I may have gone wrong somewhere, so if anyone has any better ideas of numbers/equations then feel free to leave them below. I feel here is also an appropriate place to mention a story I once heard concerning an air hostess whose plane blew up. She was thrown free, landed in a tree on the way down… and survived.

EDIT: Since writing this post, this has come into existence, more accurately calculating the drag and final velocity acting on the falling Assassin. They’re more advanced than me, but their conclusion is the same; I like being proved right :).

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything by itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, and is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press have symbols on them, that these symbols mean something to us, or that the machine is built to reproduce those symbols on the monitor when we press them- but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, their words mean absolutely nothing to the machine at the time of typing. The key thing is what happens when their code is committed to memory, for here the language’s translating program kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts words to instructions. Built into each is a list of recognised ‘words’, each having a corresponding, but entirely different, string of binary data associated with it that represents the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This works in one of two ways: with an ‘interpreter’, the program is stored just as words and is converted to ‘machine code’ by the interpreter as it is read from memory each time it runs; but the more common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time.
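
The interpreter/compiler distinction can be sketched with a toy example. This is not how any real language implementation works internally- just an illustration of ‘translate every time you run’ versus ‘translate once up front’, using a made-up three-word language:

```python
# A toy 'language': each word maps to an operation on a running total.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

program = "inc inc dbl neg".split()

# Interpreter: looks up each word afresh every time the program runs.
def interpret(words, x=0):
    for w in words:
        x = OPS[w](x)
    return x

# 'Compiler': translates the words once into a single callable up front.
def compile_program(words):
    funcs = [OPS[w] for w in words]   # translation happens here, once
    def compiled(x=0):
        for f in funcs:               # running it is then pure execution
            x = f(x)
        return x
    return compiled

run = compile_program(program)
print(interpret(program), run())  # both give -4
```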

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets it up, determining exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU performance and translates programs from memory to screen. The precise method by which this latter function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different methods prioritise/are better at dealing with different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital things that modern OSes do on our home computers are the ones that, at first glance, seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not theoretically be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to hard disk (where it isn’t) to free up more space (this is how computers with very little free memory space run programs, and the time taken to do this for large amounts of data is why they run so slowly), and must cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
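
The scheduler’s trick of rapid switching can be mimicked in a few lines. This round-robin sketch (with made-up ‘processes’- real schedulers juggle priorities, I/O waits and much more) shows how interleaving time slices gives the illusion of multitasking:

```python
from collections import deque

# A 'process' here is just a generator that yields after each slice of work.
def process(name, steps):
    for i in range(steps):
        yield f"{name} step {i + 1}"

ready = deque([process("boot", 3), process("minesweeper", 2)])
trace = []

# Round-robin scheduler: give each process one time slice, then move on.
while ready:
    proc = ready.popleft()
    try:
        trace.append(next(proc))   # run one slice of this process
        ready.append(proc)         # send it to the back of the queue
    except StopIteration:
        pass                       # process has finished; drop it

print(trace)  # boot and minesweeper steps come out interleaved
```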

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms) from SOPA to ACTA has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they also provoked a lot of debate, ranging in intelligence from intellectual to average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how stuff came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival, and of securing the next meal. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, this design would also be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and such and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer their neighbours, gaining more power and thus enabling them to take on larger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for a country starts to not necessarily be as good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe does not bear a direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation when it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the middle of the 15th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and available. The impact of the printing press was made all the greater by the social changes of the few hundred years between the Renaissance and today: the establishment of a less feudal and more merit-based social system, with proper professions springing up in place of general peasantry, meant that more people had the money to afford such publishing, preventing use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in by running off copies of your book on their machine. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens who produce material which could not only benefit the nation’s happiness but also potentially have great material use are a valuable resource, and that they should be doing what they can to promote the production of that material, if only to save having to put in the large investment of time and resources themselves. So, it makes sense to encourage the production of this material by ensuring that people have a financial incentive to produce it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or who is responsible for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…

The Encyclopaedia Webbanica

Once again, today’s post will begin with a story- this time, one about a place that was envisaged over a hundred years ago. It was called the Mundaneum.

The Mundaneum today is a tiny museum in the city of Mons, Belgium, which opened in its current form in 1998. It is a far cry from the original, first conceptualised by Nobel Peace Prize winner Henri La Fontaine and fellow lawyer and pioneer Paul Otlet in 1895. The two men, Otlet in particular, had a vision- to create a place where every single piece of knowledge in the world was housed. Absolutely all of it.

Even in the 19th century, when the breadth of scientific knowledge was a million times smaller than it is today (a 19th-century version of New Scientist would be publishable about once a year), this was a truly gigantic undertaking from a practical perspective. Not only did Otlet and La Fontaine attempt to collect a copy of just about every book ever written in search of information, they went further than any conventional library of the time by also looking through pamphlets, photographs, magazines and posters in search of data. The entire thing was stored on small 3×5 index cards kept in a carefully organised and detailed system of files, and this paper database eventually grew to contain over 12 million entries. People would send letters or telegraphs to the government-funded Mundaneum (the name referencing the French monde, meaning world, rather than mundane as in boring), whose staff would search through the files in order to give a response to just about any question that could be asked.

However, the most interesting thing of all about Otlet’s operation, quite apart from the sheer conceptual genius of a man who was light-years ahead of his time, was his response to the problems posed when the enterprise got too big for its boots. After a while, the sheer volume of information and, more importantly, paper, meant that the filing system was getting too big to be practical for the real world. Otlet realised that this was not a problem that could ever be resolved by more space or manpower- the problem lay in the use of paper. And this was where Otlet pulled his masterstroke of foresight.

Otlet envisaged a version of the Mundaneum where the whole paper and telegraph business would be unnecessary- instead, he foresaw a “mechanical, collective brain”, through which people of the world could access all the information the world had to offer stored within it via a system of “electric microscopes”. Not only that, but he envisaged the potential for these ‘microscopes’ to connect to one another, letting people “participate, applaud, give ovations, [or] sing in the chorus”. Basically, a pre-war Belgian lawyer predicted the internet (and, in the latter statement, social networking too).

Otlet has never been included in the pantheon of web pioneers- he died in 1944 after his beloved Mundaneum had been occupied and used to house a Nazi art collection, and his vision of the web as more of an information storage tool for nerdy types is hardly what we have today. But, to me, his vision of a web as a hub for sharing information and a man-made font of all knowledge is realised, at least in part, by one huge and desperately appealing corner of the web today: Wikipedia.

If you take a step back and look at Wikipedia as a whole, its enormous success and popularity can be quite hard to understand. Beginning from a practical perspective, it is a notoriously difficult site to work with- whilst accessing the information is very user-friendly, the editing process can be hideously confusing and difficult, especially for the not very computer-literate (seriously, try it). My own personal attempts at article-editing have almost always resulted in failure, bar some very small changes and additions to existing text (where I don’t have to deal with the formatting). This difficulty in formatting is a large contributor to another issue- Wikipedia articles are incredibly text-heavy, usually with only a few pictures and captions, which would be a major turn-off in a magazine or book. The very concept of an encyclopaedia edited and made by the masses, rather than by a select team of experts, also (initially) seems incredibly foolhardy. Literally anyone can type in just about anything they want, leaving the site incredibly prone to either vandalism or accidental misdirection (see xkcd.com/978/ for Randall Munroe’s take on how it can get things wrong). The site has come under heavy criticism over the years for this fact, particularly on its pages about people (Dan Carter, the New Zealand fly-half, has apparently considered taking up stamp collecting, after hundreds of fans sent him stamps on the basis of a Wikipedia entry stating that he was a philatelist), and letting normal people edit it also leaves it prone to bias, despite the best efforts of Wikipedia’s team of writers and editors (personally, I think that the site keeps its editing software deliberately difficult to use to minimise the number of people who can use it easily, and so tries to minimise this problem).

But, all that aside… Wikipedia is truly wonderful- it epitomises all that is good about the web. It is a free-to-use service, run by a not-for-profit organisation that is devoid of advertising and is funded solely by the people of the web whom it serves. It is the font of all knowledge to an entire generation of students and schoolchildren, and is the number one place to go for anyone looking for an answer about anything- or who’s just interested in something and would like to learn more. It is built on the principles of everyone sharing and contributing- even flaws or areas lacking citation are flagged by casual users if they slip past the editors the first time around. Its success is built upon its size, both big and small- the sheer quantity of articles (there are now almost four million, most of which are a bit bigger than would have fitted on one of Otlet’s index cards) means that it can be relied upon for just about any query (and will be at the top of 80% of my Google searches), but its small server space and staff size (fewer than 50,000, most of whom are volunteers- the Wikimedia Foundation employs fewer than 150 people) keep running costs low and allow it to keep on functioning despite its user-sourced funding model. Wikipedia is currently the 6th (ish) most visited website in the world, with 12 billion page views a month. And all this from an entirely not-for-profit organisation designed to let people know facts.

Nowadays, the Mundaneum is a small museum, a monument to a noble but ultimately flawed experiment. Its original offices in Brussels were left empty, gathering dust after the war, until a graduate student discovered them and eventually provoked enough interest to have the old collection moved to Mons, where it currently resides, a shadow of its former glory. But its spirit lives on in the collective brain that its founder envisaged. God bless you, Wikipedia- long may you continue.

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument about its nature- today I am going to take a more practical perspective, and have a go at scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be fed into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into a computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (the gunnery tables that were among the US military’s first uses for them) and logical games like chess.
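As a concrete (and very idealised) illustration of the ‘rules and numbers’ point, here is a toy gunnery table in a few lines of Python- the range of a drag-free projectile at a handful of elevation angles. The muzzle velocity is an arbitrary assumption of mine, and real ballistics tables account for far more (air resistance, altitude, shell weight…)- this is just the kind of rule-based calculation computers eat for breakfast:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
VELOCITY = 300.0  # assumed muzzle velocity, m/s (purely illustrative)

def ideal_range(angle_deg, velocity=VELOCITY):
    """Range of a drag-free projectile fired at the given elevation angle."""
    angle = math.radians(angle_deg)
    return velocity ** 2 * math.sin(2 * angle) / G

# Print a miniature range table, one line per elevation angle
for angle in (15, 30, 45, 60):
    print(f"{angle:2d} deg -> {ideal_range(angle) / 1000:.2f} km")
```

Note how the whole problem collapses into one formula- no judgement, no context, just arithmetic repeated very quickly, which is exactly the computer’s home turf.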

However, where a computer develops problems is at the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer is confined exclusively to the virtual world- the processor within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment in which there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer, however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even the same kind of object. Because the pixels are in a different order, and as such the data is different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just seen from different angles.
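A tiny sketch of the problem (entirely hypothetical data, using one-dimensional ‘images’ for simplicity): compared byte-for-byte, which is all a raw data comparison can do, the same shape shifted by a single pixel looks like completely different information:

```python
def pixels_equal(a, b):
    """Naive bit-by-bit comparison- the only thing raw data supports."""
    return a == b

# A 1-D 'image': a bright bar (1s) on a dark background (0s)...
original = [0, 0, 1, 1, 1, 0, 0, 0]
shifted  = [0, 0, 0, 1, 1, 1, 0, 0]  # ...and the same bar, one pixel over

print(pixels_equal(original, original))  # True: identical data
print(pixels_equal(original, shifted))   # False: same shape, different bytes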

There are two potential solutions to this problem. Firstly, the computer could look at the monitor and store an image of it from every conceivable angle against every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would, however, take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming- by training the computer to spot patterns of pixels that look roughly similar (shifted along by a few bytes, or missing a few here and there), it can be ‘trained’ to pick out basic shapes, and by using an algorithm to pick out sharp changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and the objects themselves picked out. I am not by any stretch of the imagination an expert in this field, so won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
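The ‘changes in colour’ trick can be sketched in a few lines of Python- a hypothetical toy of my own, nothing like a production vision system: scan along a row of brightness values and mark wherever neighbouring pixels differ sharply, as those positions are candidate edges:

```python
def find_edges(row, threshold=0.5):
    """Return indices where adjacent pixel values jump sharply-
    a crude sketch of edge detection by change in colour."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

# One row of brightness values: a bright object on a dark background
row = [0.0, 0.0, 0.1, 0.9, 1.0, 1.0, 0.2, 0.1]
print(find_edges(row))  # indices where the object begins and ends
```

Real edge detectors work in two dimensions and are far more sophisticated about noise, but the underlying idea- a big local change in value probably marks a boundary- is the same.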

But all that information inputting, all that work… so your computer can identify just a monitor? What about the myriad other things our brains recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This presented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers faced a monumental challenge in coming anywhere close to the human mind (let alone its body) in the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but the rest comes down to learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then to apply that new-found information through our own bodies. Not only can we do this, but we can do it remarkably quickly- it is one of the main traits that has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into a robot’s design a capacity for learning. The latest generation of the Japanese ‘Asimo’ robots can learn what various objects presented to them are, and are then able to recognise them when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just as babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way as to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to each other, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently this covers just actions, but who knows what lies around the corner…