“Have you ever thought that nostalgia isn’t what it used to be…”

Human beings love nostalgia, perhaps strangely. For all the success of various self-help gurus and such telling us to ‘live in the moment’, there are few things more satisfying than sitting back and letting the memories flow over us, with rose-tinted spectacles all set up and in position. Looking back on our past may conjure up feelings of longing, of contentment, of pride or even resentment of the modern day when considering ‘the good old days’, but nobody can doubt how comforting the experience often is.

The real strangeness of nostalgia comes from how irrational it is; when analysing the facts of a given time period, whether in one’s own life or in a historical sense, it is hard not to come to the conclusion that the past is usually as bad as the present day, for some different and many of the same reasons. The older generations have, for example, always thought that current chart music (for any time period’s definition of ‘current’) is not as good as when they were teenagers, that their younger peers have less respect than they should, and that culture is on a downward spiral into chaos and mayhem that will surely begin within the next couple of years. Or at least so the big book of English middle-class stereotypes tells me. The point is that the idea that the modern day is worse than those that have gone before is a perennial one, and since at no point in history have we ever been rolling in wealth, freedom, happiness and general prosperity, it is a fairly simple process to conclude that things have not, in fact, actually been getting worse. At the very least, whilst in certain areas the world probably is worse than it was, say, 30 years ago (the USA’s relationship with the Middle East, the drugs trade, the number of One Direction fans on planet Earth and so on), from other standpoints it could be said that our world is getting continually better; consider the scientific and technological advancements of the last two decades, or the increasing acceptance the world seems to have for certain sections of its society (the LGBT community and certain racial minorities spring to mind). Basically, the idea that everything was somehow genuinely better in the past is an irrational one, and thus nostalgia is a rather irrational feeling.

What, then, is the cause of nostalgia; why do we find it so comforting, and why is it so common to yearn for ‘good old days’ that, often, never truly were?

Part of the answer may lie in the nature of childhood, the period most commonly associated with nostalgia. Childhood in humans is an immensely interesting topic; no other animal enjoys a period of childhood lasting around a quarter of its total lifespan (indeed, if humans today lived as long as they did in the distant past, around half their life would be spent in the stage we nowadays identify as childhood), and the reasons for this could (and probably will one day) make up an entire post of their own. There is still a vast amount we do not know about how our bodies, particularly in terms of the brain, develop during this period of our lives, but what we can say with some certainty is that our perception of the world as a child is fundamentally different from our perception as adults. Whether it be the experience we do not yet have, the relative innocence of childhood, some deep neurological effect we do not yet know about or simply a lack of care for the outside world, the world as experienced by a child is generally a small, simple one. Children, more so the younger we are but to a lesser extent continuing through into the teenage years, tend to be wrapped up in their own little world; what Timmy did in the toilets at school today is, quite simply, the biggest event in human history to date. What the current prime minister is doing to the economy, how the bills are going to get paid this month, the ups and downs of marriages and relationships; none matter to a childhood mind, and with hindsight we are well aware of it. There is a reason behind the oft-stated (as well as slightly depressing and possibly wrong) statement that ‘schooldays are the best of your life’. 
As adults we forget that, as kids, we did have worries, that there was horrible stuff in the world, and that we were often unhappy; it’s just that, because childhood worries are so different and ignore so many of the big things that would have troubled us were we adults at the time, we tend to regard them as trivial, with the benefit of that wonderful thing that is hindsight.

However, this doesn’t account so well for the nostalgia that hits when we enter our teenage years and later life; for stuff like music, for example, which is unlikely to have registered in our pre-teen days. To explain this, we must consider the other half of the nostalgia explanation: the simple question of perception. It is an interesting fact that some 70-80% of people consider themselves to be above-average drivers, and it’s not hard to see why; we may see a few hundred cars on our commute into work or school, but will only ever remember that one bastard who cut us up at the lights. Even though it represents a tiny proportion of all the driving we ever see, bad driving is still a common enough occurrence that we feel the majority of drivers must pull such stupid antics on a regular basis, and that we are better drivers than said majority.

And the same applies to nostalgia. Many things will have happened to us during our younger days; we will hear some good music, and ignore a lot of crap music. We will have plenty of dull, normal schooldays, and a couple that are absolutely spectacular (along with a few terrible ones). And we will encounter many aspects of the world, be they news stories, encounters with people or any of the other pieces of random ‘stuff’ that makes up our day-to-day lives, that will either feel totally neutral to us, make us feel a little bit happy or make us slightly annoyed, exactly the same stuff that can sometimes make us feel like our current existence is a bit crappy. But all we will ever remember are the extremes; the stuff that filled us with joy, and the darkest and most memorable of horrors. And so, when we look back on our younger days, we smile sadly to ourselves as we remember those good times. All the little niggly bad things, all the dull moments, they don’t feature on our internal viewfinder. In our head, there really were ‘good old days’. Our head is, however, not a terribly reliable source when it comes to such things.

Plato’s Cave

Everyone’s heard of Plato, to some extent anyway; ‘Greek bloke, lived quite a while ago, had a beard’ is probably the limit of what could be considered universal knowledge. This he most certainly was, but what made him famous was his work, for Plato was taught by Socrates and was one of the finest philosophers and thinkers to grace human history. His greatest work was ‘The Republic’, a ten book piece exploring the nature of justice and government through a series of imagined conversations, hypothetical situations, metaphors and allegories.  One of these allegories has become especially linked to Plato’s name, which is somewhat surprising given how little the actual allegory is known to the world in general, so I thought I might explore it today; the allegory of the cave.

Plato believed in a separate level of reality, more fundamental than the physical world we encounter and interact with using our body and senses, that he called The Forms. To summarise briefly, a Form is the philosophical essence of an object; in the real world, a shelf is three bits of wood and some nails all joined together, but its Form is, for example, the ability to store some books within easy reach. Without the essence of shelf-ness, the shelf literally is nothing more than some wood, and ceases, on a fundamental level, to be a shelf at all. Similarly, when we turn a piece of plastic into a toy, we have fundamentally changed the Form of that plastic, even though the material is exactly the same.

Plato based most of his philosophical work around his Theory of Forms, and took the concept to great extremes; to him, the sole objective scale against which to measure intelligence was one’s ability to grasp the concept of the Form of something, and he also held that understanding the Form of a situation was the key to its correct management. However, he found his opinions on Forms hard to communicate to many people (and it can’t have helped that he was born to a rich family, where he was given plenty of opportunity to be intelligent, whilst many of the poor were uneducated), and some considered him to be talking rubbish, and so he came up with the allegory of the cave to explain what he was on about.

Imagine a large group of prisoners, chained to the wall of a cave for some unspecified reason. They are fixed in position, unable to move at all, and their necks are also fixed in position so they cannot look around. Worst of all, however, they have absolutely no memory of the world or how anything in it works; in many ways, their minds are like that of a newborn infant trying to grasp the concept of the world around him. Everything they are to know must be learnt from experience and experimentation. But in front of them, they can see nothing but bare rock.

However, there are a few features of this cave that make it interesting. It is very deep and comprises multiple levels, with the prisoners at the bottom. On the level above the prisoners, and directly behind them, is an enormous fire, stoked and fed day and night (although being at the bottom of a cave, the prisoners don’t have any concept of day and night), brightly illuminating the wall that the prisoners see. Also on the level above, but in front of the fire, is a walkway, across which people walk along with their children, animals and whatever items they happen to be carrying. As they cross in front of the fire, their shadows are cast onto the wall the prisoners can see, and the sounds they make echo down to the prisoners too. Over time (and we’re presuming years here) the prisoners get used to the shadows they see on the wall in front of them; they learn to recognise the minute details of the shadows, to differentiate and identify them. They learn to call one figure a man, another a woman, and call others cat, dog, box, pot or whatever. They learn that sometimes it gets cold, and then hot again some time later, before reverting back to cold (thanks to the seasons). And then, they begin to make connections between the echoes they hear and the shadows. They learn that man shadows and woman shadows talk differently from one another and from dog shadows, and that basket shadows make hardly any noise.

Now remember, we’re presuming here that the prisoners have no memory/knowledge of the ‘real world’, so the shadows become, to them, a reality. They think it is the shadows of a dog that make the barking sound, and that when the shadow of a clay pot is dropped and breaks, then it is the shadow that has broken. Winter and summer are not caused by anything, they merely happen. What is to us merely an image of reality becomes their reality.

Now, Plato has us imagine we take one of our prisoners away; free him, show him the real world. As he says, if we suppose “that the man was compelled to look at the fire: wouldn’t he be struck blind and try to turn his gaze back toward the shadows, as toward what he can see clearly and hold to be real?” Wouldn’t he be simultaneously amazed and terrified by the world he found around him, to see a fully-fledged person causing the shadow he had once thought of as a fundamental reality? Perhaps he would be totally unable to even see, much less comprehend, this strange, horrifying new world, unable to recognise it as real.

However, humans are nothing if not adaptable creatures, and after some time ‘up top’ our freed prisoner would surely grow accustomed to his surroundings. He would see a person, rather than their shadow, think of putting something in a box, rather than seeing a black square on a wall, and would eventually feel confident enough to venture out of the cave, look at and comprehend the sun, and eventually even recognise it as “source of the seasons and the years, and is the steward of all things in the visible place, and is in a certain way the cause of all those things he and his companions had been seeing”. (Plato often used the sun as a metaphor for enlightenment or illumination from knowledge, so here it represents the prisoner’s final understanding of the nature of reality).

Now, our prisoner could be said to be educated in the ways of the world, and after a time he would surely think back to those long days he spent chained to that wall. He would think of his fellow prisoners, how piteous their lives and their recognition of reality were when compared to his own, and how much he could teach them to aid their understanding and make them happier. “And wouldn’t he disdain whatever honours, praises, and prizes were awarded there to the ones who guessed best which shadows followed which?”. So, Plato has our man return to his cave, to his old spot, and try to teach his fellow prisoners what reality really is.

And it is here where Plato’s analogy gets really interesting; for, rather than accepting this knowledge, the fellow prisoners would be far more likely to reject it. What are these colour things? What do you mean, stuff goes ‘inside’ other things- there are only two dimensions. What is this big fiery ball in this ‘sky’ thing? And, after all, why should they listen to him; after so long away, he’s going to be pretty bad at the whole ‘guessing what each shadow is’ business, so they would probably think him stupid; insane, even, going on about all these concepts that are, to the prisoners, quite obviously not real. He would be unable to educate them without showing them what he means, because he can’t express his thoughts in terms of the shadows they see in front of them. If anything, his presence would only scare them, convince them that this strange ‘other world’ he talks about is but a feat of madness causing one’s eyes to become corrupted, scaring them away from attempting to access anything beyond their limited view of ‘shadow-reality’. As Plato says, “if they were somehow able to get their hands on and kill the man who attempts to release and lead them up, wouldn’t they kill him?”

To Plato, the world of his Forms was akin to the real world; true, enlightened, the root cause of the physical reality we see and encounter. And the real, material world; that was the shadows, mere imprints of The Forms that we experienced as physical phenomena. We as people have the ability to, unlike the prisoners, elevate ourselves beyond the physical world and try to understand the philosophical world, the level of reality where we can comprehend what causes things, what things mean, what their consequences are; where we can explore with an analytical mind and understand our world better on a fundamental level. Or, we can choose not to, and stay looking at shadows and dismissing those willing to think higher.

We Will Remember Them

Four days ago (this post was intended for Monday, when it would have been yesterday, but I was out then- sorry) was Remembrance Sunday; I’m sure you were all aware of that. On Sunday we acknowledged the dead, recognised the sacrifice they made in service of their country, and reflected upon the tragic horrors that war inflicted upon them and our nations. We gave our thanks to those who could tell us that “for your tomorrow, we gave our today”.

However, as the greatest wars ever to rack our planet have slipped towards the edge of living memory, a few dissenting voices have risen about the place of the 11th of November as a day of national mourning and remembrance. They are not loud complaints, since anything that may be seen as an attempt to sully the memories of those who ‘laid so costly a sacrifice upon the altar of freedom’ (to quote Saving Private Ryan, quoting Lincoln) is unsurprisingly lambasted and vilified by the majority, but it would be wrong not to recognise that there are some who question the very idea of Remembrance Sunday in its modern incarnation.

‘Remembrance Sunday,’ so goes the argument, ‘is very much centred around the memories of those who died: recognising their act of sacrifice and championing the idea that “they died for us”.’ This may partly explain why the Church has such strong links with the ceremony; quite apart from religion being approximately 68% about death, the whole concept of sacrificing oneself for the good of others is a direct parallel to the story of Jesus Christ. ‘However,’ continues the argument, ‘the wars that we of the old Allied Powers chiefly celebrate and remember are ones which we won; had we lost them, to argue that those soldiers gave their lives in defence of their realm would make it seem like their sacrifice was wasted- thus, this style of remembrance is not exactly fair. Furthermore, by putting our symbolic day of remembrance on the anniversary of the end of the First World War, we invariably make that conflict (and WWII) our main focus of interest. But it is widely acknowledged that WWI was a horrific, stupid war, in which millions died for next to no material gain and which is generally regarded as a terrible waste of life. We weren’t fighting for freedom against some oppressive power, but because all the European top brass were squaring up to one another in a giant political pissing contest, making the death of 20 million people the result of little more than a game of satisfying egos. This was not a war in which “they died for us” is exactly an appropriate sentiment.’

Such an argument is a remarkably good one, and does call into question the very act of remembrance itself. It is rather harder to make such an argument about more recent wars- the Second World War was a necessary conflict if ever there was one, and it cannot be said that those soldiers currently fighting in Afghanistan are not trying to make a deeply unstable and rather undemocratic part of the world a better place to live in (I said trying). However, this doesn’t change the plain and simple truth that war is a horrible, unpleasant activity that we ought to be trying to get rid of wherever humanly possible, and remembering soldiers from years gone by as if their going to die in a muddy trench was absolutely the most good and right thing to do does not seem like the best way of going about this- it reminds me of, in the words of Wilfred Owen, “the old Lie: Dulce et decorum est / Pro patria mori”.

However, that is not to say that we should not remember the deaths and sacrifices of those dead soldiers, far from it. Not only would it be hideously insensitive to both their memories and families (my family was fortunate enough to not experience any war casualties in the 20th century), but it would also suggest to soldiers currently fighting that their fight is meaningless- something they are definitely not going to take well, which would be rather inadvisable since they have all the guns and explosives. War might be a terrible thing, but that is not to say that it doesn’t take guts and bravery to face the guns and fight for what you believe in (or, alternatively, what your country makes you believe in). As deaths go, it is at least honourable, if not exactly Dulce Et Decorum.

And then, of course, there is the whole point of remembrance, and indeed history itself, to remember. The old adage about ‘study history or else find yourself repeating it’ still holds true, and by learning lessons from the past we stand a far better chance of avoiding our previous mistakes. Without the great social levelling and anti-imperialist effects of the First World War, women may never have got the vote, jingoistic ideas about empire and the glory of dying in battle may still abound, America may (for good or ill) not have made enough money out of the war to become the economic superpower it is today, and wars may, for many years more, have continued to waste lives through persistent use of outdated tactics on a modern battlefield with modern weaponry- to name but the first examples to come into my head. To ignore the act of remembrance, then, is not just disrespectful, but downright foolish.

Perhaps then, the message to learn is not to ignore the sacrifice that those soldiers have made over the years, but rather to remember what they died to teach us. We can argue for all of eternity as to whether the wars that lead to their deaths were ever justified, but we can all agree that the concept of war itself is a wrong one, and that the death and pain it causes are the best reasons to pursue peace wherever we can. This then, should perhaps be the true message of Remembrance Sunday; that over the years, millions upon millions of soldiers have dyed the earth red with their blood, so that we might one day learn the lessons that enable us to enjoy a world in which they no longer have to.

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself, but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press on our keyboard have symbols on them, that these symbols mean something to us, or that the machine is so built as to produce the same symbols on the monitor when we press them- but we humans do, and that makes computers actually usable for 99.99% of the world population. When a programmer brings up an appropriate program and starts typing instructions into it, their words mean, at the time of typing, absolutely nothing to the machine. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts words to instructions. Built into each is a list of recognised ‘words’, each with a corresponding (but entirely different) string of binary data associated with it, representing the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This works in one of two ways: with an ‘interpreter’, the program is stored just as words and is converted to ‘machine code’ piece by piece as it is accessed from memory; the more common approach, however, is to use a compiler. Here, once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time while ‘it’s compiling’.
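As a very rough sketch of that difference (in Python, using an invented two-word mini-language; real interpreters and compilers are enormously more sophisticated than this):

```python
# Toy program in our made-up language: push 2, push 3, add them.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]

# The language's built-in list of 'words' and their numeric codes.
OPCODES = {"PUSH": 0, "ADD": 1}

def run_machine_code(code):
    """Execute numeric (opcode, argument) pairs - the 'machine code' level."""
    stack = []
    for opcode, arg in code:
        if opcode == 0:              # PUSH: put a value on the stack
            stack.append(arg)
        elif opcode == 1:            # ADD: combine the top two values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def interpret(source):
    """Interpreter: look each word up as it is reached, on every run."""
    return run_machine_code((OPCODES[word], arg) for word, arg in source)

def compile_source(source):
    """Compiler: translate the whole program once, up front."""
    return [(OPCODES[word], arg) for word, arg in source]

compiled = compile_source(program)   # translation happens a single time
print(interpret(program))            # 5
print(run_machine_code(compiled))    # 5 - the written source is no longer needed
```

The compiled version pays the word-to-opcode translation cost exactly once, which is (in grossly simplified terms) why compiled programs tend to run faster.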

That, basically, is how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided both nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that will set it up to determine exactly which gate to put it through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU’s performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise (or are better at dealing with) different things, work with varying degrees of efficiency, and are more or less vulnerable to virus attack. However, perhaps the most vital stuff that a modern OS does on our home computers is the stuff that, at first glance, seems secondary: moving things around and scheduling. A CPU cannot process more than one task at once, meaning that it should theoretically be impossible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run on, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to the hard disk (where it isn’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly). It must also cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
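The scheduler’s trick of rapid switching can be sketched in a few lines of Python (the process names and workloads here are invented, and real schedulers juggle priorities, interrupts and much else besides):

```python
from collections import deque

def round_robin(processes, slice_units=1):
    """Round-robin scheduling sketch: give each process a small time
    slice in turn until all of them have finished their work.
    processes: dict mapping process name -> units of work remaining."""
    ready = deque(processes.items())   # the queue of runnable processes
    timeline = []                      # which process got each time slice
    while ready:
        name, remaining = ready.popleft()
        remaining -= min(slice_units, remaining)
        timeline.append(name)          # this slice of CPU time goes to `name`
        if remaining > 0:
            ready.append((name, remaining))  # not done: back of the queue
    return timeline

# Three 'programs' needing 2, 3 and 1 units of work respectively.
order = round_robin({"minesweeper": 2, "boot": 3, "antivirus": 1})
print(order)
# ['minesweeper', 'boot', 'antivirus', 'minesweeper', 'boot', 'boot']
```

Switch between slices quickly enough and, from the user’s chair, all three appear to run at once.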

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit that is not assembled from multiple separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC they are all stuck in the same place and all built as one. The main advantage of this is that since all the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required of their manufacture surely makes it one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which contains one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, the circuit is represented as a 1; if it is switched off, a 0. These memory circuits are generally grouped together, so each group consists of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. (On a hard disk, patches of magnetically charged material represent binary information rather than memory circuits.)

Each little memory circuit, with its simple on/off value, represents one bit of information. Eight bits grouped together form a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
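A quick Python illustration of this point: the very same byte-sized pattern of switches can stand for a number, a letter or part of a colour, depending entirely on how the computer is told to interpret it.

```python
# One byte - eight on/off switches - means nothing by itself; it is
# the interpretation that gives it meaning.
pattern = 0b01000001           # the raw switch pattern: 01000001

print(bin(pattern))            # 0b1000001  - the switches themselves
print(pattern)                 # 65         - read as a number
print(chr(pattern))            # A          - read as a character (ASCII)
print((pattern, 0, 0))         # (65, 0, 0) - read as an RGB colour: a dark red
```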

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. A gate takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into an output value. There are three basic types: AND gates (if both inputs equal 1, output equals 1; otherwise, output equals 0), OR gates (if either input equals 1, output equals 1; if both inputs equal 0, output equals 0), and NOT gates (if the input equals 1, output equals 0, and vice versa). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and; output equals 0 only if both inputs equal 1) or XOR (exclusive OR; output equals 1 if exactly one input equals 1, and 0 otherwise) gates. A computer’s CPU (central processing unit) will contain many millions of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions in memory into what function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
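Those gate rules are easy enough to sketch as little functions on 0/1 values (an illustration in Python, of course; real gates are transistor circuits), including how NAND and XOR can be wired up from the basic three:

```python
# The three basic gates from the text, as functions on 0/1 values.
def AND(a, b): return a & b          # 1 only if both inputs are 1
def OR(a, b):  return a | b          # 1 if either input is 1
def NOT(a):    return 1 - a          # flips the input

# Compound gates built by combining the basic ones.
def NAND(a, b): return NOT(AND(a, b))             # 0 only if both are 1
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))  # 1 if exactly one is 1

# Print the truth table for all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```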

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the arithmetic logic unit (ALU), the computer’s counting department, to ask for the numerical values of the old shades plus the highlighting, to give it the new shades of colour. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and declares that areas A, B and C of the screen are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
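To give a flavour of how gates become the ALU’s counting machinery, here is a half-adder, the smallest building block of binary addition, sketched from just an XOR gate and an AND gate (a simplified illustration, not a description of any real chip’s layout):

```python
# A half-adder adds two one-bit numbers: XOR produces the sum bit,
# AND produces the carry bit. Chaining these into 'full adders' is
# how an ALU adds whole bytes together.
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two bits, returning (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(0, 0))   # (0, 0)  since 0 + 0 = 0
print(half_adder(1, 0))   # (1, 0)  since 1 + 0 = 1
print(half_adder(1, 1))   # (0, 1)  since 1 + 1 = 10 in binary
```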

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take on a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be inputted into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into a computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (calculating gunnery tables was one of the US military’s first uses for them) and logical games like chess.

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of objects. Because the pixels are in a different order, and as such the data’s different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just from different angles.
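This bit-by-bit blindness is easy to demonstrate- here, purely as an illustration, are two tiny ‘photos’ of the same shape, one shifted right by a single pixel:

```python
# Two black-and-white 'pictures' of the same shape: a 2x2 block of lit
# pixels, shifted one pixel to the right in the second picture.
picture_a = [[0, 1, 1, 0],
             [0, 1, 1, 0]]
picture_b = [[0, 0, 1, 1],
             [0, 0, 1, 1]]

# A human instantly sees the same shape twice; compared bit by bit,
# the data is simply different, so a naive computer sees two objects:
print(picture_a == picture_b)  # False
# ...even though both pictures contain exactly the same number of lit pixels:
print(sum(map(sum, picture_a)), sum(map(sum, picture_b)))  # 4 4
```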

There are two potential solutions to this problem. Firstly, the computer could look at the monitor and store an image of it from every conceivable angle against every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would however take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming: the computer can be ‘trained’ to spot patterns of pixels that look roughly similar (shifted along by a few bytes, say, or missing a few here and there) and so pick out basic shapes- and by using an algorithm to pick out changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and the objects themselves separated out. I am not by any stretch of the imagination an expert in this field so won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
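The ‘spot a change in colour’ trick can be sketched very simply: mark an edge wherever neighbouring pixels differ by more than some threshold. (Real edge detectors, such as the Sobel or Canny algorithms, are considerably more sophisticated- this is just the core idea.)

```python
# Minimal edge detection on one row of greyscale pixel values (0-255):
# an 'edge' lies between any two neighbours whose brightness differs
# by more than the threshold.
def find_edges(row, threshold=50):
    edges = []
    for i in range(len(row) - 1):
        if abs(row[i] - row[i + 1]) > threshold:
            edges.append(i)  # an object boundary sits between pixel i and i+1
    return edges

# Dark background, a bright object in the middle, dark background again:
row = [10, 12, 11, 200, 205, 198, 9, 10]
print(find_edges(row))  # boundaries either side of the bright object: [2, 5]
```

From those boundary positions, the region between them can be treated as one separate object- the first small step towards ‘seeing’ a picture as shapes rather than raw pixels.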

But all that information inputting, all that work… so your computer can identify just a monitor? What about all the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This idea presented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers were presented with a monumental challenge when faced with the prospect of trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think of exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into a robot’s design a capacity for learning. The latest generation of the Japanese ‘ASIMO’ robots can learn what various objects presented to them are, and are then able to recognise them when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way that allows them to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently this covers just actions, but who knows what lies around the corner…