One Year On

A year is a long time.

On the 16th of December last year, I was on Facebook. Nothing unusual about this (I spent and indeed, to a slightly lesser extent, still spend rather too much time with that little blue f in the top corner of my screen), especially given that it was the run-up to Christmas and I was bored, and nor was there anything unusual about the precise bit of Facebook I was looking at- an argument. Such things are common in the weird world of social networking, although they surely shouldn’t be, and this was just another such time. Three or four people were posting long, eloquent, semi-researched and furiously defended messages over some point of ethics, politics or internet piracy, I know not which (it was probably one of those anyway, since that’s what most of the arguments seem to be about among my friends list). Unfortunately, one of those people was me, and I was losing. Well, I say losing; I don’t think anybody could be said to be winning, but I was getting angry and upset all the same, made worse by the realisation that what I was doing was a COMPLETE WASTE OF TIME. I am not in any position whereby my Views are going to have a massive impact on the lives of everyone else, nobody wants to hear what they are, and there was no way in hell that I was going to convince anyone that my opinion was more ‘right’ than their strongly-held conviction- all I and my fellow arguers were achieving was getting very, very angry at one another, actively making us all more miserable. We could pretend that we were debating an important issue, but in reality we were just another group of people screaming at one another via the interwebs.

A little under a week later, the night after the winter solstice (22nd of December, which you should notice was exactly 366 days ago), I was again to be found watching an argument unfold on Facebook. Thankfully this time I was not participating, merely looking on with horror as another group of four or five people made their evening miserable by pretending they could convince others that they were ‘wrong’. The provocativeness of the original post, spouting one set of Views as gospel truth over the web, the self-righteousness of the responses and the steadily increasing vitriol of the resulting argument all struck me as a terrible waste of some wonderful brains. Those participating I knew to be good people, smart people, capable of using their brains for, if not the betterment of the world around them, then perhaps a degree of self-betterment or at the very least something that was not making the world a more unhappy place. The moment was not a happy one.

However, one of the benefits of not competing in such an argument is that I didn’t have to be reminded of it or spend much time watching it unfold, so I turned back to my news feed and began scrolling down. As I did so, I came to another friend putting up a link to his blog. This was a recent experiment for him, only a few posts old at the time, and he self-publicised it religiously every time a post went up. He has since discontinued his blogging adventures, to my disappointment, but they made fun reading whilst they lasted; short (mostly less than 300 words) and covering a wide range of random topics. He wasn’t afraid to just be himself online, and wasn’t concerned about being definitively right; if he offered an opinion, it was just something he thought, no more & no less, and there was no sense that it was ever combative. Certainly being right was never the point of any post he made; each was just something he’d encountered in the real world or online that he felt would be relatively cool and interesting to comment on. His blog’s description called his posts ‘musings’, and that was the right word for them; harmless, fun and nice. They made the internet and world in general, in some tiny little way, a nicer place to explore.

So, I read through his post. I smirked a little, smiled and closed the tab, returning once more to Facebook and the other distractions & delights the net had to offer. After about an hour or so, my thoughts once again turned to the argument, and I rashly flicked over to look at how it was progressing. It had got to over 100 comments and, as these things do, was gradually wandering off-topic to a more fundamental, but no less depressing, point of disagreement. I was once again filled with a sense that these people were wasting their lives, but this time my thoughts were both more decisive and introspective. I thought about myself; listless, counting down the last few empty days before Christmas, looking at the occasional video or blog, not doing much with myself. My schedule was relatively free, I had a lot of spare time, but I was wasting it. I thought of all the weird and wonderful thoughts that flew across my brain, all the ideas that would spring and fountain of their own accord, all of the things that I thought were interesting, amazing or just downright wonderful about our little mental, spinning ball of rock and water and its strange, pink, fleshy inhabitants that I never got to share. Worse, I never got to put them down anywhere, so over time all these thoughts would die in some forgotten corner of my brain, and the potential they had to remind me of themselves was lost. Once again, I was struck by a sense of waste, but also of resolve; I could try to remedy this situation. So, I opened up WordPress, I filled out a few boxes, and I had my own little blog. My fingers hovered over the keyboard, before falling to the keys. I began to write a little introduction to myself.

Today, the role of my little corner of the interwebs has changed somewhat. Once, I would post poetry, lists, depressed trains of thought and last year’s ’round robin letter of Planet Earth’, which I still regard as one of the best concepts I ever put onto the net (although I don’t think I’ll do one this year- not as much major stuff has hit the news). Somewhere along the line, I realised that essays were more my kind of thing, so I’ve (mainly) stuck to them since; I enjoy the occasional foray into something else, but I find that I can’t produce as much regular stuff this way as otherwise. In any case, the essays have been good for me; I can type, research and get work done so much faster now, and it has paid dividends in my work rate and analytical ability in other fields. I have also found that in my efforts to add evidence to my comments, I end up doing a surprising amount of research that turns an exercise in writing down what I know into one of increasing the kind of stuff I know, learning all sorts of new and random stuff to pack into my brain. I have also violated my own rules about giving my Views on a couple of occasions (although I would hope that I haven’t been too obnoxious about it when I have), but broadly speaking the role of my blog has stayed true to those goals stated in my very first post; to be a place free from rants, to be somewhere to have a bit of a laugh and to be somewhere to rescue unwary travellers dredging the backwaters of the internet who might like what they’ve stumbled upon. But, really, this little blog is like a diary for me; a place that I don’t publicise on my Facebook feed, that I link to only rarely, and that I keep going because I find it comforting. It’s a place where there’s nobody to judge me, a place to house my mind and extend my memory. It’s stressful organising my posting time and coming up with ideas, but whilst blogging, the rest of the world can wait for a bit. It’s a calming place, a nice place, and over the last year it has changed me.

A year is a long time.


The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be inputted into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (such as the gunnery tables that were among the US military’s first uses for computers), and logical games like chess.
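
To make that concrete, here’s a minimal sketch of the sort of problem a computer finds trivially easy: a toy gunnery table built from nothing but fixed rules and numbers. The function name and the simplified no-air-resistance physics are my own assumptions for illustration, not anything from a real ballistics program.

```python
import math

def range_table(muzzle_velocity, angles_deg, g=9.81):
    """Toy gunnery table: horizontal range (in metres) for each launch angle,
    ignoring air resistance. Everything here is pure numbers and fixed rules,
    which is exactly the territory where computers excel."""
    table = {}
    for angle in angles_deg:
        theta = math.radians(angle)
        # Standard no-drag range formula: v^2 * sin(2*theta) / g
        table[angle] = muzzle_velocity ** 2 * math.sin(2 * theta) / g
    return table

print(range_table(300.0, [15, 30, 45, 60]))
```

Every quantity involved is already information in a form the machine understands, so the whole problem lives comfortably inside the computer’s virtual world.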

However, where a computer develops problems is at the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes that world very hard for it to deal with. But this is merely a matter of grumbling over engineering specifications and meeting the programmers’ design brief- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer, however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of object. Because the pixels are in a different order, and as such the data’s different, the two pictures are completely different- the computer has no concept that the two patterns of lights are the same basic shape, just from different angles.
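
A tiny, made-up example shows the point. The two ‘images’ below are the same bright block, merely nudged one pixel sideways- yet a naive position-by-position comparison of raw pixel values says they mostly disagree.

```python
# Toy illustration: the 'same' shape shifted by one pixel looks largely
# different if you only compare raw pixel values position by position.
image_a = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
# The same bright block, nudged one column to the right.
image_b = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

total = sum(len(row) for row in image_a)
matches = sum(
    1
    for row_a, row_b in zip(image_a, image_b)
    for a, b in zip(row_a, row_b)
    if a == b
)
print(f"{matches}/{total} pixels agree")  # only 6/12, despite an identical shape
```

To us the two pictures are obviously the same thing; to the bit-by-bit view of the machine, half the data has changed.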

There are two potential solutions to this problem. Firstly, the computer can look at the monitor and store an image of it from every conceivable angle with every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would however take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming- by training the computer to spot patterns of pixels that look roughly similar (either shifted along by a few bytes, or missing a few here and there), it can be ‘trained’ to pick out basic shapes- by using an algorithm to pick out changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and separate objects themselves picked out. I am not by any stretch of the imagination an expert in this field so won’t go into details, but by this basic method a computer can step back and begin to look at the pattern of a picture as a whole.
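
The ‘pick out changes in colour’ idea can be sketched in a few lines. The function below is a deliberately crude, hypothetical edge detector (the name and threshold are my own choices), not anything resembling production image-processing code, but it captures the basic trick: an edge is simply a place where brightness jumps sharply between neighbouring pixels.

```python
def find_edges(image, threshold=50):
    """Very crude edge detector: mark a pixel as an edge wherever the
    brightness jumps sharply between horizontal neighbours."""
    edges = []
    for y, row in enumerate(image):
        for x in range(len(row) - 1):
            if abs(row[x] - row[x + 1]) > threshold:
                edges.append((x, y))
    return edges

# A bright rectangle on a dark background; its left and right borders show up.
image = [
    [10, 10, 200, 200, 10],
    [10, 10, 200, 200, 10],
    [10, 10, 200, 200, 10],
]
print(find_edges(image))
# [(1, 0), (3, 0), (1, 1), (3, 1), (1, 2), (3, 2)]
```

Real systems use far more sophisticated filters, but the principle is the same: stop treating pixels as isolated numbers and start looking for structure that survives a change of viewpoint.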

But all that information inputting, all that work… so your computer can identify just a monitor? What about the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This presented a big setback for the development of modern AI- so far we have been able to develop AI that allows a computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers faced a monumental challenge in trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it through our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which have pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into the robot’s design a capacity for learning. The latest generation of the Japanese ‘Asimo’ robot can learn what various objects presented to it are, and is then able to recognise them when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way that allows them to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently, this is just actions, but who knows what lies around the corner…
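
As a flavour of what ‘show it an object, then have it recognise that object again’ can mean in code, here is a minimal, hypothetical sketch- nothing to do with Asimo’s real internals, and it assumes objects have already been boiled down to simple feature numbers (size, roundness, colour and so on), which is itself a big assumption. The Learner class and its methods are invented purely for illustration.

```python
def distance(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class Learner:
    def __init__(self):
        self.memory = []  # (features, name) pairs the robot has been shown

    def show(self, features, name):
        """Teaching step: 'this is a cup', 'this is a ball', and so on."""
        self.memory.append((features, name))

    def recognise(self, features):
        """Recall step: name the remembered object whose features are closest."""
        if not self.memory:
            return "no idea"
        return min(self.memory, key=lambda item: distance(item[0], features))[1]

robot = Learner()
robot.show([0.9, 0.1, 0.5], "cup")
robot.show([0.2, 0.9, 0.3], "ball")
print(robot.recognise([0.85, 0.15, 0.45]))  # -> 'cup'
```

Crude as it is, the important shift is there: nothing about cups or balls was programmed in beforehand- the machine’s behaviour comes from what it has been shown.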