Getting bored with history lessons

Last post’s investigation into the post-Babbage history of computers took us up to around the end of the Second World War, before the computer age could really be said to have kicked off. However, with the coming of Alan Turing the biggest stumbling block for the intellectual development of computing as a science had been overcome, since the field now clearly understood what it was and where it was going. From then on, therefore, the history of computing is basically one long series of hardware improvements and business successes, and the only thing of real scholarly interest is Moore’s law. This law is an unofficial, yet surprisingly accurate, model of the exponential growth in the capabilities of computer hardware, stating that every 18 months computing hardware gets either twice as powerful, half the size, or half the price for otherwise identical specifications. It is based on a 1965 paper by Gordon E Moore, who noted that the number of transistors on integrated circuits had been doubling roughly every year since their invention seven years earlier (a figure he later revised to every two years). The modern-day figure of an 18-monthly doubling in performance comes from an Intel executive’s estimate, based on both the increasing number of transistors and their getting faster and more efficient… but I’m getting sidetracked. The point I meant to make was that there is no point in my continuing with a potted history of the last 70 years of computing, so in this post I want to get on with the business of exactly how, at a roughly fundamental level, computers work.
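(If you want a feel for just how quickly that kind of doubling runs away with itself, here’s a quick illustrative sketch in Python- the 1971 starting figure of roughly 2,300 transistors is the Intel 4004, used purely as a convenient reference point, and the output makes no claim to match real chips.)

```python
# Illustrative only: what an assumed 18-month doubling implies over the decades.
# The 1971 starting point (~2,300 transistors, roughly the Intel 4004) is just a
# convenient reference figure; the output is not meant to match real chips.

def projected_transistors(year, base_year=1971, base_count=2300, doubling_period=1.5):
    """Transistor count predicted by simple exponential doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```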

A modern computer is, basically, a huge bundle of switches- literally billions of the things. Normal switches are obviously not up to the job, being both too large and reliant on a mechanical rather than purely electrical input to operate, so computer designers have had to come up with electrically-activated switches instead. In Colossus’ day they used vacuum tubes, but these were large and prone to breaking, so in the late 1940s the transistor was invented. This is a marvellous semiconductor-based device, but to explain how it works I’m going to have to go on a bit of a tangent.

Semiconductors are materials that do not conduct electricity freely and every which way like a metal, but do not insulate like wood or plastic either- sometimes they conduct, sometimes they don’t. In modern computing and electronics, silicon is the substance most commonly used for this purpose. For use in a transistor, silicon (an element with four electrons in its outer atomic ‘shell’) must be ‘doped’ with other elements, meaning that atoms of those elements are mixed into the chemical, crystalline structure of the silicon. Doping with a substance such as boron, with three electrons in its outer shell, creates areas with ‘missing’ electrons, known as holes. Holes effectively carry a positive charge compared to a ‘normal’ area of silicon (since electrons are negatively charged), so this kind of doping produces what is known as p-type silicon. Similarly, doping with something like phosphorus, with five outer-shell electrons, produces an excess of negatively-charged electrons and n-type silicon. Thus electrons, and therefore electricity (made up entirely of the net movement of electrons from one area to another), find it easy to flow from n- to p-type silicon, but not the other way- the junction conducts in one direction and insulates in the other, hence ‘semiconductor’. However, it is vital to remember that p-type silicon is not an insulator and does allow the free passage of electrons, unlike pure, undoped silicon. A transistor generally consists of three layers of silicon sandwiched together, in the order NPN or PNP depending on the practicality of the situation, with each layer of the sandwich having a metal contact or ‘leg’ attached to it- the leg in the middle is called the base, and the ones at either side are called the emitter and collector.

Now, when the three layers of silicon are stuck next to one another, some of the free electrons in the n-type layer(s) jump across to fill the holes in the adjacent p-type, creating areas of neutral, or zero, charge. These are called ‘depletion zones’ and are good insulators, meaning that there is a high electrical resistance across the transistor and that a current cannot flow between the emitter and collector, despite there usually being a voltage ‘drop’ between them that is trying to get a current flowing. However, when a small voltage is applied between the base and the emitter, a current can flow between these two layers without a problem, and so it does. This pulls electrons across the border between layers and shrinks the depletion zones, lowering the electrical resistance across the transistor and allowing a much larger current to flow between the collector and emitter. In short, one current can be used to ‘turn on’ another.
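Since the whole point here is that a computer is one vast pile of switches, here’s a toy sketch (in Python, and nothing to do with real device physics) of how ‘one current turning on another’ becomes logic once a few idealised switches are wired together:

```python
# Toy model: treat a transistor as an ideal switch that passes current from
# collector to emitter only when the base is driven. Real transistors are
# analogue devices; this ignores voltages, gain and everything else.

def transistor(base: bool, collector: bool) -> bool:
    """Emitter output is live only if the collector supply is live AND the base is driven."""
    return collector and base

def and_gate(a: bool, b: bool) -> bool:
    # Two switches in series: the supply only gets through if both bases are driven.
    return transistor(b, transistor(a, True))

def or_gate(a: bool, b: bool) -> bool:
    # Two switches in parallel: either one can pass the supply through.
    return transistor(a, True) or transistor(b, True)

def not_gate(a: bool) -> bool:
    # Inverter: in a real circuit the output is pulled high unless the switch
    # shorts it to ground; here we just model the end result.
    return not transistor(a, True)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b), "NOT a:", not_gate(a))
```

Chain enough of these together and you get adders, memory cells and eventually everything else a processor does- which is really all that is meant by calling a computer a bundle of switches.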

Transistor radios use this principle to amplify the signal they receive into a loud, clear sound, and if you crack one open you should be able to see some transistors (well, if you know what you’re looking for). However, computer and manufacturing technology has become so advanced over the last 50 years that it is now possible to fit billions of these transistor switches onto a silicon chip the size of your thumbnail- and bear in mind that the entire Colossus machine, the machine that cracked the Lorenz cipher, contained only a couple of thousand or so vacuum tube switches all told. Modern technology is a wonderful thing, and the sheer achievement behind it is worth bearing in mind next time you get shocked at the price of a new computer (unless you’re buying an Apple- that’s just business elitism).

…and dammit, I’ve filled up a whole post again without getting onto what I really wanted to talk about. Ah well, there’s always next time…

(In which I promise to actually get on with talking about computers)


What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the meaning of ‘understand’ that applies in computer-based situations. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a rather solitary bunch, with relatively few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite a bit of reading into the hows and whys of its installation, and along the way I picked up quite a lot of info about how my computer works generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data processing machine employed to perform a range of jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t always. The human brain is not built for infallibility and, not infrequently, these human computers would make mistakes. Most of these undoubtedly went unnoticed, or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (ie a turn of 4 spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could handle currency denominations and distances too. It could also subtract, multiply and divide (with some difficulty), and moreover it proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to one of simply entering the wrong number.

Pascal’s machine was both expensive and complicated, meaning only around twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several others, of a range of designs, were built during the 18th century as showpieces, but in the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to fully understand, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
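The basic trick it exploits is, at least, simple enough to sketch: for any polynomial, if you keep taking differences of successive values you eventually hit a constant column, so once a few starting values have been cranked in, the whole table can be generated by repeated addition alone- exactly the sort of thing a stack of cogs can do reliably. Here’s a toy version in Python (my own illustration of the principle, not a model of Babbage’s actual mechanism):

```python
# Toy difference engine: tabulate f(x) = 2x^2 + 3x + 1 for x = 0, 1, 2, ...
# using nothing but addition, which is all a stack of cogs can really do.
# The seed values are worked out once by hand (or here, in ordinary code).

def seed_differences(values):
    """Reduce a few initial values to the leading entry of each difference column."""
    seeds, col = [], list(values)
    while col:
        seeds.append(col[0])
        col = [b - a for a, b in zip(col, col[1:])]
    return seeds                       # e.g. [f(0), first difference, second difference, ...]

def run_engine(seeds, steps):
    """Crank the engine: every new value comes from repeated addition only."""
    state = list(seeds)
    out = []
    for _ in range(steps):
        out.append(state[0])
        for i in range(len(state) - 1):
            state[i] += state[i + 1]   # add each difference column into the one above
    return out

f = lambda x: 2 * x * x + 3 * x + 1
seeds = seed_differences([f(x) for x in range(3)])   # [1, 5, 4]
print(run_engine(seeds, steps=8))                    # [1, 6, 15, 28, 45, 66, 91, 120]
```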

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned by the British government to build one for military purposes, but since Babbage was often brash, once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him, and prized academia above fiscal matters & practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine, pulled the plug, and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with two separate inputs for data and the required program, which could be a lot more complicated than just adding or subtracting, and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure that the steps of a program were carried out in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

The Churchill Problem

Everybody knows about Winston Churchill- he was about the only reason that Britain’s will to fight didn’t crumble during the Second World War, his voice and speeches are some of the most iconic of all time, and his name and mannerisms have been immortalised by a cartoon dog selling insurance. However, some of his postwar achievements are often overlooked- after the war he was voted out of the office of Prime Minister in favour of a revolutionary Labour government, but he returned to office in the ’50s with the return of the Tories. He didn’t do quite as well this time round- Churchill was a shameless warmonger who nearly annihilated his own reputation during the First World War by ordering a disastrous assault on Gallipoli in Turkey, and didn’t do much to help it by insisting that everything between the two wars was an excuse for another one- but it was during this time that he made one of his least-known but most interesting speeches. In it he envisaged a world in which the rapidly accelerating technological advancement of his age would cause most of the meaningful work to be done by machines and change our concept of the working week. He suggested that we would one day be able to “give the working man what he’s never had – four days’ work and then three days’ fun”- basically, Winston Churchill was the first man to suggest the concept of a three-day weekend.

This was at a time when the very concept of the weekend itself was a very new one- the original idea of one part of the week being dedicated to not working comes, of course, from the Sabbath days adopted by most religions. The idea of no work being done on a Sunday is, in the Western and therefore historically Christian world, an old one, but the idea of extending it to Saturday as well is far newer. This was partly motivated by the increased proportion and acceptance of Jewish workers, whose day of rest fell on Saturday, and was also part of a general trend of decreasing work hours during the early 1900s. It wasn’t until 1938 that the five-day working week was enshrined in US law, and it appeared to be the start of a downward trend in working hours as trade unions gained power, workers got more free time, and machines did all the important stuff. All of this appeared to lead towards Churchill’s promised world- a world of the four-day working week and perhaps, one day, a total lap of luxury whilst we let computers and androids do everything.

However, recently things have started to change. The trend of shortening working hours and an increasingly stress-free existence has been reversed, with the average working week getting dramatically longer- since 1970, the number of hours worked per capita has risen by 20%. A survey done a couple of winters ago found that we only spend an average of 15 hours and 17 minutes of our weekend out of the work mindset (between 12:38am and 3:55pm on Sunday, when we start worrying about Monday again), and that over half of us are too tired to enjoy our weekends properly. Given that this was a survey conducted by a hotel chain it may not be an entirely representative sample, but you get the idea. The weekend itself is in some ways under threat, and Churchill’s vision is disappearing fast.

So what’s changed since the ’50s (other than transport, communications, language, technology, religion, science, politics, the world, warfare, international relations, and just about everything else)? Why have we suddenly ceased to favour rest over work? What the hell is wrong with us?

To an extent, some of the figures are anomalous- employment of women has increased drastically in the last 50 years, and as such so has the percentage of the population who are in work. But this is not enough to explain away all of the stats relating to ‘the death of the weekend’.

Part of the issue is judgemental. Office environments can be competitive places, and can quickly develop into mindsets where our emotional investment is in the compiling of our accounts document or whatever. In such an environment, people’s priorities become more focused on work, and somebody taking an extra day off at the weekend would just seem like laziness- especially to the boss, who has deadlines to meet, really doesn’t appreciate slackers, and also controls your salary. We also, of course, judge ourselves, unwilling to feel as if we are letting the team down and causing other people inconvenience. There’s also the problem of boredom- as any schoolchild will tell you, the first few days of holiday after a long term are blissful relaxation, but it’s only a matter of time before a parent hears that dreaded phrase: “I’m booooooored”. The same thing can be said to apply to having nearly half your time off every single week. But these are features of human nature, which certainly hasn’t changed in the past 50 years, so what could be the root of the change in trends?

The obvious place to start when considering this is with the changes in work over this time. The last half-century has seen Britain’s manufacturing economy spiral downwards, as more and more of us lay down tools and pick up keyboards- the current ‘average job’ for a Briton involves working in an office somewhere. Probably in Sales, or Marketing. This kind of job chiefly involves working our minds, crunching numbers and thinking through figures, and it makes it far harder for us to ‘switch off’ from our work mentality than if it were centred on how much our muscles hurt. It also makes it far easier to justify staying for overtime and to ‘just finish that last bit’, partly because not being physically tired makes it easier and partly because the kind of work given to an office worker is more likely to be centred around individual mini-projects than simply punching rivets or controlling a machine for hours on end. And of course, as some of us start to stay for longer, our competitive instinct causes the rest of us to do so as well.

In the modern age, switching off from the work mindset has been made even harder by the invention of the laptop and, especially, the smartphone. The laptop allowed us to check our emails or work on a project at home, on a train, or wherever we happened to be- the smartphone has allowed us to keep in touch with work at every single waking moment of the day, making it very difficult for us to ‘switch work off’. It has also made it far easier to work at home, which for the committed worker can make it even harder to formally end the day when there are no colleagues or bosses telling you it’s time to go home. This spread of technology into our lives is thought to lead to an increase in levels of dopamine, a sort of pick-me-up chemical the body releases after exposure to adrenaline, which can frazzle our pre-frontal cortex and leave someone feeling drained and unfocused- obvious signs of being overworked.

Then there is the issue of competition. In the past, competition in industry would usually have been limited to a few other firms in the local area- in the grand scheme of things, this could perhaps be scaled up to cover an entire country. The existence of trade unions helped prevent this competition from causing problems- if everyone is desperate for work, as occurred with depressing regularity during the Great Depression in the USA, they keep trying to offer their services as cheaply as possible to try and bag the job, but if a trade union can be used to settle and standardise prices then this effect is halted. However, in the current age of everywhere being interconnected, competition in big business can come from all over the world. To guarantee that they keep their jobs, people have to try to work as hard as they can for as long as they can, lengthening the working week still further. Since trade unions are generally limited to a single country, their powers in this situation are rather limited.

So, that’s the trend as it is- but is it feasible that we will ever live the life of luxury, with robots doing all our work, that seemed the summit of Churchill’s thinking? In short: no. Whilst a three-day weekend is perhaps not too unfeasible, I just don’t think human nature would allow us to laze about all day, every day for the whole of our lives and do absolutely nothing with it, if only for the reasons explained above. Plus, constant rest would simply desensitise us to the concept, it becoming so normal that we could no longer envisage the concept of work at all. Thus, all the stresses that were once taken up with work worries would simply be transferred to ‘rest worries’, leaving us no happier after all and defeating the purpose of having all the rest in the first place. In short, we need work to enjoy play.

Plus, if robots ran everything and nobody worked them, it’d only be a matter of time before they either all broke down or took over.

Artificial… what, exactly?

OK, time for part 3 of what I’m pretty sure will finish off as 4 posts on the subject of artificial intelligence. This time, I’m going to branch off-topic very slightly- rather than just focusing on AI itself, I am going to look at a fundamental question that the hunt for it raises: the nature of intelligence itself.

We all know that we are intelligent beings, and thus the search for AI has always been focused on attempting to emulate (or possibly better) the human mind and our human understanding of intelligence. Indeed, when Alan Turing first proposed the Turing test (see Monday’s post for what this entails), he was specifically trying to emulate human conversational and interaction skills. However, as mentioned in my last post, the modern-day approach to creating intelligence is to try and let robots learn for themselves, in order to minimise the amount of programming we have to give them ourselves and thus to come close to artificial, rather than programmed, intelligence. However, this learning process has raised an intriguing question- if we let robots learn for themselves entirely from base principles, could they begin to create entirely new forms of intelligence?

It’s an interesting idea, and one that leads us to question what, on a base level, intelligence is. When one thinks about it, we begin to realise the vast scope of ideas that ‘intelligence’ covers, and this is speaking merely from the human perspective. From emotional intelligence to sporting intelligence, from creative genius to pure mathematical ability (where computers themselves excel far beyond the scope of any human), intelligence is an almost pointlessly broad term.

And then, of course, we can question exactly what we mean by a form of intelligence. Take bees, for example- on its own, a bee is a fairly useless creature that is most likely to just buzz around a little. Not only is it useless, but it is also very, very dumb. However, a hive, where bees are not individuals but a collective, is a very different matter- the coordinated movements of hundreds and thousands of bees can not only build huge nests and turn nectar into the liquid deliciousness that is honey, but can also defend the nest from attack, ensure the survival of the queen at all costs, and ensure that there is always someone to deal with the newborns despite the constant activity of the environment surrounding them. Many corporate or otherwise collective structures can claim to work similarly, but few are as efficient or versatile as a beehive- and, more astonishingly, bees can exhibit an extraordinary range of intelligent behaviour as a collective beyond what an individual could even comprehend. Bees are the archetype of a collective, rather than individual, mind, and nobody is entirely sure how such a structure is able to function as it does.

Clearly, then, we cannot hope to pigeonhole or quantify intelligence as a single measurement- people may boast of their IQ scores, but such a number cannot hope to represent their intelligence across the full spectrum. Now, consider all these different aspects of intelligence, all the myriad ways that we can be intelligent (or not). And ask yourself- have we really covered all of them?

It’s another compelling idea- that there are some forms of intelligence out there that our human forms and brains simply can’t envisage, let alone experience. What these may be like… well, how the hell should I know, I just said we can’t envisage them. This idea that we simply won’t be able to understand what they could be like, even if we ever encounter them, can be a tricky one to get past (a similar problem is found in quantum physics, whose violation of common logic takes some getting used to), and it is a real issue that if we do ever encounter these ‘alien’ forms of intelligence, we won’t be able to recognise them for this very reason. However, if we are able to do so, it could fundamentally change our understanding of the world around us.

And, to drag this post kicking and screaming back on topic, our current development of AI could be a mine of potential for doing this (albeit a mine in which we don’t know what we’re going to find, or if there is anything to find at all). We all know that computers are fundamentally different from us in a lot of ways, and in fact it is very easy to argue that trying to force a computer to be intelligent beyond its typical, logical parameters is rather a stupid task, akin to trying to use a hatchback to tow a lorry. In fact, quite a good way to think of computers or robots is as animals, only adapted to a different environment from ours- one in which their food comes via a plug and information comes to them as raw data and numbers… but I am wandering off-topic once again. The point is that computers have, for as long as the hunt for AI has gone on, been our vehicle for attempting to reach it- and only now are we beginning to fully understand that they have the potential to do so much more than just copy our minds. By pushing them onward and onward to the point they have currently reached, we are starting to turn them not into an artificial version of ourselves, but into an entirely new concept, an entirely new, man-made being.

To me, this is an example of true ingenuity and skill on the part of the human race. Copying ourselves is no more inventive, on a base level, than making iPod clones or the like. Inventing a new, artificial species… like it or loathe it, that’s amazing.

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take on a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be inputted into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into a computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (computing gunnery tables was among the first jobs the US gave to its early computers), and logical games like chess.
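To give a crude taste of the sort of thing being talked about, here’s a toy Python sketch of the gunnery-table idea- it ignores air resistance and everything else a real ballistics model would include (and the muzzle velocity is a made-up figure), but it shows how neatly such a problem reduces to rules and numbers:

```python
# A crude taste of the "rules and numbers" work early computers were built for:
# tabulating projectile range for a spread of launch angles. Real gunnery tables
# model air resistance, wind, shell type and more; this toy version ignores all of it.
import math

G = 9.81           # gravitational acceleration, m/s^2
MUZZLE_V = 450.0   # assumed muzzle velocity in m/s (a made-up figure)

def vacuum_range(angle_deg, v=MUZZLE_V):
    """Range on flat ground with no air resistance: v^2 * sin(2*angle) / g."""
    return v * v * math.sin(math.radians(2 * angle_deg)) / G

for angle in range(5, 50, 5):
    print(f"{angle:2d} deg   {vacuum_range(angle) / 1000:6.2f} km")
```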

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer, however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of object. Because the pixels are in a different order, and as such the data is different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just seen from different angles.

There are two potential solutions to this problem. Firstly, the computer could look at the monitor and store an image of it from every conceivable angle against every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- but this would take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming: by teaching the computer to spot patterns of pixels that look roughly similar (either shifted along by a few bytes, or missing a few here and there), it can be ‘trained’ to pick out basic shapes, and by using an algorithm to pick out changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and the separate objects themselves picked out. I am not by any stretch of the imagination an expert in this field so won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
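To make that slightly less hand-wavy, here’s a minimal sketch of the ‘look for changes in colour’ trick- pure Python on a made-up grey-scale grid, nowhere near a real vision system, but it shows how sharp jumps in brightness between neighbouring pixels can be turned into a rough outline of an object:

```python
# Minimal edge-finding sketch: mark pixels where brightness changes sharply
# compared with the pixel to the left or above. Real computer vision uses far
# more sophisticated filters; this only illustrates the basic idea.

THRESHOLD = 50   # how big a brightness jump counts as an "edge" (arbitrary choice)

def find_edges(image):
    """Return a same-sized grid with 1 where an edge is detected, else 0."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            left = abs(image[y][x] - image[y][x - 1]) if x > 0 else 0
            up = abs(image[y][x] - image[y - 1][x]) if y > 0 else 0
            if max(left, up) > THRESHOLD:
                edges[y][x] = 1
    return edges

# A made-up 6x6 grey-scale "photo": a bright square on a dark background.
image = [
    [10, 10, 10, 10, 10, 10],
    [10, 10, 200, 200, 10, 10],
    [10, 10, 200, 200, 10, 10],
    [10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10],
]

for row in find_edges(image):
    print("".join("#" if v else "." for v in row))
```

Printed out, the result is a crude outline of the bright square- which is about as far as this trick alone will get you, but it is the seed of the idea.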

But all that information inputting, all that work… just so your computer can identify a monitor? What about all the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This issue represented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers faced a monumental challenge when presented with the prospect of trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into the robot’s design a capacity for learning. The latest generation of the Japanese ‘Asimo’ robots can learn what various objects presented to them are, and are then able to recognise them when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way that allows them to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently, this is just actions, but who knows what lies around the corner…

The Chinese Room

Today marks the start of another attempt at a multi-part set of posts- the last lot were about economics (a subject I know nothing about), and this one will be about computers (a subject I know none of the details about). Specifically, over the next… however long it takes, I will be taking a look at the subject of artificial intelligence- AI.

There has been a long series of documentaries on the subject of robots, supercomputers and artificial intelligence in recent years, because it is a subject which seems to be in the paradoxical state of continually advancing at a frenetic rate whilst simultaneously finding itself getting further and further away from the dream of ‘true’ artificial intelligence, which, as we begin to understand more and more about psychology, neuroscience and robotics, becomes steadily more complicated and difficult to attain. I could spend a thousand posts on the subject of all the details if I so wished, because it is also one of the fastest-developing fields of engineering on the planet, but that would just bore me and be increasingly repetitive for anyone who ends up reading this blog.

I want to begin, therefore, by asking a few questions about the very nature of artificial intelligence, and indeed the subject of intelligence itself, beginning with a philosophical problem that I found very intriguing when I heard about it on TV a few nights ago- the Chinese Room.

Imagine a room containing only a table, a chair, a pen, a heap of paper slips, and a large book. The door to the room has a small opening in it, rather like a letterbox, allowing messages to be passed in or out. The book contains a long list of phrases written in Chinese and, below each, the appropriate response (also in Chinese characters). Imagine we take a non-Chinese speaker and place him inside the room, and then take a fluent Chinese speaker and put them outside. The Chinese speaker writes a phrase or question (in Chinese) on a slip of paper and passes it through the letterbox to the person inside the room. The person inside has no idea what this message means, but by using the book they can identify the phrase, write out the appropriate response to it, and pass it back through the letterbox. This process can be repeated multiple times, until a conversation begins to flow- the difference being that only one of the participants in the conversation actually knows what it’s about.
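In computing terms, the person in the room is doing nothing more sophisticated than a lookup table- something like the following sketch, where the phrase/response pairs are invented placeholders (Searle used Chinese precisely because the person inside can’t read it):

```python
# The Chinese Room reduced to code: a pure lookup table. The "room" gives back
# sensible-looking replies without anything inside it understanding a word.
# The phrase/response pairs are invented placeholders, not Searle's examples.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's lovely."
    "你叫什么名字？": "我没有名字。",      # "What's your name?" -> "I have no name."
}

def room(message):
    """Look up the scripted response; no understanding is involved at any point."""
    return RULE_BOOK.get(message, "请再说一遍。")   # fallback: "please say that again"

print(room("你好吗？"))   # prints a fluent reply the person inside cannot read
```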

This experiment is a direct challenge to the somewhat crude test proposed by mathematical genius and codebreaker Alan Turing in 1950 to determine whether a computer could be considered a truly intelligent being. The Turing test postulates that if a computer were ever able to conduct a conversation with a human so well that the human in question would have no idea that they were not talking to another human, but rather to a machine, then it could be considered to be intelligent.

The Chinese Room problem questions this idea and, as it does so, raises a fundamental question about whether a machine such as a computer can ever truly be called intelligent, or be said to possess intelligence. The point of the idea is to demonstrate that it is perfectly possible to appear to be intelligent, by conducting a normal conversation with someone, whilst simultaneously having no understanding whatsoever of the situation at hand. Thus, while a machine programmed with the correct response to any eventuality could converse completely naturally and appear perfectly human, it would have no real consciousness. It would not be truly intelligent; it would merely be running an algorithm, obeying the instructions in its electronic brain, working simply off the intelligence of the person who programmed in its orders. So, does this constitute intelligence, or is a consciousness necessary for something to be deemed intelligent?

This really boils down to a question of opinion- if something acts like it’s intelligent and is intelligent for all functional purposes, does that make it intelligent? Does it matter that it can’t really comprehend its own intelligence? John Searle, who first thought of the Chinese Room in 1980, called the philosophical positions on this ‘strong AI’ and ‘weak AI’. Strong AI basically suggests that functional intelligence is intelligence to all intents and purposes- weak AI argues that the lack of true intelligence renders even the most advanced and realistic computer nothing more than a dumb machine.

However, Searle also proposes a very interesting idea that is prone to yet more philosophical debate- that our brains are mere machines in exactly the same way as computers are- that the mechanics of the brain, deep in the unexplored depths of the fundamentals of neuroscience, just tick over and perform tasks in the same way as AI does- and that there is some completely different and non-computational mechanism that gives rise to our mind and consciousness.

But what if there is no such mechanism? What if the rise of a consciousness is merely the result of all the computational processes going on in our brain- what if consciousness is nothing more than a computational process itself, designed to give our brains a way of joining the dots and processing more efficiently? This is a quite frightening thought- that, in theory, the only thing stopping us from giving a computer a consciousness is that we haven’t written the proper code yet. This is one of the biggest unanswered questions of modern science- what exactly is our mind, and what causes it?

To fully expand upon this particular argument would take time and knowledge that I don’t have in equal measure, so instead I will just leave that last question for you to ponder- what is the difference between the box displaying these words for you right now, and the fleshy lump that’s telling you what they mean?

Since when did the internet become alive?

Looking back over my previous posts (speaking of which, by the way: WOO, DOUBLE FIGURES), I realised just how odd my way of referring to the internet is. The internet, in the traditional sense of the word, doesn’t really even exist- there is nothing physical to show its presence. One can argue about the billions of computers and servers which connect to and contribute to it, but that’s a bit like saying that the story of a novel exists by virtue of the book having pages- the story itself is something… more than that. The same is true for the internet which is, when boiled down, just one huge mass of information- nothing more, nothing less. And yet, from my first posts, in which I introduced myself to the web, I have been addressing the internet itself.

When you think about it, the level to which the internet community has made the internet itself seem human goes far beyond normal personification- the internet does not just represent a figure, it has, over the years of its existence, managed to give itself a personality. It has clearly defined ‘likes’ and ‘dislikes’, far beyond a simple average view of the human population. In my home country of Great Britain, for example, a large proportion of voters at each election vote Conservative, and such views are held by many people across the world, especially in America- the source of the main bulk of internet traffic. And yet, the internet’s political stance appears very liberal- it dislikes racism, is heavily supportive of freedom of speech and information, and dislikes privacy controls and regulations on itself. The internet also appears to like computer games and science, especially computing (and to be of above-average intelligence in these matters too), and to hate the likes of Stephenie Meyer, Justin Bieber and Rebecca Black. But one trait is predominant, and has almost become the defining feature of the modern internet- it likes to have a laugh.

A large proportion of my Facebook traffic, for instance, is people sending me links to funny stuff from everyday life that other people have posted, and there is a recurrent joke that the internet could basically be split into two parts- porn, and pictures of cats looking simultaneously cute and hilarious. This set of priorities is very prevalent when studying the aims of internet groups such as Anonymous- quite a good description of them (and incidentally a link to a quality series of videos) can be found here: http://penny-arcade.com/patv/episode/anonymous, and I recommend you watch it. Their aims appear based around a similar set of liberal and ‘for teh lulz’ priorities.
Now, just sit back for a second and absorb this simple fact- the internet, essentially a large collection of information contributed to in some way by the vast majority of the human race, has managed to develop its own personality and opinions. Furthermore, these opinions are held, as a rule, by the vast majority of the internet community (excluding the people, if they can be called such, who comment below YouTube videos), even though they represent the views of a non-majority group in the real world (although feel free to debate the extent of that non-majority). Now, ask yourself this- HOW IN THE NAME OF ALL THAT’S HOLY DID THAT HAPPEN!?!?!?! The very concept of creating such a personality could never have occurred to the web pioneers, the likes of Tim Berners-Lee and the CERN team who aided the process, and yet it has happened. Swathes of the internet may be devoid of such views, and there are a series of internet counter-cultures (the conspiracy theorists, for example, or the ‘vast uninformed panics’ that erupt whenever there is a major health scare), but the internet as a rule appears to have predominant characteristics THAT ARE INCONSISTENT WITH THOSE OF THE VAST POPULATION OF PEOPLE WHO CONTRIBUTE TO IT.
Normally I like my posts to have a conclusion behind them, and several of my instincts are fighting to explain about the kind of bored teenagers who populate the web for much of the time, etc. etc., but right now I really don’t want to. I honestly think that the way this has happened is truly amazing, and from a psychological/behavioural/sociological perspective it is certainly incredibly interesting- I could fill a paper describing it. But, for now, I’m just going to sit back and revel in what humanity has done with its greatest invention. And try and think of a suitable way to conclude this post…