Today marks the start of another attempt at a multi-part set of posts- the last lot were about economics (a subject I know nothing about), and this one will be about computers (a subject I know none of the details about). Specifically, over the next… however long it takes, I will be taking a look at the subject of artificial intelligence- AI.
There has been a long series of documentaries on the subject of robots, supercomputers and artificial intelligence in recent years, because it is a subject which seems to be in a paradoxical state: continually advancing at a frenetic rate, yet simultaneously finding itself further and further away from the dream of ‘true’ artificial intelligence, which, as we begin to understand more and more about psychology, neuroscience and robotics, becomes steadily more complicated and difficult to attain. I could spend a thousand posts on the details if I so wished, because it is also one of the fastest-developing fields of engineering on the planet, but that would just bore me and become increasingly repetitive for anyone who ends up reading this blog.
I want to begin, therefore, by asking a few questions about the very nature of artificial intelligence, and indeed about intelligence itself, starting with a philosophical problem that greatly intrigued me when I heard about it on TV a few nights ago: the Chinese Room.
Imagine a room containing only a table, a chair, a pen, a heap of paper slips, and a large book. The door to the room has a small opening in it, rather like a letterbox, allowing messages to be passed in or out. The book contains a long list of phrases written in Chinese, and (below each one) the appropriate response, also in Chinese characters. Now take someone who speaks no Chinese and place them inside the room, and take a fluent Chinese speaker and put them outside. The Chinese speaker writes a phrase or question (in Chinese) on a slip of paper and passes it through the letterbox to the person inside. That person has no idea what the message means, but by using the book they can find the phrase, copy out the appropriate response, and pass it back through the letterbox. This process can be repeated until a conversation begins to flow- the difference being that only one of the participants in the conversation actually knows what it's about.
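To make the mechanical nature of this concrete, here is a minimal sketch of what the person in the room is effectively doing- a pure lookup, with no understanding involved. The phrasebook entries below are invented placeholders of my own, not anything from Searle's original argument, but they show the idea: input goes in, a scripted output comes back.

```python
# A minimal sketch of the Chinese Room as a lookup table.
# The phrasebook entries are invented placeholders; the point is that
# the "room" maps input to output without understanding either.

phrasebook = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def chinese_room(message: str) -> str:
    """Return the scripted response for a message, knowing nothing of its meaning."""
    return phrasebook.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# The 'conversation' flows, but the room has no idea what was said.
print(chinese_room("你好吗？"))
```

A real system would need a vastly bigger book, of course, but nothing about making the book bigger adds any understanding to the process.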
This thought experiment is a direct challenge to the somewhat crude test proposed by mathematical genius and codebreaker Alan Turing in 1950 to judge whether a computer could be considered a truly intelligent being. The Turing test holds that if a computer were ever able to conduct a conversation with a human so convincingly that the human had no idea they were talking to a machine rather than to another person, then that computer could be considered intelligent.
The Chinese Room problem questions this idea, and in doing so raises a fundamental question about whether a machine such as a computer can ever truly be called intelligent, or be said to possess intelligence. The point of the thought experiment is to demonstrate that it is perfectly possible to appear intelligent, by conducting a normal conversation with someone, whilst simultaneously having no understanding whatsoever of what is being said. Thus, while a machine programmed with the correct response to every eventuality could converse completely naturally and appear perfectly human, it would have no real consciousness. It would not be truly intelligent; it would merely be running an algorithm, obeying the instructions in its electronic brain and borrowing the intelligence of the person who programmed those instructions in. So, does this constitute intelligence, or is consciousness necessary for something to be deemed intelligent?
This really boils down to a question of opinion: if something acts intelligent, and is intelligent for all functional purposes, does that make it intelligent? Does it matter that it can't really comprehend its own intelligence? John Searle, who first proposed the Chinese Room in 1980, called the two philosophical positions on this 'strong AI' and 'weak AI'. Strong AI holds that a suitably programmed machine genuinely understands, and that functional intelligence is intelligence to all intents and purposes; weak AI holds that such a machine only ever simulates intelligence, and that the lack of true understanding renders even the most advanced and realistic computer nothing more than a dumb machine.
However, Searle also proposes a very interesting idea that invites yet more philosophical debate: that our brains are machines in much the same way as computers are- that the mechanics of the brain, deep in the still largely unexplored fundamentals of neuroscience, tick over and perform tasks much as an AI does- but that there is some completely different, non-computational mechanism that gives rise to our mind and consciousness.
But what if there is no such mechanism? What if the rise of consciousness is merely the result of all the computational processes going on in our brain- what if consciousness is nothing more than a computational process itself, one that gives our brains a way of joining the dots and processing more efficiently? This is a quite frightening thought: that, in theory, the only thing stopping us from giving a computer a consciousness is that we haven't written the proper code yet. This is one of the biggest unanswered questions of modern science- what exactly is our mind, and what causes it?
To fully expand upon this particular argument would take time and knowledge that I don't have in equal measure, so instead I will just leave that last question for you to ponder: what is the difference between the box displaying these words for you right now, and the fleshy lump that's telling you what they mean?