Leave Reality at Home

One of the most contentious issues surrounding criticism of many forms of media, particularly films and videogames, is the issue of realism. How realistic a videogame is, how accurately it replicates the world around us both visually and thematically, is the most frequently cited factor in determining how immersive a game is, how much you ‘get into it’, and films that keep their feet firmly in the real world delight nerds and film critics alike by standing up to their nit-picking. But the place of realism in these media is not a simple question of ‘as much realism as possible is better’; finding the ideally realistic situation (which is a phrase I totally didn’t just make up) is a delicate balance that can vary enormously from one product to another, and getting that balance right is frequently the key to success.

That too much realism can be a bad thing can be demonstrated quite easily on both a thematic and visual front. To deal with the visual sphere of things first, I believe I have talked before about ‘the uncanny valley’, which originated as a robotics term first hypothesised by the Japanese roboticist Masahiro Mori. The theory, now supported by research from the likes of Hiroshi Ishiguro (who specialises in making hyper-realistic robots), states that as a robot gets steadily more and more human in appearance, humans tend to react more favourably to it, until we reach a high point of a stylised, human-like appearance that is nonetheless clearly non-human. Beyond this point, however, our reactions to such a robot get dramatically worse, as the design starts to look less like a human-like robot and more like a very weird-looking human, until we get to the point at which the two are indistinguishable from one another and we reach another peak. This dip in positive reaction, the point where faces start to look ‘creepy’, is known as the uncanny valley, and the principle can be applied just as easily to computer graphics as it can to robots. The main way of overcoming the issue involves a careful design process intended to stylise certain features; in other words, the only way to make something quite realistic not look creepy is to make it selectively less realistic. Thus, hyper-realism is not always the way forward in drawn/animated forms of media, and won’t be until the magical end-goal of photorealistic graphics is achieved. If that ever happens.

However, the uncanny valley is far less interesting than the questions that arise when considering the idea of thematic realism (which I again totally didn’t just make up): the extent to which stories, aspects of a story, or events in a film and suchlike are realistic. Here we arrive at an apparent double standard, and our evidence comes from nerds; as we all know, film nerds (and I suspect everyone else, if they can find them) delight in pointing out continuity errors in everything they watch (a personal favourite is the ‘Hollywood’ sign in the remake of The Italian Job that quite clearly reads OHLLYWOOD from one camera angle), and are prepared to go into a veritable tizz of enjoyment when something apparently implausible is somehow able to adhere fastidiously to the laws of physics. Being realistic is clearly something that can add a great deal to a film, indicating that the director has really thought about it; not only is this frequently an indicator of a properly good film, but it also helps satisfy a nerd’s natural desire to know all the details and background (which is the reason, by the way, that comic books spend so much of their time referencing overcomplicated bits of canon).

However, evidence that reality is not at the core of our enjoyment when it comes to film and gaming can be quite easily revealed by considering the enormous popularity of the sci-fi and fantasy genres. We all of course know that these worlds are not real and, despite a lot of the jargon spouted in sci-fi to satisfy the already-mentioned nerd curiosity, we also know that they fundamentally cannot be real. There is no such thing as magic, no dilithium crystals, no hyperspace and no elves, but that doesn’t prevent the idea of them from enjoying massive popularity from all sides. I mean, just about the biggest film of last summer was The Avengers, in which a group of superheroes fight a horde of giant monsters sent through a magical portal by an ancient Norse god; about as realistic as a tap-dancing elephant, and yet most agreed as to the general awesomeness of that film. These fantastical, otherworldly and/or downright ridiculous worlds and stories have barely any bearing on the real world, and yet somehow this makes them better.

The key detail here is, I think, the concept of escapism. Possibly the single biggest reason why we watch films, spend hours in front of Netflix, and dedicate days of our lives to videogames is the pursuit of escapism; to get away from the mundaneness of our world and escape into our own little fantasy. We can follow a super-soldier blasting through waves of bad guys as we all dream of being able to do, we can play as a hero with otherworldly magic at our fingertips, we can lead our sports teams to glory like we could never do in real life. Some of these stories take place in a realistic setting, others in a world of fantasy, yet in all of them the real pull factor is the same; we are getting to play in, or watch, a world that we fantasise about being able to live ourselves, and yet cannot.

The trick of successfully incorporating reality into these worlds is, therefore, one of supporting our escapism. In certain situations, such as an ultra-realistic modern military shooter, an increasingly realistic setting makes the situation more like our fantasy, and as such adds to the immersion and the joy of the escapism; when we are facing challenges similar to those experienced by real soldiers (or at least the over-romanticised view of soldiering that we in fact fantasise about, rather than the day-to-day drudgery that is so often ignored), it makes our fantasy seem more tangible, fuelling the idea that we are, in fact, living the dream. On the other hand, applying the wrong sort of realism to a situation (like, say, not being able to make the impossible jumps, or failing to have perfect balance) can kill the fantasy, reminding us just as surely as the unreality of a continuity error that the fantasy we are entertaining cannot actually happen, dragging us back to the real world and ruining all the fun. There is, therefore, a kind of thematic uncanny valley as well; a state at which the reality of a film or videogame is just… wrong, and is thus able to take us out of the act of escapism. The location of this valley, however, is a lot harder to plot on a graph.

The Chinese Room

Today marks the start of another attempt at a multi-part set of posts- the last lot were about economics (a subject I know nothing about), and this one will be about computers (a subject I know none of the details about). Specifically, over the next… however long it takes, I will be taking a look at the subject of artificial intelligence- AI.

There has been a long series of documentaries on the subject of robots, supercomputers and artificial intelligence in recent years, because it is a subject which seems to be in the paradoxical state of continually advancing at a frenetic rate while simultaneously getting further and further away from the dream of ‘true’ artificial intelligence, which, as we begin to understand more and more about psychology, neuroscience and robotics, becomes steadily more complicated and difficult to attain. I could spend a thousand posts on the details if I so wished, because it is also one of the fastest-developing fields of engineering on the planet, but that would just bore me and be increasingly repetitive for anyone who ends up reading this blog.

I want to begin, therefore, by asking a few questions about the very nature of artificial intelligence, and indeed the subject of intelligence itself, beginning with a philosophical problem that, when I heard about it on TV a few nights ago, struck me as very intriguing- the Chinese Room.

Imagine a room containing only a table, a chair, a pen, a heap of paper slips, and a large book. The door to the room has a small opening in it, rather like a letterbox, allowing messages to be passed in or out. The book contains a long list of phrases written in Chinese and, below each one, the appropriate response (also in Chinese characters). Imagine we take a non-Chinese speaker and place him inside the room, and then take a fluent Chinese speaker and put them outside. The Chinese speaker writes a phrase or question (in Chinese) on a slip of paper and passes it through the letterbox to the person inside the room. That person has no idea what the message means, but by using the book they can identify the phrase, write out the appropriate response, and pass it back through the letterbox. This process can be repeated multiple times, until a conversation begins to flow- the difference being that only one of the participants in the conversation actually knows what it’s about.
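To make the mechanics of the room concrete, here is a minimal sketch in Python of the book-and-letterbox procedure. The phrases are placeholder romanised strings rather than real Chinese, and the rule book is absurdly small; the point is only that the code, like the person in the room, shuffles symbols it does not understand.

```python
# A minimal sketch of the Chinese Room as a lookup table. The 'book' maps
# each incoming phrase to a canned reply; the 'person in the room' simply
# copies out whatever the book lists. Nothing here knows what any of the
# phrases mean - which is precisely the point of the thought experiment.
RULE_BOOK = {
    "ni hao": "ni hao! ni hao ma?",
    "ni hao ma?": "wo hen hao, xie xie.",
    "zai jian": "zai jian!",
}

def person_in_room(slip: str) -> str:
    """Find the incoming phrase in the book and return the listed response."""
    # A fallback reply ('please say that again') covers phrases not in the book.
    return RULE_BOOK.get(slip, "qing zai shuo yi bian.")

if __name__ == "__main__":
    # The 'conversation' passed back and forth through the letterbox:
    for message in ["ni hao", "ni hao ma?", "zai jian"]:
        print(f"outside: {message}")
        print(f"inside:  {person_in_room(message)}")
```

However large you imagine the book to be, the procedure stays the same: match, copy, pass back.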

This experiment is a direct challenge to the somewhat crude test proposed by the mathematical genius and codebreaker Alan Turing in 1950 to determine whether a computer could be considered a truly intelligent being. The Turing test postulates that if a computer were ever able to conduct a conversation with a human so well that the human in question could not tell whether they were talking to another human or to a machine, then it could be considered intelligent.

The Chinese Room problem questions this idea, and as it does so, raises a fundamental question about whether a machine such as a computer can ever truly be called intelligent, or be said to possess intelligence. The point of the idea is to demonstrate that it is perfectly possible to appear intelligent, by conducting a normal conversation with someone, whilst simultaneously having no understanding whatsoever of the situation at hand. Thus, while a machine programmed with the correct response to any eventuality could converse completely naturally, and appear perfectly human, it would have no real consciousness. It would not be truly intelligent; it would merely be running an algorithm, obeying the instructions in its electronic brain, working simply from the intelligence of the person who programmed in its orders. So, does this constitute intelligence, or is a consciousness necessary for something to be deemed intelligent?

This really boils down to a question of opinion- if something acts like it’s intelligent and is intelligent for all functional purposes, does that make it intelligent? Does it matter that it can’t really comprehend its own intelligence? John Searle, who first thought of the Chinese Room in 1980, called the philosophical positions on this ‘strong AI’ and ‘weak AI’. Strong AI basically suggests that functional intelligence is intelligence to all intents and purposes; weak AI argues that the lack of true understanding renders even the most advanced and realistic computer nothing more than a dumb machine.

However, Searle also proposes a very interesting idea that is prone to yet more philosophical debate: that our brains are machines in much the same way as computers are- that the mechanics of the brain, deep in the unexplored depths of the fundamentals of neuroscience, simply tick over and perform tasks in the same way as an AI does- and that there is some completely different, non-computational mechanism that gives rise to our mind and consciousness.

But what if there is no such mechanism? What if the rise of a consciousness is merely the result of all the computational processes going on in our brain- what if consciousness is nothing more than a computational process itself, designed to give our brains a way of joining the dots and processing more efficiently? This is a quite frightening thought: that we could, in theory, be prevented from giving a computer a consciousness only because we haven’t written the proper code yet. This is one of the biggest unanswered questions of modern science- what exactly is our mind, and what causes it?

To fully expand upon this particular argument would take time and knowledge that I don’t have in equal measure, so instead I will just leave that last question for you to ponder over- what is the difference between the box displaying these words for you right now, and the fleshy lump that’s telling you what they mean?