The Prisoner’s Dilemma

It's a classic thought experiment, a mathematical problem and a cause of much philosophical debate. Over the years it has found its way into every sphere of existence, from serious lecturing to game shows to, on numerous occasions, real life. It has even been argued to be the basis of all religion, and of religion's place in our society. And to think that, in its purest form, it is nothing more than a story about two men in a jail- the prisoner's dilemma.

The classic example of the dilemma goes roughly as follows: two convicts suspected of a crime are kept in solitary custody, separated from one another and unable to converse. Both are in fact guilty of the crime, but the police only have evidence to convict them of a small charge, worth a few months in jail if neither of them confesses (the 'cooperation' option). However, if one of them rats their partner out, they should be able to get themselves charged with only a minor offence for complicity, worth a small fine, whilst their partner will get a couple of years behind bars. But if both tell on one another, revealing their partnership in the crime, both can expect a sentence of around a year.

The puzzle falls (in mathematics) under the title of game theory, and was first formally quantified in the 1950s, although the vague principle was understood for years before that. The real interest of the puzzle comes in the strange, self-conflicting logic of the situation: in all cases, a prisoner gets a reduced punishment by ratting their partner out (a fine rather than a few months in jail if their partner stays quiet, and one year rather than two if their partner tells too), but the consequence of both following this 'logical' path is a worse punishment than if neither of them had. Basically, if one of them is a dick then they win, but if both of them are dicks then they both lose.
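
To make that logic concrete, here is a minimal sketch of the payoff table in Python- the exact month counts are my own rough translation of the story above (a fine is counted as zero months), and only their ordering actually matters:

    # Punishments, in months of jail, keyed by (your choice, partner's choice).
    # Values are illustrative- a fine is approximated as 0 months.
    PUNISHMENT = {
        ("quiet", "quiet"): (3, 3),    # both cooperate: a few months each
        ("quiet", "tell"):  (24, 0),   # you stay quiet, partner rats: years vs a fine
        ("tell",  "quiet"): (0, 24),
        ("tell",  "tell"):  (12, 12),  # both rat: around a year each
    }

    # Whatever your partner does, telling leaves you better off:
    for partner in ("quiet", "tell"):
        quiet_months = PUNISHMENT[("quiet", partner)][0]
        tell_months = PUNISHMENT[("tell", partner)][0]
        print(f"partner chooses '{partner}': tell = {tell_months} months, "
              f"quiet = {quiet_months} months")

Run it and 'tell' wins in both rows- yet two prisoners who both follow that logic end up with a year each instead of a few months.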

The basic principle of this can be applied to hundreds of situations; the current debate concerning climate change is one example. Climate change is a Bad Thing that looks set to cause untold trillions of dollars in damage over the coming years, and nobody actively wants to screw over the environment; however, solving the problem now is very expensive for any country, and everyone wants it to be somebody else's problem. The 'cooperate' situation is therefore for everyone to introduce expensive measures to combat climate change, whilst the 'being a dick' situation is to let everyone else do that while you don't bother- reaping the benefits of both a mostly-fixed environment and the relative economic boom you enjoy as business rushes to invest in a country demanding lower taxes. What we are stuck with now, however, is the 'everyone being a dick' scenario, in which nobody wants to make a massive investment in sustainable energy and the like for fear of nobody else doing it- and look what that's doing to the planet.

But I digress; the point is that taking the 'cooperate' option is the 'best' thing to do overall, yet it seems to make logical sense not to, and 90% of the moral and religious arguments made over the past couple of millennia can be reduced to attempts to make people pick the 'cooperate' option in all situations. That they don't is clearly evidenced by the fact that we still need armies for defensive purposes (it would be cheaper for us not to, but we can't risk the consequences of someone raising an army to royally screw everyone over) and by the 'mutually assured destruction' situation that developed between the American and Soviet nuclear arsenals during the Cold War.

Part of the problem with the prisoner's dilemma concerns what is called the 'iterative prisoner's dilemma'- i.e. when the situation gets repeated over and over again. This becomes a problem because people can quickly learn what kind of behaviour you are likely to adopt: if you constantly take the 'nice' option, people will learn that you can easily be beaten by their repeatedly taking the 'arsehole' option, meaning that the 'cooperate' option stops being the attractive, logical one (even if it remains the nice one). All this changes, however, if you then find yourself able to retaliate, making the whole business turn back into the giant pissing contest of 'dick on the other guy' we were trying to avoid. A huge amount of research and experimentation has been done into the 'best' strategy for an iterative prisoner's dilemma, and the broad finding is that a 'nice', non-envious strategy, able to retaliate against an aggressive opponent but quick to forgive, is most usually the best; but since, in the real world, each successive policy change takes a large amount of resources, this is frequently difficult to implement. It is also a lot harder to model 'successful' strategies in continuous, rather than discrete, iterative prisoner's dilemmas (it is 'dilemmas', incidentally, not 'dilemmae'), such as feature most regularly in the real world.
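
For the curious, here is a toy version of that research in Python- a sketch only, pitting the famously 'nice but retaliatory' tit-for-tat strategy against a pure arsehole. The points scheme (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone sucker) is the convention from Axelrod-style tournaments, not something fixed by the dilemma itself:

    # Iterated prisoner's dilemma sketch: "C" = cooperate, "D" = defect.
    SCORES = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Nice (cooperates first), retaliatory, and instantly forgiving.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)   # each sees only the other's past moves
            move_b = strategy_b(history_a)
            points_a, points_b = SCORES[(move_a, move_b)]
            score_a += points_a
            score_b += points_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))  # (9, 14): loses only the first round
    print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual niceness pays best

The arsehole beats tit-for-tat head to head, but a pair of nice players rack up 60 points between them over ten rounds where a pair of arseholes would manage only 20- which is roughly why the 'nice, retaliatory, forgiving' family of strategies keeps winning these tournaments.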

To many, the prisoner's dilemma is a somewhat depressing prospect. Present in almost all walks of life, it offers countless examples of people picking options that seem logical but work out negatively in the long run, simply because they haven't realised the game theory of the situation. It is a puzzle that appears to show the logical benefit of selfishness whilst simultaneously demonstrating its destructiveness, and thus human nature's predisposition towards pursuing the 'destructive' option. But to me it's quite a comforting idea: not only does it show that 'logic' is not always as straightforward as it seems- justifying the fact that a viewpoint that seems blatantly, logically obvious to one person may not be the definitive correct one- but it also reveals to us the mathematics of kindness, and that the best way to play a game is the nice way.

Oh, and for a possibly unique, eminently successful and undoubtedly hilarious solution to the prisoner’s dilemma, I refer you here. It’s not a general solution, but it’s still a pretty cool one 🙂

Numbers

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn't prevented this 'neat little tool' spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I'm British, live with the spelling) first tentatively applied itself to shapes and geometry, before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after discovering something useful with it, and finally throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than the sake of its own existence.

This is not to say that mathematics is not a useful field- far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing; huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles; mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy); and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is 'squaring' it (since a number 'squared' is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, a negative number squared makes a positive one, so there is no real number that squares to make a negative- and hence no such thing as the square root of a negative number such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about 'real' numbers; but maths decided that it didn't want to not be able to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
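
If you want to watch this 'made-up' number behave itself, Python happens to carry complex numbers natively (using the engineers' j rather than the mathematicians' i), which makes for a quick demonstration:

    import cmath  # the standard library's complex-number maths module

    i = 1j                      # Python's spelling of the imaginary unit
    print(i * i)                # (-1+0j): i squared really is -1
    print(cmath.sqrt(-1))       # 1j: the square root of -1 exists once you allow i
    print((3 + 4j) * (3 - 4j))  # (25+0j): complex arithmetic just works

Totally arbitrary and made-up it may be, but it is at least perfectly consistent.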

Admittedly, i has turned out to be useful. When considering electromagnetic forces, quantum physicists generally assign real and imaginary quantities to the electrical and magnetic components in order to keep said components distinct, but i's main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician's arsenal- something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which to push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world- Fermat's last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same was true for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus' Arithmetica, to have a proof of this fact 'that is too large for this margin to contain'. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his 'marvellous proof') took one man, Andrew Wiles, around a decade to complete. His proof involved showing that the terms involved in the theorem could be expressed in the form of an incredibly weird equation that doesn't exist in the real world, and that all equations of this type had a counterpart equation of an equally irrelevant type; however, since the 'Fermat equation' was too weird to exist in the other format, it could not logically be true.
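
Just to spell out what Fermat was actually claiming, here's a little brute-force search in Python- which of course proves nothing whatsoever (that took Wiles a decade), but it shows the shape of the statement. The search limit of 50 is an arbitrary choice of mine:

    # Fermat's last theorem: no positive whole numbers a, b, c satisfy
    # a**n + b**n == c**n for any power n greater than 2.
    limit = 50
    for n in (2, 3, 4, 5):
        hits = [(a, b, c)
                for a in range(1, limit)
                for b in range(a, limit)
                for c in range(1, 2 * limit)
                if a**n + b**n == c**n]
        print(n, hits[:3])

    # n = 2 prints Pythagorean triples like (3, 4, 5); every n > 2 prints
    # an empty list, exactly as Fermat promised.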

To a mathematician, this was the holy grail; not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise made no contribution to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect then, it was a total waste of time.

However, despite everything I've just said, I'm not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things: ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where major prizes are won at a younger age than in any other branch of STEM, apparently full of those likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take on a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can't quite remember who: "A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast". This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be input into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (calculating gunnery tables was the US military's first major use for computers) and logical games like chess.

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It's still got a pattern of lights coming out of it, it's still the same colours- it's still a screen. To a computer, however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of object. Because the pixels are in a different order, and as such the data is different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just seen from different angles.

There are two potential solutions to this problem. Firstly, the computer could look at the monitor and store an image of it from every conceivable angle against every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- but this would take up a library's worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming: by teaching the computer to spot patterns of pixels that look roughly similar (either shifted along by a few bytes, or missing a few here and there), it can be 'trained' to pick out basic shapes, and by using an algorithm to pick out changes in colour (an old trick that's been used for years to clean up photos), the edges of objects can be identified and separate objects themselves picked out. I am not by any stretch of the imagination an expert in this field so won't go into details, but by this basic method a computer can step back and begin to look at the pattern of a picture as a whole.
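
For a flavour of that 'changes in colour' trick, here is a crude sketch in Python using numpy, run on a toy 8x8 'image'- the sensitivity threshold is an arbitrary choice of mine, and real edge detectors (Sobel, Canny and friends) are considerably more sophisticated:

    import numpy as np

    def edge_map(image, threshold=30):
        # Mark any pixel whose brightness differs sharply from the pixel
        # to its right or the pixel below it.
        img = image.astype(int)                # avoid unsigned-byte wraparound
        diff_x = np.abs(np.diff(img, axis=1))  # change vs neighbour to the right
        diff_y = np.abs(np.diff(img, axis=0))  # change vs neighbour below
        edges = np.zeros(image.shape, dtype=bool)
        edges[:, :-1] |= diff_x > threshold
        edges[:-1, :] |= diff_y > threshold
        return edges

    # A toy 'image': a bright square on a dark background.
    img = np.zeros((8, 8), dtype=np.uint8)
    img[2:6, 2:6] = 200
    print(edge_map(img).astype(int))  # the 1s trace the square's outline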

But all that information inputting, all that work… so your computer can identify just a monitor? What about the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven't even got on to differentiating between different types of things yet… how will we ever match the human brain?

This idea presented a big setback for the development of modern AI. We have been able to develop AI that allows a computer to handle a few real-world tasks or applications very well (in some cases, depending on the task's suitability to the computational mind, better than humans), but scientists and engineers faced a monumental challenge at the prospect of trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists: the idea of building into a robot's design a capacity for learning. The latest generation of the Japanese 'Asimo' robots can learn what various objects presented to them are, and are then able to recognise those objects when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do: first they are presented with a mirror and learn to manipulate their leg motors in such a way as to stand up straight and walk (although they aren't quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently this covers just actions, but who knows what lies around the corner…

The Age of Reason

Science is a wonderful thing- particularly in the modern age, where the more adventurous (or more willing to tempt fate, depending on your point of view) like to think that most of science is actually pretty much done and dusted. I mean, yes, there are a lot of little details we have yet to work out, but the big stuff, the major hows and whys, has been basically sorted out. We know why there are rainbows, why quantum tunnelling composite appears to defy basic logic, and even why you always seem to pick the slowest queue- science appears to have got it pretty much covered.

[I feel I must take this opportunity to point out one of my favourite stories about the world of science- at the start of the 20th century, there was a prevailing attitude among physicists that physics was going to last, as an advanced science, for only about another 20 years or so. They basically presumed that they had worked almost everything out, and now all they had to do was tie up the loose ends. However, one particular loose end, the photoelectric effect, simply refused to budge under their classical scientific laws. Building on Max Planck's idea of quantised energy, it was Einstein who came up with a solution: by modelling light (which everyone knew was a wave) as a particle instead, he opened the door to the modern age of quantum theory. Physics as a whole took one look at all the new questions this proposed and, as one, took a collective facepalm.]

In any case, we are now at such an advanced stage of the scientific revolution, that there appears to be nothing, in everyday life at least, that we cannot, at least in part, explain. We might not know, for example, exactly how the brain is wired up, but we still have enough of an understanding to have a pretty accurate guess as to what part of it isn’t working properly when somebody comes in with brain damage. We don’t get exactly why or how photons appear to defy the laws of logic, but we can explain enough of it to tell you why a lens focuses light onto a point. You get the idea.

Any scientist worth his salt will scoff at this- a chemist will bang on about the fact that nanotubes were only developed a decade ago and will revolutionise the world within another, a biologist will tell you about the myriad species we know next to nothing about, and the myriad more that we haven't discovered yet, and a theoretical physicist will start quoting logical impossibilities and make you feel like a complete fool. But this is all, really, rather high-level science- the day-to-day stuff is all pretty much done. Right?

Well… it’s tempting to think so. But in reality all the scientists are pretty correct- Newton’s great ocean of truth remains very much a wild and unexplored place, and not just in all the nerdy places that nobody without 3 separate doctorates can understand. There are some things that everybody, from the lowliest man in the street to the cleverest scientists, can comprehend completely and not understand in the slightest.

Take, for instance, the case of Sugar the cat. Sugar was a part-Persian with a hip deformity who often got uncomfortable in cars, so when her family moved house, they opted to leave her with a neighbour. After a couple of weeks, Sugar disappeared- before reappearing 14 months later… at her family's new house. What makes this story even more remarkable? The fact that Sugar's owners had moved from California to Oklahoma, and that a cat with a severe hip problem had trekked 1,500 miles, over 100 a month, to a place she had never even seen. How did she manage it? Nobody has a sodding clue.

This isn't the only story of a long-distance cat return, although Sugar holds the distance record. But an ability to navigate that a lot of sat navs would be jealous of isn't the only surprising oddity in the world of nature. Take leopards, for example: the most common, yet hardest to find, and possibly deadliest of 'The Big Five', they are, as everyone knows, born killers. Humans, by contrast, are in many respects born prey- we are slow over short distances, have no horns, claws, long teeth or other natural defences, are fairly poor at hiding and don't even live in herds for safety in numbers. Especially vulnerable are, of course, babies and young children, who by animal standards take an enormously long time even to stand upright, let alone mature. So why exactly, in 1938, were a leopard and her cubs found living with a near-blind human child whom she had carried off as a baby five years earlier? Even more remarkable was the child's superlative sense of smell- he was able to differentiate between different people and even objects with nothing more than a good sniff- which also reminds me of a video I saw a while ago of a blind Scottish boy who can tell what material something is made of and how far away it is (well enough to play basketball) simply by making a clicking sound with his mouth.

I'm not really sure what I'm trying to say in this post- I have a sneaking suspicion my subconscious simply wanted an excuse to share some of the weirdest stories I have yet to see on Cracked.com. So, to round off, I'll leave you with a final one. In 1984 a hole was found in a farm in Washington State, about 3 metres by 2 and around 60cm deep. 25 metres away, the three tons of grass-covered earth that had previously filled the hole was found- completely intact, in a single block. One person described it as looking like it had been cut away with 'a gigantic cookie cutter', but this failed to explain why all of the roots hanging off it were intact. There were no tracks or any other distinguishing features apart from a dribble of earth leading between hole and divot, and the closest thing anyone had to an explanation was to lamely point out that there had been a minor earthquake 20 miles away a week beforehand.

When I invent a time machine, forget killing Hitler- the first thing I’m doing is going back to find out what the &*^% happened with that hole.