NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after discovering something useful, before just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than the sake of its own existence.

This is not to say that mathematics is not a useful field; far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing; huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles; mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy); and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, convention dictates that a negative number squared makes a positive one; hence no number squared makes a negative, and there is no such thing as the square root of a negative number, such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers, but maths decided that it didn’t want to be unable to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
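
As it happens, you can play with this made-up quantity yourself: Python has complex numbers built in (it spells i as j, an old engineering habit). A minimal sketch, purely by way of illustration:

```python
import cmath

i = 1j                 # Python's name for the imaginary unit
print(i ** 2)          # (-1+0j): i squared really does equal -1
print(cmath.sqrt(-1))  # 1j: the 'impossible' square root, conjured on demand
```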

Admittedly, i has turned out to be useful. When considering electromagnetic forces, quantum physicists generally assign the electric and magnetic components real and imaginary quantities respectively in order to keep track of which is which, but its main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s arsenal, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world- Fermat’s last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. This was also true for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of this fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, around a decade to complete. His proof involved showing that any solution to the theorem could be expressed in the form of an incredibly weird equation that doesn’t exist in the real world (an elliptic curve), that all equations of this type have a counterpart equation of an equally irrelevant type (a modular form), and that the would-be ‘Fermat equation’ was too weird to have any such counterpart- so no solution could logically exist.
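
If you fancy checking the claim for yourself- in a small, finite way that of course proves nothing about the infinitely many cases Wiles had to deal with- here’s a brute-force sketch in Python (the function name and search limits are my own inventions):

```python
from itertools import product

def fermat_counterexamples(n, limit):
    """Search for whole numbers a, b, c up to 'limit' with a^n + b^n == c^n."""
    nth_powers = {c ** n: c for c in range(1, limit + 1)}
    return [(a, b, nth_powers[a ** n + b ** n])
            for a, b in product(range(1, limit + 1), repeat=2)
            if a ** n + b ** n in nth_powers]

print(fermat_counterexamples(2, 20))   # Pythagoras: (3, 4, 5) and friends
print(fermat_counterexamples(3, 200))  # []: no whole-number cubes work
```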

To a mathematician, this was the holy grail; not only did it finally lay to rest an age-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise made no contribution to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to decide that all modern day mathematics is a waste of time; very few human activities ever are. Mathematics is many things; among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where the major prizes are won at a younger age than anywhere else in STEM, apparently full of those likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…


Bouncing horses

I have, over recent months, built up a rule against posts about YouTube videos, partly on the grounds that it’s bloody hard to make a full post out of them but also because there are most certainly a hell of a lot of good ones out there that I haven’t heard of, so any discussion of them is sure to be incomplete and biased, which I try to avoid wherever possible. Normally, this blog also rarely delves into what might be even vaguely dubbed ‘current affairs’, but since it regularly does discuss the weird and wonderful world of the internet and its occasional forays into the real world I thought that I might make an exception; today, I’m going to be talking about Gangnam Style.

Now officially the most liked video in the long and multi-faceted history of YouTube (taking over from the previous record holder and a personal favourite, LMFAO’s Party Rock Anthem), this music video by Korean rapper & pop star PSY was released over two and a half months ago, and for the majority of that time it lay in some obscure and foreign corner of the internet. Then, in that strange way that random videos, memes and general random bits and pieces are wont to do online, it suddenly shot to prominence thanks to the web collectively pissing itself over the sight of a chubby Korean bloke in sunglasses doing ‘the horse riding dance’. Quite how this was even discovered by some casual YouTube-surfer is something of a mystery to me given that said dance doesn’t even start for a good minute and a half or so, but the fact remains that it was, and that it is now absolutely bloody everywhere. Only the other day it became the first ever Korean single to reach no.1 in the UK charts, despite not having been translated from its original language, and it has even prompted a dance-off between rival Thai gangs prior to a gunfight. Seriously.

Not that it has met with universal appeal though. I’m honestly surprised that more critics didn’t get up in their artistic arms at the sheer ridiculousness of it, and the apparent lack of reason for it to enjoy the degree of success that it has (although quite a few probably got that out of their system after Call Me Maybe), but several did nonetheless. Some have called it ‘generic’ in music terms, others have found its general ridiculousness more tiresome and annoying than fun, and one Australian journalist commented that the song “makes you wonder if you have accidentally taken someone else’s medication”. That such criticism has been fairly limited can be partly attributed to the fact that the song itself is actually intended to be a parody anyway. Gangnam is a classy, fashionable district of the South Korean capital Seoul (PSY has likened it to Beverly Hills in California), and ‘Gangnam style’ is a Korean phrase referring to the kind of lavish & upmarket (if slightly pretentious) lifestyle of those who live there; or, more specifically, the kind of posers & hipsters who claim to affect ‘the Gangnam Style’. The song’s self-parody comes from the contrast between PSY’s lyrics, written from the first-person perspective of such a poser, and his deliberately ridiculous dress and dance style.

Such an act of deliberate self-parody has certainly helped to win plaudits from serious music critics, who have found themselves to be surprisingly good-humoured once told that the ridiculousness is deliberate and therefore actually funny- however, it’s almost certainly not the reason for the video’s over 300 million YouTube views, most of which surely go to people who’ve never heard of Gangnam, and certainly have no idea of the people PSY is mocking. In fact, there have been several different theories proposed as to why its popularity has soared quite so violently.

Most point to PSY’s very internet-friendly position on his video’s copyright. The Guardian claim that PSY has in fact waived his copyright to the video, but what is certain is that he has neglected to take any legal action over the dozens of parodies and alternate versions of his video, allowing others to spread the word in their own, unique ways and giving it enormous potential to spread, and spread far. These parodies have been many and varied in content, author and style, ranging from the North Korean government’s version aimed at satirising the South Korean presidential candidate Park Geun-hye (breaking their own world record for most ridiculous entry into a political pissing contest, especially given that it mocks her supposed devotion to an autocratic system of government, and one moreover that ended over 30 years ago), to the apparently borderline racist “Jewish Style” (neither of which I have watched, so cannot comment on). One parody has even sparked a quite significant legal case, with 14 California lifeguards being fired for filming, dancing in, or even appearing in the background of, their parody video “Lifeguard Style”; an investigation has since been launched by the City Council in response to the thousands of complaints and suggestions, one even from PSY himself, that the local government were taking themselves somewhat too seriously.

However, by far the most plausible reason for the mammoth success of the video is also the simplest: that people simply find it funny as hell. Yes, it helps a lot that such a joke was entirely intended (let’s be honest, he probably couldn’t have come up with quite such inspired lunacy by accident), and yes it helps how easily it has been able to spread, but to be honest the internet is almost always able to overcome such petty restrictions when it finds something it likes. Sometimes, giggling ridiculousness is just plain funny, and sometimes I can’t come up with a proper conclusion to these posts.

P.S. I forgot to mention it at the time, but last post was my 100th ever published on this little bloggy corner of the internet. Weird to think it’s been going for over 9 months already. And to anyone who’s ever stumbled across it, thank you; for making me feel a little less alone.

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument on its nature- today I am going to take on a more practical perspective, and have a go at just scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good for explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be inputted into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (such as the gunnery tables that were among the US military’s first uses for them), and logical games like chess.
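
To see just how at home a computer is with ‘rules and numbers’, here’s a minimal gunnery-table-style sketch in Python- it assumes a drag-free projectile on flat ground, a gross simplification of real ballistics, and the numbers are purely illustrative:

```python
import math

def projectile_range(speed, angle_deg, g=9.81):
    """Range of a drag-free projectile on flat ground: v^2 * sin(2*angle) / g."""
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / g

# A crude 'gunnery table': one row per barrel elevation, muzzle speed 300 m/s.
for angle in range(15, 90, 15):
    print(f"{angle:2d} degrees -> {projectile_range(300, angle):7.0f} m")
```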

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of object. Because the pixels are in a different order, and as such the data is different, the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just seen from different angles.
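
To make that concrete, here’s a toy sketch in Python- the ‘images’ are made-up 3×3 grids, nothing more- showing how badly naive pixel-by-pixel comparison copes with even a one-pixel shift:

```python
# Two tiny 'images' of the same diagonal stripe, shifted by one pixel.
image_a = [[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1]]
image_b = [[0, 1, 0],
           [0, 0, 1],
           [0, 0, 0]]

# Pixel-by-pixel equality: the only comparison a naive program knows.
matches = sum(a == b
              for row_a, row_b in zip(image_a, image_b)
              for a, b in zip(row_a, row_b))
print(f"{matches}/9 pixels agree")  # 4/9: to the computer, barely related
```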

There are two potential solutions to this problem. Firstly, the computer can look at the monitor and store an image of it from every conceivable angle with every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would however take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming- by teaching the computer to spot patterns of pixels that look roughly similar (either shifted along by a few bytes, or missing a few here and there), it can be ‘trained’ to pick out basic shapes- and by using an algorithm to pick out changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and separate objects themselves picked out. I am not by any stretch of the imagination an expert in this field so won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
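
As a rough illustration of that ‘changes in colour’ trick- a deliberately minimal sketch, not how any real photo-processing software does it- here’s a one-dimensional edge detector that flags wherever neighbouring pixel brightnesses jump sharply:

```python
def find_edges(row, threshold=50):
    """Return the indices where adjacent pixel brightness jumps sharply."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

# One scanline: dark background, a bright object, dark background again.
scanline = [10, 12, 11, 200, 205, 198, 15, 13]
print(find_edges(scanline))  # [2, 5]: the object's left and right edges
```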

But all that information inputting, all that work… so your computer can identify just a monitor? What about the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This issue presented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers were presented with a monumental challenge when faced with the prospect of trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into the robot’s design a capacity for learning. The latest generation of the Japanese ‘Asimo’ robots can learn what various objects presented to them are, and can then recognise those objects when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, being able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way that allows them to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently, this is just actions, but who knows what lies around the corner…
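
To give a flavour of that ‘show, name, recognise’ idea- and this is a toy of my own invention, nothing to do with how Asimo or those research robots actually work- here’s a Python sketch that memorises labelled feature vectors and later recognises whichever remembered object is nearest:

```python
import math

memory = {}  # label -> feature vector (made-up numbers standing in for senses)

def show(label, features):
    """Teaching phase: the robot is shown an object and told its name."""
    memory[label] = features

def recognise(features):
    """Recall phase: name the remembered object whose features are nearest."""
    return min(memory, key=lambda label: math.dist(memory[label], features))

show("cup", (0.9, 0.2))
show("ball", (0.5, 0.95))
print(recognise((0.85, 0.25)))  # cup: close enough to what it was taught
```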