Crypto

Cryptography is a funny business; shady from the beginning, the whole world of codes and ciphers has been specifically designed to hide your intentions and let you move in the shadows, unnoticed. However, the art of cryptography has changed almost beyond recognition in the last hundred years thanks to the invention of the computer, and what was once an art limited by the imagination of the nerd responsible has now become a question of sheer computing might. But, as always, the best place to start this story is at the beginning…

There are two different methods of applying cryptography to a message: with a code or with a cipher. A code is a system of replacing whole words with other words (‘Unleash a fox’ might mean ‘Send more ammunition’, for example), whilst a cipher involves changing individual letters and their ordering. Codes are generally limited to a few phrases that can be easily memorised, and/or require endless cross-referencing with a book of known ‘translations’, as well as being relatively insecure when it comes to highly secretive information. Therefore, most modern encoding (yes, that word is still used; ‘enciphering’ sounds stupid) takes the form of ciphers, and has done for hundreds of years; they rely solely on the application of a simple rule, require far smaller reference manuals, and are more secure.

Early attempts at ciphers were charmingly simple; the ‘Caesar cipher’ is a classic example, famously used by (and named after) Julius Caesar, in which each letter is replaced by the one three along from it in the alphabet (so A becomes D, B becomes E and so on). Augustus Caesar, who succeeded Julius, didn’t set much store by cryptography and used a similar system with only a one-place shift (so A to B and such), despite the fact that knowledge of the Caesar cipher was widespread; his messages were hopelessly insecure. These ‘substitution ciphers’ suffered from a common problem: the relative frequency with which certain letters appear in the English language (E being the most common, followed by T) is well-known, so by analysing the frequency of the letters occurring in a substitution-enciphered message one can work out fairly accurately which letter corresponds to which, and work out the rest from there. This problem can be partly overcome by careful phrasing and keeping messages short, but it’s nonetheless a problem.
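
For the curious, both the cipher and the attack on it fit in a few lines of code; here’s a minimal Python sketch (the example message and helper names are my own inventions):

```python
from collections import Counter

def caesar(text, shift=3):
    """Replace each letter with the one 'shift' places along (A->D, B->E...)."""
    return ''.join(
        chr((ord(c) - ord('A') + shift) % 26 + ord('A')) if c.isalpha() else c
        for c in text.upper()
    )

def frequency_attack(ciphertext):
    """Count letters; the commonest one most likely stands for E."""
    return Counter(c for c in ciphertext if c.isalpha()).most_common(3)

ciphertext = caesar("SEND MORE AMMUNITION TO THE EASTERN FRONT")
print(ciphertext)               # VHQG PRUH DPPXQLWLRQ WR WKH HDVWHUQ IURQW
print(frequency_attack(ciphertext))
```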

Another classic method is the transposition cipher, which changes the order of letters- the trick lies in having a suitable ‘key’ with which to do the reordering. A classic example is to write the message into a rectangle of a size known to both encoder and recipient, writing in columns but ‘reading it off’ in rows; the recipient can then reverse the process to recover the original message. This is a nice method, and it’s very hard to decipher a single message encoded this way, but if the ‘key’ (here, the size of the rectangle) is not changed regularly then one’s adversaries can figure it out after a while. The army of ancient Sparta used a kind of transposition cipher based on a tapered wooden rod called a skytale (pronounced skih-tah-ly), around which a strip of paper was wrapped and the message written down it, one letter on each turn of the paper. The recipient then wrapped the paper around a skytale of identical girth and taper (the tapering prevented the letters being evenly spaced, making the cipher harder to crack), and read the message off- again, a nice idea, but the need to make a new set of skytales for everyone every time the key needed changing rendered it impractical. Nonetheless, transposition ciphers are a nice idea, and the Union used them to great effect during the American Civil War.
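
Again, a quick sketch makes the rectangle method concrete; this is my own minimal Python version, assuming the simplest form of the scheme (with X’s padding out an incomplete rectangle):

```python
def rect_encipher(message, rows):
    """Write the message down the columns of a 'rows'-deep rectangle,
    then read the ciphertext off along the rows."""
    message = message.replace(" ", "").upper()
    while len(message) % rows:          # pad so the rectangle is full
        message += "X"
    cols = len(message) // rows
    return ''.join(message[c * rows + r] for r in range(rows) for c in range(cols))

def rect_decipher(ciphertext, rows):
    """The recipient, knowing the rectangle's size, simply reverses the process."""
    cols = len(ciphertext) // rows
    return ''.join(ciphertext[r * cols + c] for c in range(cols) for r in range(rows))

ct = rect_encipher("ATTACK AT DAWN", rows=3)
print(ct)                         # AAAATCTWTKDN
print(rect_decipher(ct, rows=3))  # ATTACKATDAWN
```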

In the last century, cryptography has developed into even more of an advanced science, and most modern ciphers are based on the concept of the substitution cipher- however, to avoid the problem of letter frequencies giving the game away, modern ciphers use intricate and elaborate systems to change by how much the ‘value’ of each letter shifts every time. The German Lorenz cipher machine used during the Second World War (and whose solving I have discussed in a previous post) involved putting the message through a set of twelve wheels and electrical pickups to produce another letter; but the wheels moved on a click after each letter was typed, totally changing the internal mechanical arrangement. The only way the British cryptographers working against it could find to solve it was brute force: designing a computer specifically to test huge numbers of possible starting positions for the wheels against likely messages. This generally took them several hours- but had they had a computer as powerful as the one I am typing on, then provided it was set up in the correct manner it would have had the raw power to ‘solve’ the day’s starting positions within a few minutes. Such is the power of modern computers, and against such opponents must modern cryptographers pit themselves.
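
Colossus’s real attack was statistical and far cleverer than blind trial-and-error, but the brute-force principle scales down nicely to the Caesar cipher from earlier; a toy Python sketch (the crude ‘English-ness’ scoring is my own invention):

```python
def caesar_decipher(text, shift):
    """Shift each letter back by 'shift' places to undo a Caesar cipher."""
    return ''.join(
        chr((ord(c) - ord('A') - shift) % 26 + ord('A')) if c.isalpha() else c
        for c in text.upper()
    )

def english_score(text):
    """Crude fitness test: count occurrences of the commonest English letters."""
    return sum(text.count(c) for c in "ETAOINSH")

ciphertext = "VHQG PRUH DPPXQLWLRQ"
# Try every one of the 26 possible keys and keep the most English-looking
# result; with enough ciphertext, the correct key almost always wins.
best = max(range(26), key=lambda k: english_score(caesar_decipher(ciphertext, k)))
print(best, caesar_decipher(ciphertext, best))    # 3 SEND MORE AMMUNITION
```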

One technique used nowadays, the ‘trapdoor cipher’, works by presenting a computer with a number that is simply too big for it to deal with. The principle is relatively simple; it is far easier to work out that 17 x 19 = 323 than it is to find the prime factors of 323, even with a computer, so if we upscale this business to huge numbers a computer will whimper and hide in the corner just looking at them. If we take two prime numbers, each more than 100 digits long (this is, by the way, the source of the oft-quoted story that the CIA will pay $10,000 to anyone who finds a prime number of over 100 digits, due to its intelligence value) and multiply them together, we get a vast number with only two prime factors, which we shall for now call M. Then, we convert our message into number form (so A=01, B=02, I LIKE TRAINS=0912091105201801091419) and raise the resulting number to the power of a third (smaller; three digits will do) prime number. This will yield a number somewhat bigger than M, and successive lots of M are then subtracted from it until it reaches a number less than M (this is known as modulo arithmetic, and is best visualised by example: 19+16=35, but 19+16 (mod 24)=11, since 35-24=11). This number is then passed to the intended recipient, who can decode it relatively easily (well, so long as they have a correctly programmed computer) if they know the two prime factors of M (this business is actually known as the RSA problem, and for reasons I cannot hope to understand, current mathematical thinking suggests that finding the prime factors of M is the easiest way of solving it; however, this has not yet been proven, and the matter is still open for debate). However, even if someone trying to decode the message knows M and has the most powerful computer on earth, it would take them thousands of years to find out what its prime factors are. To many, trapdoor ciphers have made cryptanalysis (the art of breaking someone else’s codes) a dead art.
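
A toy Python version of the whole scheme (Python 3.8+ for the modular inverse), using the tiny primes from the example above so the numbers stay readable; real keys use primes hundreds of digits long, but the mechanics are identical:

```python
p, q = 17, 19                  # the two 'secret' primes (absurdly small here)
M = p * q                      # 323: the public modulus, safe to share
e = 5                          # the public exponent (the 'third prime' above)

# The decryption exponent d can only be computed if you know p and q:
# d is the number satisfying d * e = 1 (mod (p-1)*(q-1))
d = pow(e, -1, (p - 1) * (q - 1))

plaintext = 42                 # a message already converted to a number below M
ciphertext = pow(plaintext, e, M)    # raise to the power e, keep the remainder mod M
recovered = pow(ciphertext, d, M)    # the recipient undoes it with d
print(ciphertext, recovered)         # 264 42
```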

Man, there’s a ton of cool crypto stuff I haven’t even mentioned yet… screw it, this is going to be a two-parter. See you with it on Wednesday…

NMEvolution

Music has been called by some the greatest thing the human race has ever done, and at its best it is undoubtedly a profound expression of emotion more poetic than anything Shakespeare ever wrote. True, done badly it can sound like a trapped cat in a box of staplers falling down a staircase, but let’s not get hung up on details here- music is awesome.

However, music as we know it has only really existed for around a century or so, and many of the developments in music’s history that have shaped it into the tour de force it is in modern culture run in direct parallel to human history. As such, the history of our development as a race and the development of music run closely alongside one another, so I thought I might attempt a set of edited highlights of the former (well, western history at least) by way of an exploration of the latter.

Exactly how and when the various instruments as we know them were invented and developed into what they currently are is largely irrelevant (mostly since I don’t actually know and don’t have the time to research all of them), but historically they fell into one of two classes. The first could be loosely dubbed ‘noble’ instruments- stuff like the piano, clarinet or cello, which were (and are) hugely expensive and required significant skill to make, and were generally played for and by the rich upper classes in vast orchestras, playing centuries-old music written by the very few men with both the riches, social status and talent to compose it. On the other hand, we have the less historically significant, but just as important, ‘common’ instruments, such as the recorder and the ancestors of the acoustic guitar. These were a lot cheaper to make and thus more available to (although certainly far from widespread among) the poorer echelons of society, and it was on these instruments that tunes were passed down from generation to generation, accompanying traditional folk dances and the like; the kind of people who played such instruments very rarely had the time to spare to really write anything new for them, and certainly stood no chance of making a living out of them. And, for many centuries, that was it- what you played and what you listened to, if you did so at all, depended on who you were born as.

However, during the great socioeconomic upheaval and levelling that accompanied the 19th century industrial revolution, music began to penetrate society in new ways. The growing middle and upper-middle classes quickly adopted the piano as a respectable ‘front room’ instrument for their daughters to learn, and sheet music was rapidly becoming both available and cheap for the masses. As such, music became an accessible activity for far larger swathes of the population, and concert attendances swelled. This was the Romantic era of music composition, with the likes of Chopin, Mendelssohn and Brahms rising to prominence, and orchestras grew considerably towards their modern size of four thousand violinists, two oboes and a bored drummer (I may be a little out in my numbers here) as composers sought to add some new experimentation to their music. This experimentation with classical orchestral forms was continued through the turn of the century by a succession of orchestral composers, but the period also saw music head in a new and violently different direction: jazz.

Jazz was the quintessential product of the United States’ famous motto ‘E Pluribus Unum’ (From Many, One), being as it was the result of a mixing of immigrant US cultures. Jazz originated amongst America’s black community, many of whom were descendants of imported slaves or even former slaves themselves, and was the result of traditional African music blending with that of their forcibly-adopted land. Whilst many black people were heavily discriminated against when it came to finding work, they found they could forge a living in the entertainment industry, in seedier venues like bars and brothels. First finding its feet in the irregular, flowing rhythms of ragtime music, the music of the deep south moved onto the more discordant patterns of blues in the early 20th century before finally incorporating a swinging, syncopated rhythm and an innovative sentiment of improvisation to invent jazz proper.

Jazz quickly spread like wildfire across the underground performing circuit, but it wouldn’t force its way into popular culture until the introduction of prohibition in the USA. From 1920 all the way up until the presidency of Franklin D Roosevelt (whose repeal of the ban is a story in and of itself) the US government banned the sale of alcohol, which (as was to be expected, in all honesty) simply forced the practice underground. Dozens of illegal speakeasies (venues of drinking, entertainment and prostitution, usually run by the mob) sprang up in every district of every major American city, and they were frequented by everyone from the poorest street sweeper to the police officers who were supposed to be closing them down. And in these venues, jazz flourished. Suddenly, everyone knew about jazz- it was a fresh, new sound to everyone’s ears, something that stuck in the head and, because of its ‘common’, underground connotations, quickly became the music of the people. Jazz musicians such as Louis Armstrong (a true pioneer of the genre) became the first celebrity musicians, and the way the music’s feel resonated with the happy, prosperous mood of the economic good times led the 1920s to be dubbed ‘the Jazz Age’.

Countless things allowed jazz and its successors to spread around the world- the invention of the gramophone further enhanced public access to music, as did the new cultural phenomenon of the cinema, and even the Second World War, which allowed for truly international spread. By the end of the war, jazz, soul, blues, R&B and all their derivatives had spread from their mainly deep south origins across the globe, blazing a trail for all other forms of popular music to follow. And, come the 50s, they did so in truly spectacular style… but I think that’ll have to wait until next time.

NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, before just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than for the sake of its own existence.

This is not to say that mathematics is not a useful field, far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy), and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, a negative number squared always makes a positive one, hence no number squared can make a negative, and so there is no such thing as the square root of a negative number such as -1 (among the ‘real’ numbers, at least). So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers, but maths decided that it didn’t want to not be able to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
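
In fact, the ‘made-up’ numbers are so thoroughly domesticated these days that most programming languages ship with them built in; a quick Python demonstration (Python spells i as 1j, in the engineers’ style):

```python
import cmath

# Among the 'real' numbers the square root of -1 simply doesn't exist;
# math.sqrt(-1) raises a ValueError. Invent i, and the problem disappears:
i = 1j                 # Python's notation for i
print(i * i)           # (-1+0j): i squared really is -1
print(cmath.sqrt(-1))  # 1j: the complex square root is perfectly happy
```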

Admittedly, i has turned out to be useful. When considering electromagnetic forces, for instance, physicists often assign the electric and magnetic components real and imaginary quantities in order to keep track of the two, but i’s main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s box of tricks, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world: Fermat’s last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed for whole numbers a, b and c. The same question stood for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of the fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, around a decade to complete. His proof involved showing that any solution to Fermat’s equation could be rewritten as an incredibly weird kind of equation that doesn’t exist in the real world (an elliptic curve), and that all equations of this kind have a counterpart of an equally irrelevant kind (a modular form). However, since the ‘Fermat equation’ would be too weird to have such a counterpart, it cannot logically have any solutions.
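
For the record, the theorem itself fits on one line- it was only ever the proof that refused to fit in Fermat’s margin:

```latex
% Fermat's last theorem: Pythagoras' equation has no whole-number
% solutions once the power is raised above 2.
\[
  a^n + b^n = c^n \quad\text{has no solutions with } a, b, c \in \mathbb{Z}^{+}
  \text{ when } n > 2 .
\]
```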

To a mathematician, this was the holy grail; not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise made no contribution to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things; among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where the typical age for winning a major prize is younger than in any other branch of STEM, apparently full of people likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…