Wub Wub

My brother has some dubstep on his iPod. To be honest, I’m not entirely sure why; he frequently says that the genre can be described less as music and more as ‘sounds’, and would be the first to claim that it’s far from a paragon of deeply emotional musical expression. Indeed, to some, dubstep is the embodiment of everything that is wrong with 21st century music: loud, brash and completely devoid of meaning.

I, personally, find dubstep more interesting than anything else. I don’t listen to much of it myself (my musical tastes tend to be more… lyrical), but find it inherently fascinating, purely because of the reaction people have to it. It’s a Marmite thing; people either love it or hate it, and some seem physically incapable of contemplating why others enjoy it. Or, indeed, why it’s considered music.

So, let’s take my favourite approach to the matter: an analytical perspective. The songs that people remember, that are catchy, that stick in the mind, that become old favourites and/or that the pop music industry attempts to manufacture in bulk, tend to have simple, regular and easily recognisable beats that one can tap or bounce along to. Their pace and rhythm too tend to be fairly standard, often being based approximately around the 70bpm rhythm of the human heart (or some multiple thereof). On top of these simple rhythms, usually based around a drumbeat (its prominence depending on genre), we tend to layer simple melodies with plenty of repeating patterns; think of how the pattern of a verse or chorus will usually be repeated multiple times throughout a song, and how even the different lines of a verse will often follow the same melodic pattern as one another. And then there are lyrics; whilst many genres, particularly jazz, may have entire libraries of purely instrumental pieces, few of these (film soundtracks excepted) have ever gained mainstream cultural impact. Lyrics are important; they allow us to sing along, which makes a song stick in our heads more effectively, and they can allow a song to carry meaning too. Think of just about any famous, popular song (Bohemian Rhapsody excepted; NOBODY can explain why that is both so popular and so awesome), and chances are it’ll have several of the above features. Even rap, where music is stripped down to its barest bones, bases itself around a strong, simple rhythm and a voice-dictated melody (come to think of it, rap is basically poetry for the modern age… I should do a post on that some time).

Now, let’s compare that analysis with probably the most famous piece of dubstep around: Skrillex’s ‘Bangarang’. Bring it up on YouTube if you want to follow along. Upon listening to the song, the beat is the first thing that becomes apparent; timing it I get a pace of 90bpm, the same rate as a fast walking pace or a fast, excited heartbeat- a rate that fits perfectly with the intentions of the music. It’s meant to excite, to get the blood pumping, to infuse the body with the beat, and to inspire a party atmosphere. The music is structured around this beat, but there is also an underlying ‘thump’ similar to the bass drum of a drumkit, just to reinforce the point. Then we come onto the melody; after an intro that reminds me vaguely of something the Jackson 5 may once have done (just from something I heard on the radio once), we begin to layer over this underlying sound. This is a common trick employed across all genres; start with something simple and build on top of it, in terms of both scale and noise. The music industry has known for a long time that loudness is compelling, hooks us in and sells records, hence the trend over the last few decades towards steadily increasing loudness in popular music… but that’s for another time. Building loudness encourages us to stick with a song, getting us drawn into it. The first added layer is a voice, not only giving us something to sing along to and recognise (after a fashion, since the words are rather unclear and almost meaningless), adding another hook for us, but also offering an early example of a repeated lyrical pattern- we have one two-line couplet repeated four times, with more layers of repeated bassline patterns successively added throughout, and the two lines of said couplet differing only in the way they end. Again, this makes it easy and compelling to follow. The words are hard to make out, but that doesn’t matter; one is supposed to just feel the gist, get into the rhythm of it. The words are just another layer. This portion of the song takes on a ‘verse’ role for the rest of it, as it is repeated several more times.

And then, we hit the meat and drink of the song; with the word ‘Bangarang’, everything shifts into a loud mesh of electronic sounds passed several times through an angle grinder. However, the beat (although pausing for emphasis at the moment of transition) remains the same, carrying over some consistency, and we once again find ourselves carried along by repeated patterns, both in the backing sounds and in the lyrics that are still (sort of) present. It’s also worth noting that the melody of the electronica pulses in time to the beat, enabling a listener/partygoer to rock to both beat and melody simultaneously, getting the whole body into it. This is our ‘chorus’- we again have repeating stanzas, but we return to our verse (building once again) before we get bored of the repetition. Then chorus again, and then a shift in tone; another common trick employed across all genres to hold our interest. We have a slight key change up, and our melody is taken over by a new, unidentified instrument/noise. We still have our original sound playing in the background to ensure the shift is not too weirdly abrupt, but this melody, again utilising short, repeated units, is what takes centre stage. We then have another shift, to a quiet patch, still keeping the background. Here the muted sounds offer us time for reflection and preparation; the ‘loud soft loud’ pattern was one used extensively by The Pixies, Nirvana and other bands in the late 1980s and 1990s, and continues to find a place in popular music to this day. We have flashes of loudness, just to whet our appetites for the return to chaos that is to come, and then we start to build again (another repeating pattern, you see, this time built in to the structure of the song). The loudness returns and then, just because this kind of thing doesn’t have a particularly natural end, we leave on some more unintelligible, distorted lyrics; because finishing on a lone voice isn’t just for ‘proper’ bands like, just off the top of my head, Red Hot Chili Peppers.

Notice how absolutely every one of those features I identified can be linked to other musical genres, the kind of thing present, admittedly in different formats, across the full spectrum of the musical world. The only real difference is that, rather than the voices & guitars of more traditional genres, dubstep favours entirely electronic sounds made on a computer; in that respect, combined with its unabashedly loud, party-centred character, it is the perfect musical representation of the 21st century thus far. In fact, the only thing on my original list that it lacks is a strong lyrical focus; in that respect, I feel that it is missing a trick, and that it could use a few more intelligible words to carry some meaning and become more recognised as A Thing. Actually, after listening to that song a few times, it reminds me vaguely of The Prodigy (apologies to any fans who are offended by this; I don’t listen to them much), but maybe that’s just me. Does all this mean dubstep is necessarily ‘good’ as a musical type? Of course not; taste is purely subjective. But to say that dubstep is merely noise, and that there is no reason anyone would listen to it, misses the point; it pulls the same tricks as every other genre, and they all have fans. No reason to begrudge them a few of those.

Crypto

Cryptography is a funny business; shady from the beginning, the whole world of codes and ciphers has been designed specifically to hide your intentions and let you move in the shadows, unnoticed. However, the art of cryptography has changed almost beyond recognition in the last hundred years thanks to the invention of the computer, and what was once an art limited by the imagination of the nerd responsible has now turned into a question of sheer computing might. But, as always, the best way to start this story is at the beginning…

There are two different methods of applying cryptography to a message: with a code or with a cipher. A code is a system involving replacing whole words or phrases with others (‘Unleash a fox’ might mean ‘Send more ammunition’, for example), whilst a cipher involves changing individual letters and their ordering. Use of codes is generally limited to a few phrases that can be easily memorised, and/or requires endless cross-referencing with a book of known ‘translations’, as well as being relatively insecure when it comes to highly secretive information. Therefore, most modern encoding (yes, that word is still used; ‘enciphering’ sounds stupid) takes the form of employing ciphers, and has done for hundreds of years; they rely solely on the application of a simple rule, require far smaller reference manuals, and are more secure.

Early attempts at ciphers were charmingly simple; the ‘Caesar cipher’ is a classic example, famously invented and used by Julius Caesar, where each letter is replaced by the one three along from it in the alphabet (so A becomes D, B becomes E and so on). Augustus Caesar, who succeeded Julius, didn’t set much store by cryptography and used a similar system, although with only a one-place shift (so A to B and such), despite the fact that knowledge of the Caesar cipher was widespread; his messages were hopelessly insecure. These ‘substitution ciphers’ suffered from a common problem; the relative frequency with which certain letters appear in the English language (E being the most common, followed by T) is well known, so by analysing the frequency of the letters occurring in a substitution-enciphered message one can work out fairly accurately which letter corresponds to which, and work out the rest from there. This problem can be partly overcome by careful phrasing of messages and using only short ones, but it’s nonetheless a problem.
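
Since the rule really is that simple, here’s a minimal sketch of a Caesar-style shift cipher in Python, purely for illustration (the function name and the three-place default shift are my own choices; set the shift to one for Augustus’s version, or use a negative shift to decipher):

```python
# A minimal Caesar cipher: shift each letter a fixed number of places
# along the alphabet, wrapping round from Z back to A.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(message, shift=3):
    """Encipher a message by shifting each letter 'shift' places."""
    result = []
    for ch in message.upper():
        if ch in ALPHABET:
            result.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

print(caesar("ATTACK AT DAWN"))       # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))   # shifting back again deciphers it
```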

Another classic method is to use a transposition cipher, which changes the order of letters- the trick lies in having a suitable ‘key’ with which to do the reordering. A classic example is to write the message in a rectangle of a size known to both encoder and recipient, writing in columns but ‘reading it off’ in rows. The recipient can then reverse the process to read the original message. This is a nice method, and it’s very hard to decipher a single message encoded this way, but if the ‘key’ (e.g. the size of the rectangle) is not changed regularly then one’s adversaries can figure it out after a while. The army of ancient Sparta used a kind of transposition cipher based on a tapered wooden rod called a skytale (pronounced skih-tah-ly), around which a strip of paper was wrapped and the message written down its length, one letter on each turn of the paper. The recipient then wrapped the paper around a skytale of identical girth and taper (the tapering prevented letters being evenly spaced, making it harder to decipher), and read the message off- again, a nice idea, but the need to make a new set of skytales for everyone every time the key needed changing rendered it impractical. Nonetheless, transposition ciphers are a nice idea, and the Union used them to great effect during the American Civil War.
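
The rectangle method is simple enough to sketch too. Here’s a rough Python illustration (the function names and the X-padding are my own simplifications), where the shared ‘key’ is just the number of rows in the rectangle:

```python
# A sketch of the rectangle transposition described above: write the
# message down the columns of a grid, then read it off along the rows.
# The 'key' shared by sender and recipient is the number of rows.

def encipher(message, rows):
    message = message.replace(" ", "").upper()
    while len(message) % rows:          # pad so the message fills the rectangle
        message += "X"
    cols = len(message) // rows
    # column c, row r holds message[c * rows + r]; read it off row by row
    return "".join(message[c * rows + r] for r in range(rows) for c in range(cols))

def decipher(ciphertext, rows):
    cols = len(ciphertext) // rows
    # reading the rows of the enciphered text back down the columns
    # reverses the process
    return "".join(ciphertext[r * cols + c] for c in range(cols) for r in range(rows))

ct = encipher("WE ARE DISCOVERED", 4)
print(ct)                  # WECREDOEAIVDRSEX
print(decipher(ct, 4))     # WEAREDISCOVEREDX
```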

In the last century, cryptography has developed into even more of an advanced science, and most modern ciphers are based on the concept of substitution ciphers- however, to avoid the problem of using letter frequencies to work out the key, modern ciphers use intricate and elaborate systems to change by how much the ‘value’ of the letter changes each time. The German Lorenz cipher machine used during the Second World War (and whose solving I have discussed in a previous post) involved putting each letter of the message through a series of rotating wheels and electrical contacts to produce another letter; but the wheels moved on one click after each letter was typed, totally changing the internal mechanical arrangement. The only way the British cryptographers working against it could find to solve it was through brute force, designing a computer specifically to test every single possible starting position for the wheels against likely messages. This generally took them several hours to work out- but if they had had a computer as powerful as the one I am typing on, then provided it was set up in the correct manner it would have the raw power to ‘solve’ the day’s starting positions within a few minutes. Such is the power of modern computers, and against such opponents must modern cryptographers pit themselves.
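
That brute-force approach- try every possible setting and see which produces something plausible- is easy to illustrate on a far humbler cipher. The sketch below is emphatically not the Lorenz attack, just the same idea in miniature: try all 26 shifts of a Caesar-style cipher and score each candidate against rough English letter frequencies (the scoring table and function names are mine, and deliberately crude):

```python
# Brute force in miniature: try every possible key, score each result
# against rough English letter frequencies, and keep the most plausible.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
FREQ = {'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5, 'I': 7.0, 'N': 6.7,
        'S': 6.3, 'H': 6.1, 'R': 6.0, 'D': 4.3, 'L': 4.0, 'U': 2.8}

def shift(text, key):
    """Shift every letter 'key' places along the alphabet."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] if c in ALPHABET else c
                   for c in text)

def english_score(text):
    """Crude 'English-ness' score: add up the frequencies of common letters."""
    return sum(FREQ.get(c, 0) for c in text)

def crack(ciphertext):
    best_key = max(range(26), key=lambda k: english_score(shift(ciphertext, -k)))
    return best_key, shift(ciphertext, -best_key)

print(crack("DWWDFN DW GDZQ"))   # recovers key 3 and "ATTACK AT DAWN"
```

The wartime machine was doing something conceptually similar, only against a vastly larger space of wheel settings rather than 26 shifts, which is why it needed to be a machine at all.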

One technique used nowadays presents a computer with a number that is simply too big for it to deal with; such systems are called ‘trapdoor ciphers’. The principle is relatively simple: it is far easier to find that 17 x 19 = 323 than it is to find the prime factors of 323, even with a computer, so if we upscale this business to start dealing with huge numbers a computer will whimper and hide in the corner just looking at them. If we take two prime numbers, each more than 100 digits long (this is, by the way, the source of the oft-quoted story that the CIA will pay $10,000 to anyone who finds a prime number of over 100 digits, due to its intelligence value) and multiply them together, we get a vast number with only two prime factors which we shall, for now, call M. Then, we convert our message into number form (so A=01, B=02, I LIKE TRAINS=0912091105201801091419) and the resulting number is then raised to the power of a third (smaller; three digits will do) prime number. This will yield a number far bigger than M, and successive lots of M are then subtracted from it until it reaches a number less than M (this is known as modulo arithmetic, and is best visualised by example: 19+16=35, but 19+16 (mod 24)=11, since 35-24=11). This number is then passed to the intended recipient, who can decode it relatively easily (well, so long as they have a correctly programmed computer) if they know the two prime factors of M (this business is actually known as the RSA problem, and for reasons I cannot hope to understand current mathematical thinking suggests that finding the prime factors of M is the easiest way of solving it; however, this has not yet been proven, and the matter is still open for debate). However, even if someone trying to decode the message knows M and has the most powerful computer on earth, it would take thousands of years to find out what its prime factors are. To many, trapdoor ciphers have made cryptanalysis (the art of breaking someone else’s codes) a dead art.
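
For anyone who wants to see the arithmetic in action, here is a toy version in Python built around the 17 x 19 = 323 example from above (with a deliberately small public exponent for readability; the variable names are mine, and real systems use primes hundreds of digits long, which is the entire point):

```python
# A toy trapdoor (RSA-style) cipher using the tiny primes from the text.
p, q = 17, 19
M = p * q                  # 323: public, and safe to share
phi = (p - 1) * (q - 1)    # 288: easy to compute only if you know p and q
e = 5                      # small public exponent (a prime, as described above)
d = pow(e, -1, phi)        # private decoding exponent, 173 (needs Python 3.8+)

def encode(number):
    # raise the message to the power e, then reduce it modulo M
    return pow(number, e, M)

def decode(ciphertext):
    # knowing d (which means knowing p and q) reverses the process
    return pow(ciphertext, d, M)

m = 42                     # the 'message', already in number form and less than M
c = encode(m)
print(c, decode(c))        # prints the gibberish-looking 264, then 42 again
```

Anyone can run encode, because M and e are public; only someone who knows d- which in practice means knowing the two prime factors of M- can run decode, and with 100-digit primes those factors are exactly what a computer cannot find in any sensible amount of time.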

Man, there’s a ton of cool crypto stuff I haven’t even mentioned yet… screw it, this is going to be a two-parter. See you with it on Wednesday…

The Consolidation of a World Power

I left my last post on the history of music at around 1969, which for many commentators marks the end of the era in which rock music was born. The 60s had been a decade of a hundred stories running alongside one another in the music world, each with their own part to play in the vast tapestry of innovation. Jimi Hendrix had risen from an obscure career playing the blues circuit in New York to being an international star, and one moreover who revolutionised what the music world thought a guitar could and should do- even before he became an icon of the psychedelic hippie music world, his hard & heavy guitar leads, in stark contrast to the tones of early Beatles and 60s pop music, had founded rock music’s harder edge. He in turn had borrowed from earlier pioneers: Jeff Beck, Eric Clapton, The Who (perhaps the first true rock band, given their wild onstage antics and heavy guitar & drumkit-based sound) and Bob Dylan (the godfather of folk rock and the blues-style guitar playing that rock turned into its harder sound), each of whom had their own special stories. However, there was a reason I focused on the story of the hippie movement in my last post- the story of a counter-culture precipitating a musical revolution was only in its first iteration, and would be repeated several times by the end of the century.

To some music nerds, however, Hendrix’s death aged just 27 (and after just four years of fame) in 1970 thanks to an accidental drug overdose marked the beginning of the end. The god of the guitar was dead, the beautiful voice of Janis Joplin was dead, Syd Barrett had split from Pink Floyd, another founding band of the psychedelic rock movement, having been driven to the brink of insanity by LSD (although he thankfully later managed to pull himself out of the self-destructive cycle and lived until 2006), and Floyd’s American counterparts The Velvet Underground broke up just a few years later. Hell, even The Beatles went in 1970.

But that didn’t mean it was the end- far from it. Rock music might have lost some of its guiding lights, but it still carried on regardless- Pink Floyd, The Who, Led Zeppelin and The Rolling Stones, the four biggest British bands of the time, continued to play an active role in the worldwide music scene, Zeppelin and The Who creating a huge fan rivalry. David Bowie was also continuing to show the world the mental ideas hiding beneath his endlessly crisp accent, and the rock world continued to swing along.

However, it was also during this time that a key division began to make itself firmly felt. As rock developed its harder sound during the 1960s, other bands and artists had followed The Beatles’ early direction by playing softer, more lyrical and acoustic sounds, music that was designed to be easy on the ear and played to and for mass appeal. This quickly got itself labelled ‘pop music’ (short for popular), and just as quickly the label became something of a term of abuse among serious rock aficionados. Since its conception, pop has always been more of a commercial enterprise, motivated less by a sense of artistic expression and experimentation and more by the promise of fame and fortune, which many consider a rather shallow ambition. But, no matter what the age, pop music has always been there, and more often than not has been topping the charts- people often talk about some age in the long-distant past as being the ‘best time for music’, before returning to lambast the kind of generic, commercially manufactured pop that no self-respecting musician could bring himself to genuinely enjoy and claiming that ‘most music today is rubbish’. They fail to remember, of course, just how much of the same kind of stuff was around in their chosen ‘golden age’ too- the world in general has simply chosen to forget it.

Nonetheless, this frustration with generic pop has frequently been a driving force for the generation of new forms of rock, in an attempt to ‘break the mould’. In the early seventies, for example, the rock world was increasingly described as tame or sterile, with relatively acoustic acts beginning to claim rock status. The Rolling Stones and company weren’t new any more, there was a sense of lacking innovation, and a feeling of musical frustration began to build. This frustration was further fuelled by the ending of the 25-year post-war economic boom, and the result, musically speaking, was punk rock. In the UK, it was The Sex Pistols and The Clash, in the USA The Ramones and similar, most of whom were ‘garage bands’ with little skill (Johnny Rotten, lead singer of The Sex Pistols, has frequently admitted that he couldn’t sing in the slightest, and there was a running joke at the time on the theme of ‘Here’s three chords. Now go start a band’) but the requisite emotion, aggression and fresh thinking to make them a musical revolution. Also developed a few years earlier was heavy metal, perhaps the only rock genre to have never had a clearly defined ‘era’ despite having been there, hiding around the back and on the sidelines somewhere, for the past 40 or so years. Its development was partly fuelled by the same kind of musical frustration that sparked punk, but was also the result of a bizarre industrial accident. Working at a Birmingham metal factory in 1965 when aged 17, future Black Sabbath guitarist Tony Iommi (the band initially went by the name The Polka Tulk Blues Band) lost the ends of the middle and ring fingers on his right hand. This was a devastating blow for a young guitarist, but Iommi compensated by easing the tension on his strings and developing two thimbles to cover his finger ends. By 1969, his string slackening had led him to detune his guitar down a minor third from E to C#, and to include slapping the strings with his fingers as part of his performance. This detuning, matched by the band’s bassist Geezer Butler, was combined with an idea formulated whilst watching the queues for the horror movie Black Sabbath- ‘if people are prepared to pay money to be scared, then why don’t we write scary music?’- to create the incredibly heavy, aggressive, driving and slightly ‘out of tune’ (to conventional ears) sound of heavy metal, which was further popularised by the likes of Judas Priest, Deep Purple and Mötley Crüe.

Over the next few years, punk would slowly fall out of fashion, evolving into harder variations such as hardcore (which never penetrated the public consciousness but would make itself felt some years later- read on to find out how) and leaving other bands to develop it into post-punk; a pattern repeated with other genres down the decades. The 1980s was the decade that saw hip hop come to the fore, partly in response to the newly-arrived MTV signalling the onward march of electronic, manufactured pop. Hip hop deliberately targeted a more underground, urban circuit in contrast to these clean, commercial sounds: music based almost entirely around a beat rather than a melody, allowing songs to be messed around with, looped, scratched and repeated, all for the sake of effect and atmosphere-building. Hip hop borrowed heavily from earlier funk and disco, spawned rap, gave the word ‘DJ’ a whole new meaning and, eventually, even led to dubstep. The decade also saw rock music really start to ‘get large’, with bands such as Queen and U2 filling football stadiums, paving the way for the sheer scale of modern rock acts and music festivals, and culminating, in 1985, with the huge global event that was Live Aid- not only a musical landmark, but something that fundamentally changed what it meant to be a musical celebrity and greatly influenced western attitudes to the third world.

By the late 80s and early 90s the business of counter-culture was at it again, this time with anger directed at a range of targets, from the manufactured tones of MTV and the boring, amelodic repetition of rap to the controversial policies of the Reagan administration, all of which fed a vast American ‘disaffected youth’ culture. This mood partly formulated itself into the thoughtful lyrics and iconic sounds of bands such as REM, but in other areas found its expression and anger in the remnants of punk. Kurt Cobain in particular drew heavy inspiration from ‘hardcore’ bands (see, I said they’d show up again) such as Black Flag, and the huge popularity of Nirvana’s ‘Smells Like Teen Spirit’ thrust grunge, along with many of the other genres blanketed under the title ‘alternative rock’, into the public consciousness (one of my earlier posts dealt with this, in some ways tragic, rise and fall in more detail). Once the grunge craze died down, it was once again left to other bands to formulate a new sound and scene out of the remnants of the genre, Foo Fighters being the most prominent post-grunge band around today. In the UK things went in a slightly different direction- this time the resentment was reserved more for the staged nature of Top of the Pops and the like, The Smiths leading the way into what would soon become indie rock or Britpop. This wave of British bands, such as Oasis, Blur and Suede, pushed back the influx of grunge and developed a prominence for the genre that made the term ‘indie’ seem a bit ironic.

Nowadays, there are so many different great bands, genres and styles pushing at the forefront of the musical world that it is difficult to describe what is the defining genre of our current era. Music is a bigger business than it has ever been before, both in terms of commercial pop sound and the hard rock acts that dominate festivals such as Download and Reading, with every band there is and has ever been forming a part, be it a thread or a whole figure, of the vast musical tapestry that the last century has birthed. It is almost amusing to think that, whilst there is so much that people could and do complain about in our modern world, it’s very hard to take it out on a music world that is so vast and able to cater for every taste. It’s almost hard to see where the next counter-culture will come from, or how their musical preferences will drive the world forward once again. Ah well, we’ll just have to wait and see…

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit that is not assembled from multiple separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC everything is built into one piece of material and assembled as a single unit. The main advantage of this is that, since the components don’t have to be manually attached to one another but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and ICs can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required of their manufacture surely makes it one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which consists of one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, then the circuit is represented as a 1, or a 0 if it is switched off. These memory circuits are generally grouped together, and so each group will consist of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. On a hard disk, patches of magnetised material, rather than memory circuits, represent the binary information.

Each little memory circuit, with its simple on/off value, represents one bit of information. 8 bits grouped together forms a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data that a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
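
As a quick illustration of that bits-and-bytes idea (the particular byte and the ASCII convention here are just an example of mine), here is how one byte’s worth of on/off switches becomes both a number and a character:

```python
# Eight on/off memory circuits -- one byte -- can represent any number
# from 0 to 255, and by convention a character as well.
bits = [0, 1, 0, 0, 0, 0, 0, 1]   # one byte: a pattern of eight switches

# each position is worth twice the one to its right: 128, 64, 32, ... 1
value = 0
for b in bits:
    value = value * 2 + b

print(value)        # 65
print(chr(value))   # 'A' -- the character this byte represents in ASCII
print(bin(value))   # 0b1000001, Python's own view of the same pattern
```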

A computer’s tool for doing this translation is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. This takes one or two inputs, either ‘on’ or ‘off’ binary ones, and translates them into another value. There are three basic types: AND gates (if both inputs equal 1, output equals 1- otherwise, output equals 0), OR gates (if either input equals 1, output equals 1- if both inputs equal 0, output equals 0), and NOT gates (if input equals 1, output equals 0, if input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and) or XOR (exclusive OR; output equals 1 only if exactly one input equals 1) gates. A computer’s CPU (central processing unit) will contain vast numbers of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions of the memory into what function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
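
Here’s a rough sketch of those gates as Python truth-table functions, plus one illustrative example of my own of what combining them buys you: a ‘half adder’ that adds two single bits, the sort of building block an ALU is assembled from.

```python
# The basic gates described above, as simple truth-table functions, and
# one example of combining them: a 'half adder', which adds two single bits.

def AND(a, b):  return 1 if a == 1 and b == 1 else 0
def OR(a, b):   return 1 if a == 1 or b == 1 else 0
def NOT(a):     return 0 if a == 1 else 1
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))   # 1 only if exactly one input is 1

def half_adder(a, b):
    # the sum bit is XOR, the carry bit is AND: 1 + 1 = 10 in binary
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# (0,0)->(0,0)  (0,1)->(1,0)  (1,0)->(1,0)  (1,1)->(0,1)
```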

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and that areas of the screen A, B and C are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…

A Continued History

This post looks set to at least begin by following on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to follow the history along from there until as close to today as I can manage, hopefully getting in a few of the basics of the workings of these strange and wonderful machines.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the progress of the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, designed his own programmable, mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and hardly added anything new to the field. Mechanical calculators had become viable commercial enterprises, getting steadily cheaper and cheaper, and, as technological exercises, were becoming ever more sophisticated with the invention of the analogue computer. These were, basically, less programmable versions of the difference engine- mechanical devices whose various cogs and wheels were so connected up that they would perform one specific mathematical function on a set of data. James Thomson in 1876 built the first, which could solve differential equations by integration (a fairly simple but undoubtedly tedious mathematical task), and later developments were widely used to process military data and to solve problems too large to tackle by human numerical methods. For a long time, analogue computers were considered the future of modern computing, but since they solved and modelled problems using physical phenomena rather than data they were restricted in capability to their original setup.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Cards and reels punched with holes had been around for a while, acting rather like programs- think of the holed-paper reels of a pianola or the punched cards used to automate the workings of a loom- but this was the first example of such devices being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). They were cheap, simple, could be both produced and read easily by a machine, and were even simple to dispose of. Hollerith’s company later went on to process the data of the 1890 US census, and would eventually form the core of IBM. The pattern of holes on these cards could be ‘read’ by a mechanical device with a set of levers that would go through a hole if there was one present, turning the appropriate cogs to tell the machine to count up one. This system carried on being used right up until the 1980s on IBM systems, and could be argued to be the first programming language.
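
As a toy model of the principle- a deliberate simplification of mine, since real cards were larger and encoded letters as well as digits- here’s how a column of holes can be read off as a digit, with a hole in row n simply meaning the digit n:

```python
# A toy punched card: '#' marks a hole. Each column encodes one digit,
# given by the row in which its hole sits (row 0 at the top).
card_rows = [
    "#....",   # row 0
    ".....",   # row 1
    "..#..",   # row 2
    ".....",   # row 3
    ".#...",   # row 4
    ".....",   # row 5
    "....#",   # row 6
    ".....",   # row 7
    "...#.",   # row 8
    ".....",   # row 9
]

def read_card(rows):
    digits = []
    for col in range(len(rows[0])):
        for row in range(10):
            if rows[row][col] == "#":   # the lever falls through the hole here
                digits.append(row)
                break
    return digits

print(read_card(card_rows))   # [0, 4, 2, 8, 6]
```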

However, to see the story of the modern computer truly progress we must fast forward to the 1930s. Three interesting people and achievements came to the fore here: in 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data digitally using on/off binary electrical signals, making it arguably the first digital computing device. In 1936, a bored German engineer called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand- to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and also operated in binary. His parts couldn’t be engineered well enough for it to ever work properly, but he kept at it to eventually build three more models and devise the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs based on reading and manipulating a set of symbols on a strip of tape. The symbol currently being read, together with the machine’s internal state, would determine what the machine wrote in its place, whether it moved left or right along the strip, and what state it entered next, and Turing proved that one of these machines could replicate the behaviour of any computer algorithm- and since computers are just devices running algorithms, that means it can replicate any modern computer too. Thus, if a Turing machine (as they are now known) could theoretically solve a problem, then so could a general algorithm, and vice versa if it couldn’t. These machines not only laid the foundations for computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as they were the first theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as a von Neumann architecture, after the man who first pointed out and explained the idea in response to Turing’s work.
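
Since Turing’s tape-and-rules idea is easier to see in action than in prose, here’s a minimal sketch of such a machine in Python- a toy rule table (the format and names are my own, chosen for readability) that simply flips every bit on its tape and halts at the first blank:

```python
# A minimal Turing-style machine: a tape of symbols, a read/write head,
# and a table of rules saying what to write, which way to move and which
# state to enter next. This toy machine flips every bit, then halts.
rules = {
    # (state, symbol read): (symbol to write, head movement, next state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", " "): (" ",  0, "HALT"),
}

def run(tape, state="flip", head=0):
    tape = list(tape) + [" "]          # a blank cell marks the end of the input
    while state != "HALT":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("1011"))   # 0100
```

A ‘universal’ machine is then just one whose rule table is clever enough to read another machine’s rule table off the tape and obey it- which is exactly the programs-as-data idea mentioned above.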

Turing machines contributed one further, vital concept to modern computing- that of Turing-completeness. A Turing-complete system is one capable of replicating the behaviour of any theoretically possible Turing machine (Turing described a single machine, now known as a Universal Turing machine, able to do exactly this), and thus of running any possible algorithm or computing any computable sequence. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring the Turing-completeness of a system is a key part of designing a computer system or programming language, to ensure its versatility and that it is capable of performing all the tasks that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, in 1942 there was a war on, and Turing was employed by the government’s code breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task since they knew the structure and internal layout of the Enigma machine. However, they were then faced with a new and more daunting prospect: the Lorenz cipher, encoded by an even more complex machine for which they had no blueprints. The Bletchley cryptographers nonetheless eventually worked out its logical functioning, and from this a method for deciphering it was formulated- but it required an iterative process that took hours of mind-numbing calculation to get a result out. A faster method of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world- the first electronic, digital, and partially programmable computer ever to exist. Its mathematical operation was not highly sophisticated- it used a photoelectric reader to scan the pattern of holes on a paper tape containing the encoded messages, and well over a thousand vacuum tubes (state-of-the-art electronics at the time) to compare these to another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not get a precise match since it didn’t know the original message content) it flagged up that setup as a potential one for the message’s encryption, which could then be tested, saving many hundreds of man-hours. But despite its inherent simplicity, its legacy was to prove a point to the world- that electronic, programmable computers were both possible and viable bits of hardware- and it paved the way for modern-day computing to develop.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the sense in which ‘understand’ applies to computers. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding every single aspect, in minute detail: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated a fair amount of research into the hows and whys of its installation, along with which came quite a lot of info about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data processing machine employed to perform a range of jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t. The human brain is not built for infallibility and, not infrequently, would make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and convert between units, tens, hundreds and so on (i.e. a turn of four spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could handle currency denominations and distances too. However, it could also subtract, multiply and divide (with some difficulty), and moreover proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to one of simply entering the wrong number.
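
The cog-and-carry idea is easy enough to sketch; here’s a toy Python model of it (purely illustrative- the names and the list-of-wheels representation are mine), reproducing the seven-plus-four example from above:

```python
# A toy model of Pascal's cogs: each 'wheel' counts 0-9, and turning one
# past 9 nudges the next wheel along by one place (the carry).
def add_on_wheels(wheels, digit, position=0):
    """wheels[0] is the units wheel, wheels[1] the tens, and so on."""
    wheels = list(wheels)
    wheels[position] += digit
    while position < len(wheels) - 1 and wheels[position] > 9:
        wheels[position] -= 10        # the wheel rolls over past nine...
        wheels[position + 1] += 1     # ...and nudges the next wheel on a place
        position += 1
    return wheels

print(add_on_wheels([7, 0, 0], 4))    # [1, 1, 0] -- i.e. eleven
```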

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several, of a range of designs, were built during the 18th century as show pieces, but by the 19th century the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, were sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials and the method of finite differences, a fiddly bit of maths that I won’t even pretend to fully understand, but that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
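
The trick behind it, as far as the maths goes, is that once a polynomial’s table of ‘differences’ has been set up, every further value can be produced by nothing but repeated addition- exactly the sort of thing a stack of cogged columns can do. A rough Python sketch of the idea, using an arbitrary example polynomial of my own choosing:

```python
# The difference engine's trick in miniature: tabulate a polynomial using
# nothing but addition, once the initial differences have been set up.
def f(x):
    return 2 * x * x + 3 * x + 1      # any polynomial will do

# initial setup: the value, first difference and second difference at x = 0
columns = [f(0), f(1) - f(0), (f(2) - f(1)) - (f(1) - f(0))]   # [1, 5, 4]

values = [columns[0]]
for _ in range(5):
    columns[0] += columns[1]          # each column advances by adding the next one
    columns[1] += columns[2]
    values.append(columns[0])

print(values)                      # [1, 6, 15, 28, 45, 66]
print([f(x) for x in range(6)])    # the same numbers, computed directly
```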

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but since Babbage was often brash, once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him, and prized academia above fiscal matters & practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine, pulled the plug, and the difference engine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program, which could be a lot more complicated than just adding or subtracting, and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure the performing of programs occurred in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.