A Continued History

This post follows on directly from my last one, which dealt with the story of computers up to Charles Babbage’s difference and analytical engines; this one will try to follow the history from there to as close to the present day as I can manage, hopefully covering a few basics of how these strange and wonderful machines actually work.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, designed his own programmable, mechanical computer at the turn of the century, but his design fell into a similar obscurity and added little new to the field. Mechanical calculators had become viable commercial products, getting steadily cheaper, and the technology grew ever more sophisticated with the invention of the analogue computer. These were, basically, less programmable versions of the difference engine: mechanical devices whose various cogs and wheels were connected up in such a way that they would perform one specific mathematical function on a set of data. James Thomson built the first in 1876; it could solve differential equations by integration (a fairly simple but undoubtedly tedious mathematical task), and later developments were widely used for military calculations and for solving problems too laborious to tackle by hand. For a long time, analogue computers were considered the future of computing, but since they solved and modelled problems using physical phenomena rather than data, each machine was restricted to the class of problem it was originally set up for.
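
As a rough digital sketch of the principle at work in those mechanical integrators, here are a few lines of Python that solve a simple differential equation by accumulating many small increments (Euler’s method). The example equation, step count and function names are my own choices for illustration, and have nothing to do with Thomson’s actual machine:

```python
# The mechanical integrators did this continuously with discs and wheels;
# this digital sketch uses Euler's method to show the same principle of
# solving a differential equation by repeated integration. The example
# equation dy/dt = -y and the step count are chosen purely for illustration.

def integrate(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y) by accumulating small increments."""
    y, t = y0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        y += f(t, y) * dt   # each step mimics one small turn of the integrator wheel
        t += dt
    return y

# dy/dt = -y with y(0) = 1 has the exact solution e^(-t), roughly 0.3679 at t = 1.
print(integrate(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, steps=10_000))
```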

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Punched holes had been around for a while as a way of storing programs, in the paper reels of a pianola or the cards used to automate the workings of a loom, but this was the first example of their use to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). The cards were cheap, simple, could be both produced and read easily by a machine, and were even easy to dispose of. The pattern of holes on a card was ‘read’ by a mechanical device with a set of levers that would pass through any hole present, turning the appropriate cogs to tell the machine to count up by one. Hollerith’s machines went on to process the data of the 1890 US census, and the company he founded eventually formed the core of IBM. The system carried on being used on IBM machines right up until the 1980s, and could be argued to be the first programming language.
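
As a loose illustration of the principle (a toy sketch, not Hollerith’s actual card format), here is how a hole’s position in each card column can encode a value for a machine to read:

```python
# A toy sketch of the idea behind punched-card data storage: each column
# encodes one digit by a hole in the corresponding row. The card layout
# here is invented for illustration, not Hollerith's real format.

def read_card(columns):
    """Each column is the set of punched row numbers; a hole in row d means digit d."""
    digits = []
    for holes in columns:
        # Like the mechanical levers: whichever row has a hole determines the value
        digits.append(next(iter(holes)))
    return digits

card = [{1}, {8}, {9}, {0}]   # four columns, one hole punched in each
print(read_card(card))        # -> [1, 8, 9, 0]
```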

However, to see the story of the modern computer truly progress we must fast-forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data using on/off binary electrical signals, making it arguably the first digital computing device. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand; to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and also operated in binary. His parts could not be engineered well enough for it ever to work properly, but he kept at it, eventually building three more models and devising one of the first programming languages. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any one general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs by reading and manipulating a set of symbols on a strip of tape. The symbol currently being read, together with the machine’s internal state, would determine whether the machine moved up or down the strip, what it changed the symbol to, and what state it entered next. Turing proved that one of these machines could replicate the behaviour of any computer algorithm, and since computers are just devices running algorithms, such a machine can replicate any modern computer too. Thus, if a Turing machine (as they are now known) could theoretically solve a problem, then so could a general algorithm, and vice versa. Not only that, but since at heart a computer cannot truly multi-task, executing just one instruction at a time at the processor level, a modern computer is in essence nothing more than an extremely fast Turing machine. These machines not only laid the foundations of computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as the first machines theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as the von Neumann architecture, after the man who first pointed out and explained the idea in response to Turing’s work.
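
To make this concrete, here is a minimal sketch of a Turing machine simulator in Python. The transition-table format and the toy example program (which appends a 1 to a unary number) are my own inventions for illustration; Turing’s paper of course described the machine mathematically, not in code:

```python
# A minimal Turing machine simulator: a sketch to illustrate the idea,
# not any historical design. The machine repeatedly looks up what to do
# for (current state, symbol under the head), writes a symbol, moves the
# head, and switches state, until it reaches the 'halt' state.

def run_turing_machine(transitions, tape, state="start", head=0, blank="_"):
    """Run until the machine enters the 'halt' state, then return the tape."""
    tape = list(tape)
    while state != "halt":
        # Extend the tape with blanks if the head wanders off either end
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)

        symbol = tape[head]
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape)

# Toy program: scan right over a block of 1s, append one more 1, halt.
increment = {
    ("start", "1"): ("1", "R", "start"),   # keep moving right over the 1s
    ("start", "_"): ("1", "N", "halt"),    # first blank: write a 1 and stop
}

print(run_turing_machine(increment, "111"))  # -> "1111"
```

The transition table here plays the role of the ‘program’: change the table and the same machinery computes something entirely different, which is exactly the universality Turing was describing.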

Turing machines contributed one further, vital concept to modern computing: that of Turing-completeness. A Turing-complete system is one capable of replicating the behaviour of any theoretically possible Turing machine, and thus of running any possible algorithm or computable sequence (Turing himself described a single machine with this property, known as a universal Turing machine). Charles Babbage’s analytical engine would have fallen into this class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring Turing-completeness is a key part of designing a computer system or programming language, guaranteeing its versatility and that it is capable of performing any task that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing; now all the world needed was machines capable of putting it into practice. However, by 1942 there was a war on, and Turing was employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task since they knew the structure and internal layout of the Enigma machine. They were then faced with a new and more daunting prospect: the Lorenz cipher, encoded by an even more complex machine for which they had no blueprints. Turing’s genius, however, apparently knew no bounds, and the team at Bletchley eventually worked out its logical functioning. From this a method for deciphering it was formulated, but it required an iterative process involving hours of mind-numbing calculation to get a result out. A faster way of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world: the first electronic, digital, and (partially) programmable computer ever to exist. Its mathematical operation was not highly sophisticated. An optical reader, using a light source and sensitive photoelectric detectors (state-of-the-art electronics at the time), read the pattern of holes on a paper tape containing the encoded messages, and banks of vacuum tubes then compared these against another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not find a precise match, since it didn’t know the original message content), it flagged up that setup as a potential candidate for the message’s encryption, which could then be tested by hand, saving many hundreds of man-hours. Despite this inherent simplicity, its legacy was one of proving a point to the world: that electronic, programmable computers were both possible and viable pieces of hardware. It paved the way for modern computing to develop.
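
To illustrate the statistical approach described above, here is a toy sketch in Python. Everything in it (the random ‘tapes’, the stand-in for the Lorenz simulation, the threshold) is invented for illustration; it is emphatically not Colossus’s actual logic or the real Lorenz cipher:

```python
import random

# Toy illustration of the statistical idea: compare the holes on the message
# tape against a candidate pattern generated from a simulated cipher machine,
# and flag any setting whose match count is suspiciously high.

def count_matches(message_tape, candidate_pattern):
    """Count positions where the two hole patterns agree."""
    return sum(m == c for m, c in zip(message_tape, candidate_pattern))

def flag_likely_settings(message_tape, simulate, settings, threshold):
    """Return the settings whose generated pattern matches unusually often."""
    flagged = []
    for setting in settings:
        pattern = simulate(setting, len(message_tape))
        if count_matches(message_tape, pattern) >= threshold:
            flagged.append(setting)
    return flagged

# Stand-in for the internal Lorenz simulation: a seeded pseudo-random stream.
def fake_lorenz(setting, length):
    rng = random.Random(setting)
    return [rng.randint(0, 1) for _ in range(length)]

message = fake_lorenz(42, 1000)  # pretend setting 42 encoded the message
# Random agreement averages ~500/1000; the true setting matches far more often.
print(flag_likely_settings(message, fake_lorenz, range(100), threshold=700))
# -> [42]
```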

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that no resident of the web can hope to avoid for any great period of time, and quite often doesn’t want to: the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed into a runaway success and something of a cultural phenomenon. By the end of 2011, before it had even seen its full release, Minecraft had registered 4 million purchases and four times that many registered users, which isn’t bad for a game that has never advertised itself, has spread semi-virally among nerdy gamers over its mere three-year history, and was made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some startlingly good music videos (check out the work of Captain Sparklez if you haven’t already), litter YouTube, and many of the game’s features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where thousands of other games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant and so special?

Many, upon being asked this question, tend to revert to extolling the virtues of the game’s indie nature. Created entirely without funding, as an experiment in gaming rather than a profit-making exercise, Minecraft is firmly rooted in the humble sphere of independent gaming, and it shows. One obvious feature is the game’s inherent simplicity: initially featuring solely the ability to wander around and place and destroy blocks, the controls are mainly (although far from entirely) confined to ‘move’ and ‘use’, whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, ‘blocky’ graphics allow for simplicity of production whilst creating an iconic, retro aesthetic that makes the game memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming: partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, allowing indie gamers to feel some degree of elitism in being good enough to work it out by themselves. This also ties in with the very nature of the game. Another criticism used to be (and, to an extent, still is, even with the addition of the Ender Dragon as a final win objective) that the game appeared to be largely devoid of point, existing only for its own sake. This is entirely true; whether you view that as a bonus or a detriment is entirely your own opinion, and this idea of an unfamiliar, experimental game structure is another feature common, in one form or another, to a lot of indie games.

However, to me these do not seem entirely worthy of the name ‘answers’ to the question of Minecraft’s phenomenal success, because they do not adequately explain why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a quite disturbing, yet compelling, in-game experience, wringing almost as much intrigue and puzzlement from a set of game mechanics simpler even than Minecraft’s. It has also received critical acclaim often far in excess of Minecraft’s (which has received a positive, but not wildly amazed, response from critics), and yet is still known only to a select few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who’s ever played it is likely to face; but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, but has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin’s Creed mod (the first example that sprang to mind).

So… why has Minecraft been so successful? Well, I’m going to stick my neck out here, but to my mind it’s because it doesn’t play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple and basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game, with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it’s similar to Facebook, in that it takes a simple principle and applies it to the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the USS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.