Leave Reality at Home

One of the most contentious issues in criticism of many forms of media, particularly films and videogames, is realism. How realistic a videogame is, how accurately it replicates the world around us both visually and thematically, is the most frequently cited factor in determining how immersive it is, how much you ‘get into it’, and films that keep their feet firmly in the real world delight nerds and film critics alike by standing up to their nit-picking. But the place of realism in these media is not a simple question of ‘as much realism as possible is better’; finding the ideally realistic situation (which is a phrase I totally didn’t just make up) is a delicate balance that can vary enormously from one product to another, and getting that balance right is frequently the key to success.

That too much realism can be a bad thing can be demonstrated quite easily on both a thematic and a visual front. To deal with the visual sphere of things first, I believe I have talked before about ‘the uncanny valley’, a robotics term first hypothesised by the Japanese roboticist Masahiro Mori. The theory, now supported by research from the likes of Hiroshi Ishiguro (who specialises in making hyper-realistic robots), states that as a robot gets steadily more human in appearance, humans tend to react more favourably to it, until we reach a high point of a stylised, human-like appearance that is nonetheless clearly non-human. Beyond this point, however, our reactions to such a robot get dramatically worse, as the design starts to look less like a human-like robot and more like a very weird-looking human, until the two become indistinguishable from one another and we reach another peak. This dip in positive reaction, the point where faces start to look ‘creepy’, is known as the uncanny valley, and the principle can be applied just as easily to computer graphics as it can to robots. The main way of overcoming the issue involves a careful design process intended to stylise certain features; in other words, the only way to make something quite realistic not look creepy is to make it selectively less realistic. Thus, hyper-realism is not always the way forward in drawn/animated forms of media, and won’t be until the magical end-goal of photorealistic graphics is achieved. If that ever happens.

However, the uncanny valley is far less interesting than the questions that arise when considering the idea of thematic realism (which I again totally didn’t just make up): the extent to which stories, or aspects of a story, or events in a film and suchlike, are realistic. Here we arrive at an apparent double standard, and our evidence comes from nerds; as we all know, film nerds (and I suspect everyone else, if they can find them) delight in pointing out continuity errors in everything they watch (a personal favourite is the ‘Hollywood’ sign in the remake of The Italian Job that quite clearly reads OHLLYWOOD from one camera angle), and are prepared to go into a veritable tizz of enjoyment when something apparently implausible somehow manages to adhere fastidiously to the laws of physics. Being realistic is clearly something that can add a great deal to a film, indicating that the director has really thought about it; not only is this frequently an indicator of a properly good film, but it also helps satisfy a nerd’s natural desire to know all the details and background (which is the reason, by the way, that comic books spend so much of their time referring to overcomplicated bits of canon).

However, evidence that reality is not at the core of our enjoyment of film and gaming can quite easily be found by considering the enormous popularity of the sci-fi and fantasy genres. We all of course know that these worlds are not real and, despite a lot of the jargon spouted in sci-fi to satisfy the already-mentioned nerd curiosity, we also know that they fundamentally cannot be real. There is no such thing as magic, no dilithium crystals, no hyperspace and no elves, but that doesn’t prevent the idea of them from enjoying massive popularity on all sides. I mean, just about the biggest film of last summer was The Avengers, in which a group of superheroes fight a group of giant monsters sent through a magical portal by an ancient Norse god; about as realistic as a tap-dancing elephant, and yet most agreed as to the general awesomeness of that film. These fantastical, otherworldly and/or downright ridiculous worlds and stories have barely any bearing on the real world, and yet somehow this makes them better.

The key detail here is, I think, the concept of escapism. Possibly the single biggest reason we watch films, spend hours in front of Netflix and dedicate days of our lives to videogames is the pursuit of escapism; to get away from the mundaneness of our world and escape into our own little fantasy. We can follow a super-soldier blasting through waves of bad guys as we all dream of being able to do, we can play as a hero with otherworldly magic at our fingertips, we can lead our sports teams to glory like we could never do in real life. Some of these stories take place in a realistic setting, others in a world of fantasy, yet in all of them the real pull factor is the same: we are getting to play in or see a world that we fantasise about living in ourselves, and yet cannot.

The trick of successfully incorporating reality into these worlds is, therefore, one of supporting our escapism. In certain situations, such as an ultra-realistic modern military shooter, a more realistic situation brings things closer to our fantasy, and as such adds to the immersion and the joy of the escapism; when we are facing challenges similar to those experienced by real soldiers (or at least the over-romanticised view of soldiering that we in fact fantasise about, rather than the day-to-day drudgery that is so often ignored), it makes our fantasy seem more tangible, fuelling the idea that we are, in fact, living the dream. On the other hand, applying the wrong sort of realism to a situation (like, say, not being able to make the impossible jumps, or failing to have perfect balance) can kill the fantasy, reminding us just as surely as a continuity error that the fantasy we are entertaining cannot actually happen, dragging us back to the real world and ruining all the fun. There is, therefore, a kind of thematic uncanny valley as well: a state at which the reality of a film or videogame is just… wrong, and is thus able to take us out of the act of escapism. The location of this valley, however, is a lot harder to plot on a graph.

The Myth of Popularity

WARNING: Everything I say forthwith is purely speculative, based on a rough approximation of a presented view of how a part of our world works, plus some vaguely related stuff I happen to know. It is very likely to differ from your own personal view of things, so please don’t get angry with me if it does.

Bad TV and cinema are a great source of inspiration; not because there’s much in them that’s interesting, but because there’s just so much of it that, even without watching any, it is possible to pick up enough information to spot trends, which are generally interesting to analyse. In this case, I refer to the picture of American schools so often portrayed by iteration after iteration of generic teenage romance/romcom/’drama’, and more specifically the people in it.

One of the classic plot lines of these types of things involves the ‘hopelessly lonely/unpopular nerd who has a crush on Miss Popular de Cheerleader and must prove himself by [insert totally ridiculous idea]’. Needless to say these plot lines are more unintentionally hilarious and excruciating than anything else, but they work because they play on the one trope that so many of us are familiar with: that of the overbearing, idiotic, horrible people from the ‘popular’ social circle. Even if we were not raised within a sitcom, it’s a situation repeated in thousands of schools across the world- the popular kids are the arseholes at the top with inexplicable access to all the gadgets and girls, while the more normal, nice people sit lower down the social ladder.

The image exists in our consciousness long after leaving school for a whole host of reasons; partly because major personal events during our formative years tend to have a greater impact on our psyche than those occurring later in life, but also because it is often our first major interaction with the harsh unfairness life is capable of throwing at us. The whole situation seems totally unfair and unjust; why should all these horrible people be the popular ones, and get all the social benefits associated with that? Why not me, a basically nice, humble person without a Ralph Lauren jacket or an iPad 3, but with a genuine personality? Why should they have all the luck?

However, upon analysing the issue, this object of hate begins to break down; not because the ‘popular kids’ are any less hateful, but because they are not genuinely popular. If we define popularity as a measure of how many people like you and how much (because what the hell else is it?), then it becomes a lot easier to approach it from a numerical, mathematical perspective. Those at the perceived top end of the social spectrum generally form themselves into a clique of superiority, where they all like one another (presumably- I’ve never been privy to that kind of group in order to find out), but their arrogance means they receive a certain amount of dislike, and even some downright resentment, from the rest of their immediate social world. By contrast, members of other social groups (nerds, academics [often not the same people], those sportsmen not in the ‘popular’ sphere, and the myriad groups of undefinable ‘normies’ who just splinter off into their own little cliques) tend to be liked by members of their chosen group and treated with neutrality or minor positive or negative feeling by everyone else, leaving them with an overall ‘popularity score’, from an approximated mathematical point of view, roughly equal to or even greater than that of the ‘popular’ kids. Thus, the image of popularity is really something of a myth, as these people are not, technically speaking, any more popular than anyone else.
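
To put some (entirely invented) numbers on that idea, here’s a quick Python sketch of the kind of ‘net popularity score’ I’m gesturing at; the figures are made up purely for illustration and prove nothing beyond the arithmetic:

```python
# Toy illustration of the 'net popularity score' idea above.
# All numbers are invented purely for the sake of the example.

def net_popularity(liked_by, disliked_by):
    """Crude score: each person who likes you counts +1, each dislike -1."""
    return liked_by - disliked_by

# A 'popular' clique member: adored by their small clique, resented widely.
popular_kid = net_popularity(liked_by=10, disliked_by=40)

# A member of a smaller friendship group: liked by their group,
# met with indifference (neither like nor dislike) by everyone else.
normal_kid = net_popularity(liked_by=12, disliked_by=2)

print(popular_kid, normal_kid)  # -30 vs 10: the 'unpopular' kid comes out ahead
```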

So, then, how has this image come to present itself as one of popularity, of being the top of the social spectrum? Why are these guys on top, seemingly above group after group of normal, friendly people with a roughly level playing field when it comes to social standing?

If you were to ask George Orwell this question, he would present you with a very compelling argument concerning the tendency of a social structure to form a ‘high’ class of people (shortly after asking you how you managed to communicate with him beyond the grave). He and other social commentators have frequently pointed out that a social system in which all are genuinely treated equally is unstable without some ‘higher class’ of people to look up to- even if only in hatred. It is humanity’s natural tendency to try to better itself and fight its way to the top of the pile, so if the ‘high’ group disappears temporarily it will quickly be replaced; hence the disparity between rich and poor even in a country such as the USA, founded on the principle that ‘all men are created equal’. This principle applies to social situations too; if the ‘popular’ kids were to fall from grace, some other group would likely rise to fill the power vacuum at the top of the social spectrum. And, as we all know, power and influence are powerful corrupting forces, so this position would be likely to transform the new ‘popular’ group into arrogant b*stards too, removing the niceness they had when they were just normal guys. This effect is also in evidence in the way that many of the previously hateful people at the top of the spectrum become very normal and friendly when spoken to one-on-one, outside of their social group (in my experience anyway; this does not apply to all people in such groups).

However, another explanation is perhaps more believable: that arrogance is a cause rather than a symptom. By acting like they are better than the rest of the world, the ‘popular’ kids get it into everyone else’s heads, subconsciously, that, much though they are hated, they sit at the top of the social ladder purely because they say so. And perhaps this idea is more comforting, because it takes us back to the idea we started with: that nobody is actually any more popular than anyone else, and that it doesn’t really matter in the grand scheme of things. Regardless of where your group ranks on the social scale, if it’s yours and you get along with the people in it, then it doesn’t really matter what everyone else thinks, so long as you can get on, be happy, and enjoy yourself.

Footnote: I get most of these ideas from what the media paints as the norm in American schools and from what friends have told me, since I’ve been lucky enough that the social hierarchies I encountered in my own school experience basically left one another alone. Judging by the horror stories other people tell me, I presume that was just my school. Plus, even if it’s total horseshit, it’s enough of a trope that I can write a post about it.

The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on the fact that I had done a lot of related research when looking into installing Linux. I feel I should now come clean and point out that between the time of that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times, which has served to teach me even more about exactly how it works and how it differs from its more mainstream competitors. So, since I don’t have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and upon which Torvalds based his design (note- whilst Torvalds’ first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp ‘ih’ sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use, modify and distribute the software associated with it, so long as they kept to the license’s terms (chiefly, that anything built from the free software must itself remain free under the same terms). The software compiled as part of the GNU project was numerous (including a still widely-used compiler) and did eventually come together as an operating system, but it never caught on, and the project was, in regard to achieving its final aims, a failure (although the GNU General Public License remains the most-used software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, writing his code on another Unix clone, MINIX, and basing his design on its structure. Initially, he hadn’t been intending to write a complete operating system at all, but rather a terminal emulator- a program that mimics the old-fashioned hardware text terminals (a keyboard and screen wired to some distant machine) through which people used to access computers. Strictly speaking a terminal emulator is just a program, but Torvalds’ version was unusual in that it ran directly on the computer’s hardware rather than on top of an existing operating system, acting almost like one in its own right. As such, the two are somewhat related, and it wasn’t long before Torvalds ‘realised’ he had written the kernel of an operating system and, since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on a kind of early internet forum called Usenet, saying that he was developing an operating system that was “starting to get ready”, and asking for feedback on where MINIX was good and where it was lacking, “as my OS resembles it somewhat”. He also, interestingly, said that his OS “probably never will support anything other than AT-harddisks”. How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 being released last year) and, despite some teething troubles with those who have considered it old-fashioned, those who claimed its code was stolen from MINIX (it wasn’t; only ideas and structure were borrowed), and Microsoft (who have since turned tail and are now one of the largest contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman’s open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the efficiency of the system and minimise its vulnerability to viruses- the latter being one of the key features that marks Linux (and OS X) out over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command quite a high percentage of your CPU capacity just to keep itself running, not counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep things running faster. The kernel’s open source roots mean it is easy to modify if you have the technical know-how, and the community surrounding it means the solution to any problem you have with a standard distribution is usually only a quick search away. Disadvantages include a certain lack of user-friendliness for the uninitiated or less computer-literate user, since a lot of programs require an instruction typed at the command line; far fewer programs, especially commercial, professional ones, than Windows; an inability to handle media as well as OS X (which is the main reason Apple computers appear to exist); and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many ‘computer people’ consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or ‘distros’ as they’re known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager and desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components) or the package manager (which handles installing, removing and updating programs), tend to be fairly ubiquitous (GNOME and KDE are common GUIs, and Synaptic a typical package manager on Ubuntu-style systems), but different people like their operating system to run in slightly different ways. Therefore, variations on these components are bundled together with the kernel to form a distro: a complete package that will run as an operating system in exactly the same fashion as Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu’s older brother), Red Hat, Mandriva and Crunchbang- some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.
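
As a small, hedged aside for the curious: once a distro is up and running, it’s easy to ask it what it actually is. The little Python sketch below reads the kernel version and the /etc/os-release file (standard on modern distros like Ubuntu and Debian, though not guaranteed everywhere) and reports what it finds:

```python
# Minimal sketch: identify the distro and kernel of the Linux machine it runs on.
# Assumes a reasonably modern distro with an /etc/os-release file
# (standard on Ubuntu, Debian, Fedora and most other current systems).
import platform

def describe_system():
    info = {"kernel": platform.release()}  # e.g. '3.2.0-23-generic'
    try:
        with open("/etc/os-release") as f:
            for line in f:
                if line.startswith("PRETTY_NAME="):
                    info["distro"] = line.split("=", 1)[1].strip().strip('"')
    except FileNotFoundError:
        info["distro"] = "unknown (no /etc/os-release found)"
    return info

if __name__ == "__main__":
    print(describe_system())
```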

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

Practical computing

This looks set to be my final post in this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to mess around with these things and interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, as well as being responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.
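
To make the ‘patterns of off and on’ point a bit more concrete, here’s a tiny Python sketch (not tied to any particular chip or CPU) showing how one string of bits can be read either as a number or as a letter:

```python
# The same pattern of 'ons and offs' (bits) read in two different ways.
bits = "01001000"                 # eight switches: off, on, off, off, on, ...

as_number = int(bits, 2)          # read as a binary number -> 72
as_character = chr(as_number)     # read as an ASCII character -> 'H'

print(bits, "->", as_number, "->", repr(as_character))

# Going the other way: what pattern of switches stores the letter 'A'?
print(format(ord("A"), "08b"))    # -> 01000001
```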

OK? Good, we can get on then.

Programming languages are a way of translating the computer’s medium of information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press have symbols on them, that these symbols mean something to us, or that they are built to produce the same symbols on the monitor when pressed; but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the machine. The key thing is what happens when that code is handed over to be run, for here the programming language concerned kicks in.

The key feature that defines a programming language is not the language itself, but the tool that converts its words into instructions. Each language has a defined vocabulary of ‘words’, each with a corresponding, but entirely different, string of binary data associated with it, representing the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This conversion works in one of two ways: with an ‘interpreter’, the program is stored just as words and converted to ‘machine code’ on the fly as it is read, but the most common form is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This means the written file isn’t needed to run the finished program, makes programs run faster, and gives programmers an excuse to bum around all the time while ‘waiting for it to compile’.
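
To illustrate the interpreter/compiler distinction, here’s a deliberately toy Python sketch; the two-word ‘language’ (ADD and PRINT) is entirely made up, but the difference between acting on the words immediately and translating them once into another form to run later should come across:

```python
# Toy illustration of 'interpreting' versus 'compiling' a tiny made-up language.
program_text = """
ADD 2
ADD 3
PRINT
"""

def interpret(source):
    """Interpreter: walk through the words and act on them immediately."""
    total = 0
    for line in source.split("\n"):
        line = line.strip()
        if line.startswith("ADD"):
            total += int(line.split()[1])
        elif line == "PRINT":
            print(total)

def compile_to_python(source):
    """'Compiler': translate once into another form; the result is run separately."""
    body = ["total = 0"]
    for line in source.split("\n"):
        line = line.strip()
        if line.startswith("ADD"):
            body.append(f"total += {int(line.split()[1])}")
        elif line == "PRINT":
            body.append("print(total)")
    return "\n".join(body)

interpret(program_text)                 # runs straight away -> 5
compiled = compile_to_python(program_text)
exec(compiled)                          # the translated form, run on its own -> 5
```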

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up, determining exactly which gate the inputs go through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises CPU performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different methods prioritise/are better at dealing with different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack.

However, perhaps the most vital things a modern OS does on our home computers are the things that at first glance seem secondary- moving stuff around and scheduling. A single CPU core cannot process more than one task at once, meaning it should not, in theory, be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of it all happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to the hard disk (where it isn’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly), and must cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
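
To give a flavour of the scheduling trick, here’s a very rough Python sketch of a round-robin scheduler; real schedulers are vastly cleverer, but the basic idea of giving each process a short turn and shuffling it to the back of the queue is the same:

```python
# Very rough sketch of what an OS scheduler does: give each 'process' a short
# turn on the (single) CPU core, switching so fast it looks simultaneous.
from collections import deque

def run_round_robin(processes, time_slice=1):
    """processes: dict of process name -> units of work still remaining."""
    queue = deque(processes.items())
    while queue:
        name, remaining = queue.popleft()
        done = min(time_slice, remaining)
        print(f"{name} runs for {done} unit(s)")
        remaining -= done
        if remaining > 0:
            queue.append((name, remaining))   # back of the queue, wait your turn

run_round_robin({"minesweeper": 3, "boot-up chores": 5, "music player": 2})
```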

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. There is nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over twelve and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given what ‘understand’ means in computer-based situations. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding every single aspect in minute detail: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with rather few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some research into the hows and whys of its installation, along with which came quite a lot of information about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform a range of jobs, from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t always. The human brain is not built for infallibility and, not infrequently, mistakes were made. Most of these undoubtedly went unnoticed, or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (ie a turn of 4 spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), as well as working with currency denominations and distances. However, it could also subtract, multiply and divide (with some difficulty), and moreover proved an important point: that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
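
For anyone who’d like to see that carrying trick in action, here’s a simplified Python sketch of the idea (a model of the principle only, not a description of Pascal’s actual mechanism):

```python
# Sketch of the carrying idea in Pascal's calculator: each 'wheel' holds a
# digit 0-9, and when one rolls past 9 it nudges the next wheel along by one.

def add_on_wheels(wheels, units_to_add):
    """wheels[0] is the units wheel, wheels[1] the tens, and so on."""
    carry = units_to_add
    for i in range(len(wheels)):
        total = wheels[i] + carry
        wheels[i] = total % 10      # where this wheel ends up pointing
        carry = total // 10         # how far it nudges the next wheel along
    return wheels

# The example from the text: a seven already counted, then four more spaces.
print(add_on_wheels([7, 0, 0], 4))  # -> [1, 1, 0], i.e. eleven
```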

Pascal’s machine was both expensive and complicated, meaning only around twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several more, of a range of designs, were built during the 18th century as showpieces, but it was the release in the 19th century of Thomas de Colmar’s Arithmometer, after 30 years of development, that signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on the method of finite differences and Newton polynomials, a fiddly bit of maths that I won’t even pretend to fully understand, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the initial setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
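
The cleverness here is easier to appreciate with a worked example: using the method of finite differences, every value of a polynomial can be produced by additions alone, which is exactly what a machine full of adding wheels is good at. A rough Python sketch, with an arbitrarily chosen polynomial:

```python
# Method of finite differences, the principle behind the difference engine:
# tabulate a polynomial using nothing but repeated addition.
# Example polynomial (an arbitrary choice for illustration): f(x) = x**2 + x + 41

def f(x):
    return x * x + x + 41

# Seed the 'machine' with f(0) and its first and second differences.
# For a quadratic, the second difference is constant.
value = f(0)                                    # 41
first_diff = f(1) - f(0)                        # 2
second_diff = (f(2) - f(1)) - (f(1) - f(0))     # 2, and it stays 2 forever

for x in range(8):
    print(x, value)             # matches f(x), computed by addition alone
    value += first_diff
    first_diff += second_diff
```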

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government, largely for the production of accurate mathematical and navigational tables, but since Babbage was often brash, once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him, and prized academia above fiscal matters & practicality, the project fell through. After investing £17,000 in the machine, only to find that Babbage had switched to working on a new and improved design known as the analytical engine, the government pulled the plug and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program, which could be a lot more complicated than just adding or subtracting, and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure the execution of programs occurred in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.