An Opera Possessed

My last post left the story of JRR Tolkien immediately after the writing of his first bestseller: the rather charming, lighthearted, almost fairy-story-like tale that was The Hobbit. This was a major success, and not just among the ‘children aged between 6 and 12’ demographic identified by young Rayner Unwin; adults lapped up Tolkien’s work too, and his publishers Allen & Unwin were positively rubbing their hands in glee. Naturally, they requested a sequel, a request to which Tolkien’s attitude appears to have been along the lines of ‘challenge accepted’.

Even for an author holding down the rigours of another job, and even accounting for the phenomenal length of the finished product, writing a book is a process that normally takes a few months for a professional writer (Dame Barbara Cartland once released 25 books in the space of a year, but that’s another story), and perhaps a year or two for an amateur like Tolkien. He started writing this one in December 1937, and it was finally published 18 years later, in 1955.

This was partly a reflection of the difficulties Tolkien had in publishing his work (more on that later), but it also reflects the measured, meticulous and very serious approach Tolkien took to his writing. At least three times he started his story again from scratch, each time going in a completely different direction with an entirely different plot. His first effort, for instance, was intended to chronicle another adventure of Bilbo, his protagonist from The Hobbit, making it a direct sequel in both a literal and a spiritual sense. However, he then remembered the ring Bilbo had found beneath the mountains, won (or stolen, depending on your point of view) from the creature Gollum, and the strange power it held; not just invisibility, which was Bilbo’s main use for it, but the hypnotic hold it had over Gollum (he even subsequently rewrote that scene for The Hobbit‘s second edition to emphasise that effect). He decided that the strange power of the ring was a more natural thread to follow, and so he wrote about that instead.

Progress was slow. Tolkien went months at a time without working on the book, making only occasional, sporadic yet highly focused bouts of progress. Huge amounts were cross-referenced with or borrowed from his earlier writings concerning the mythology, history & background of Middle Earth, Tolkien constantly trying to make his mythic world feel and, in a sense, be as real as possible; but it was mainly due to the influence of his son Christopher, to whom Tolkien would send chapters whilst Christopher was away on wartime service in his father’s native South Africa, that the book ever got finished at all. When it eventually was, Tolkien had been working on the story of Bilbo’s heir Frodo and his quest to destroy the Ring of Power for over 12 years. His final work was over 1000 pages long, spread across six ‘books’, as well as being laden with appendices to explain & offer background information, and he called it The Lord of The Rings (in reference to his overarching antagonist, the Dark Lord Sauron).

A similar story had, incidentally, been attempted once before; Der Ring des Nibelungen is an opera (well, four operas) written by German composer Richard Wagner during the 19th century, traditionally performed over the course of four consecutive nights (yeah, you have to be pretty committed to sit through all of that) and also known as ‘The Ring Cycle’; it’s where ‘Ride of The Valkyries’ comes from. The opera follows the story of a ring forged from the Rhinegold (gold taken from the depths of the Rhine river), and the trail of death, chaos and destruction it leaves in its wake between its forging & destruction. Many commentators have pointed out the close similarities between the two, and as a keen follower of Germanic mythology Tolkien certainly knew the story, but he rubbished any suggestion that he had borrowed from it, saying “Both rings were round, and there the resemblance ceases”. You can probably work out my approximate personal opinion from the title of this post, although I wouldn’t read too much into it.

Even once his epic was finished, the problems weren’t over. He quarrelled with Allen & Unwin over his desire to release LOTR in one volume, along with his still-incomplete Silmarillion (that he wasn’t allowed to may explain all the appendices). He then turned to Collins, but they claimed his book was in urgent need of an editor and a licence to cut (my words, not theirs, I should add). Many other people have voiced this complaint since, but Tolkien refused, and demanded that Collins publish by 1952. This they failed to do, so Tolkien wrote back to Allen & Unwin and eventually agreed to publish his book in three parts: The Fellowship of The Ring, The Two Towers, and The Return of The King (a title Tolkien, incidentally, detested because it gave away how the book ended).

Still, the book was out now, and the critics… weren’t that enthusiastic. Well, some of them were, certainly, but the book has always had its detractors in the world of literature, and that was most certainly the case upon its release. The New York Times criticised Tolkien’s academic approach, saying he had “formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself”, whilst others claimed that it, and its characters in particular, lacked depth. Even Hugo Dyson, one of Tolkien’s close friends and a member of his own literary group, spent readings of the book lying on a sofa shouting complaints along the lines of “Oh God, not another elf!”. Unlike The Hobbit, which had been in many ways a light-hearted children’s story, The Lord of The Rings was darker & more grown-up, dealing with themes of death, power and evil and written in a far more adult style; this could be said to have exposed it to more serious critics and a harder gaze than its predecessor, putting some people off it (a problem that wasn’t helped by the sheer size of the thing).

However, I personally am part of the other crowd: those who have voiced their opinions in nearly 500 five-star reviews on Amazon (although one should never read too much into such figures) and who agree with the likes of CS Lewis, The Sunday Telegraph and The Sunday Times of the time that “Here is a book that will break your heart”, that it is “among the greatest works of imaginative fiction of the twentieth century” and that “the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them”. These are the people who have shown the truth in the New York Herald Tribune’s review: that Tolkien’s masterpiece was, and is, “destined to outlast our time”.

But… what exactly is it that makes Tolkien’s epic so special, such a fixture; why, even decades after its publication as the first genuinely great work of fantasy, is it still widely regarded as the finest work the genre has ever produced? I could probably write an entire book just trying to answer that question (and several people probably have), but to me it comes down to this: Tolkien understood, absolutely perfectly and fundamentally, exactly what he was trying to write. Many modern fantasy novels try to be uber-fantastical, or try to base themselves around an idea or a concept, in some way trying to find their own level of reality on which their world can exist, and they often end up in a sort of awkward middle ground; but Tolkien never suffered that problem, because he knew that, quite simply, he was writing a myth, and he knew exactly how that was done. Terry Pratchett may have mastered comedic fantasy, George RR Martin may be the king of political-style fantasy, but only JRR Tolkien has, in recent times, been able to harness the awesome power of the first source of story: the legend, told around the campfire, of the hero and the villain, of the character defined by their virtues over their flaws, of the purest, rawest adventure in pursuit of saving what is good and true in this world. These are the stories written to outlast the generations, and Tolkien’s mastery of them is, to me, the secret to his masterpiece.

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to mess around with these things and figure out exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself, but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, as well as being responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.
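
Since that ‘pattern of off and on’ idea underpins everything that follows, here is a minimal sketch (my own illustration in Python, not anything from the original series) of the bit patterns a computer might use to store a short piece of text, assuming the usual ASCII/UTF-8 encoding:

```python
# A minimal sketch: how a pattern of 'offs and ons' (0s and 1s)
# can represent information in memory. We print the bits that
# encode each character of a short string (assuming ASCII/UTF-8).

text = "Hi!"
for char in text:
    code = ord(char)              # the number this character is stored as
    bits = format(code, "08b")    # that number written as eight binary digits
    print(f"{char!r} -> {code:3d} -> {bits}")

# Output:
# 'H' ->  72 -> 01001000
# 'i' -> 105 -> 01101001
# '!' ->  33 -> 00100001
```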

OK? Good, we can get on then.

Programming languages are a way of translating between the computer’s medium of information and instruction (binary data) and our medium of the same: words and language. Obviously, computers do not understand that the keys we press have symbols on them, that these symbols mean something to us, or that the machine is built to reproduce those symbols on the monitor when we press them; but we humans do, and that is what makes computers actually usable for 99.99% of the world’s population. When a programmer opens up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the machine. The key thing is what happens when that code is handed over for translation, for that is where the programming language proper kicks in.
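
To make that concrete, here is a small sketch (again my own Python example, purely illustrative) showing that a line of code starts life as nothing more than stored numbers, and only gains meaning once a translator is explicitly asked to turn it into something runnable. Python’s built-in compile and exec stand in here for whatever translation machinery a given language uses:

```python
# A minimal sketch: source code is just text (a string of stored
# numbers), meaningless until a translator turns it into instructions.

source = "print(2 + 2)"           # to the computer, just a string of bytes
print(list(source.encode())[:6])  # the first few raw numbers it is stored as
# -> [112, 114, 105, 110, 116, 40]

program = compile(source, "<example>", "exec")  # translate the text...
exec(program)                                   # ...then run the result: prints 4
```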

The key feature that defines a programming language is not the language itself, but the translator that converts words into instructions. Built into each is, in effect, a catalogue of recognised ‘words’, each word having a corresponding, but entirely different, string of binary data associated with it that represents the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This translation works in one of two ways: an ‘interpreter’ is a system whereby the program is stored just as words, and is converted to ‘machine code’ piece by piece as it is read and run; but the most common form is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This means the finished program can run without the written file to hand, makes programs run faster, and gives programmers a ready-made excuse to bum around all the time (‘it’s compiling’).
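
As it happens, Python is a handy place to see both halves of this at once: its interpreter quietly compiles your words into an intermediate ‘bytecode’ before running them, and the standard library’s dis module will show you the instruction list it produced. A small sketch (the exact opcodes vary between Python versions, so treat the output as indicative):

```python
import dis

# A small function written in 'our' medium: words and symbols.
def add(a, b):
    return a + b

# Ask Python to show the instructions it compiled those words into.
dis.dis(add)

# Indicative output (details differ between Python versions):
#   LOAD_FAST    0 (a)
#   LOAD_FAST    1 (b)
#   BINARY_OP    0 (+)
#   RETURN_VALUE
```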

That is, basically, how computer programs work. But there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades, and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up to determine exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of them all is the kernel: the part that manages the memory, optimises CPU performance and translates programs from memory to screen.

The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise, and are better at dealing with, different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack.

However, perhaps the most vital things a modern OS does on our home computers are the things that at first glance seem secondary: moving stuff around and scheduling. A single CPU core cannot process more than one task at once, meaning that it should not, in theory, be possible for a computer to multi-task; the sheer concept of playing Minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of it all happening simultaneously (see the sketch below). Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information in and run on, but it may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to hard disk (where it isn’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this shuffling for large amounts of data is why they run so slowly), and it must cope when a program needs to access data from a part of the computer that has not been specifically allocated to that program.
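
To illustrate just the scheduling trick, here is a toy round-robin scheduler in Python; my own invention for demonstration purposes, bearing no resemblance to real kernel code, but it shows how rapidly alternating between tasks produces the illusion of simultaneity:

```python
from collections import deque

# A toy round-robin scheduler: my own illustration, not real kernel code.
# Each 'process' is just a generator that yields whenever its time slice
# is used up; the scheduler keeps cycling through the ready queue.

def process(name, steps):
    for i in range(1, steps + 1):
        print(f"{name}: step {i}")
        yield  # hand the CPU back to the scheduler

ready_queue = deque([
    process("boot-up", 3),
    process("minesweeper", 3),
])

while ready_queue:
    task = ready_queue.popleft()    # pick the next process in line
    try:
        next(task)                  # run it for one time slice
        ready_queue.append(task)    # not finished: back of the queue
    except StopIteration:
        pass                        # finished: drop it from the queue
```

Run it and the two tasks’ steps come out interleaved, even though only one of them is ever actually executing at any instant; a real scheduler does essentially the same thing, just millions of times a second, with hardware timer interrupts doing the forcible switching.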

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider; but I don’t, and as such I won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.