Aging

OK, I know it was a while ago, but who watched Felix Baumgartner’s jump? If you haven’t seen it, then you seriously missed out; the sheer spectacle of the occasion was truly amazing, unlike anything you’ve ever seen before. We’re fairly used to seeing skydives from aeroplanes, but usually we only see a long-distance shot, a jumper’s-eye view, or a view from the plane showing them being whisked away half a second after jumping. Baumgartner’s feat was… something else, the two images available for the actual jump being direct, static views of a totally vertical fall. Plus, they were angled so as to give a sense of the awesome scope of the occasion; one looked directly down to the earth below, showing the swirling clouds and the shape of the land, whilst the other shot gave a beautiful demonstration of the earth’s curvature. The height he was at made the whole thing particularly striking; shots from the International Space Station and the moon have shown the earth from further away, but Baumgartner’s unique vantage point made everything seem big enough to be real, yet small enough to be terrifying. And then there was the drop itself; a gentle lean forward from the Austrian, followed by what can only be described as a plummet. You could actually see the lack of air resistance, so fast was he accelerating compared to our other images of skydivers. The whole business was awe-inspiring. Felix Baumgartner, you sir have some serious balls.

However, I bring this story up not because of the event itself, nor the insane amount of media coverage it received, nor even the internet’s typically entertaining reaction to the whole business (this was probably my favourite). No, the thing that really caught my eye was a little something about Baumgartner himself; namely, that the man who holds the world records for highest freefall, highest manned balloon flight, fastest speed achieved by a human in freefall and second longest freefall ever will be forty-four years old in April.

At his age, he would be ineligible for entry into the British Armed Forces, is closer to collecting his pension than to university, and has already experienced more than half his total expected time on this earth. Most men his age are in the process of settling down, finding their place in some company’s management structure and getting slightly less annoyed at being passed over for promotion by some youngster with a degree and four boatloads of hopelessly naive enthusiasm. They’re in line for learning how to relax, taking up golf, being put onto diet plans by their wives and going to improving exhibitions of obscure artists. They are generally not throwing themselves out of balloons 39 kilometres above the surface of the earth, even if they were fit and mobile enough to get inside the capsule with half a gigatonne of sensors and a pressure suit (I may be exaggerating slightly).

Baumgartner’s feats for a man of his age (he was also the first man to skydive across the English Channel, and holds a hotly disputed record for lowest BASE jump ever) are not rare ones without reason. Human beings are, by their very nature, lazy (more on that another time) and tend to favour the simple, homely life rather than one that demands such a high-octane, highly stressful thrill ride of a life experience. This tendency towards laziness also makes us grow naturally more and more unfit as time goes by, our bodies slowly losing the ability our boundlessly enthusiastic childish selves had for scampering up trees and chasing one another, making such seriously impressive physical achievements rare.

And then there’s the activity itself; skydiving, and even more so BASE jumping, is a dangerous, injury-prone sport, and as such it is rare to find regular practitioners of Baumgartner’s age and experience who have not suffered some kind of reality-checking accident leaving them injured, scared off or, in some cases, dead. Then we must consider the fact that there are very few people rich enough and brave enough to give such an expensive, exhilarating hobby as skydiving a serious go, and even fewer with the clout, nous, ambition and ability to get a project such as Red Bull Stratos off the ground. And finally, we must remember that one has to overcome the claustrophobic, restrictive experience of doing the jump in a heavy pressure suit; even Baumgartner had to get help from a sports psychologist to overcome the claustrophobia caused by being in the suit.

But then again, maybe we shouldn’t be too surprised. Red Bull Stratos was the culmination of years of effort in single-minded pursuit of a goal, and that required a level of experience in both skydiving and life in general that simply couldn’t be achieved by anyone younger than middle age- the majority of younger, perhaps even more ambitious, skydivers simply could not have got the whole thing done. And we might think that the majority of middle-aged people don’t achieve great things, but then again, in the grand scheme of things, the majority of everyone doesn’t end up getting most of the developed world watching them of an evening. Admittedly, the majority of those who do end up doing the most extraordinary physical things are under 35, but there’s always room for an exceptional human to change that archetype. And anyway, look at the list of Nobel Prize winners and certified geniuses on our earth, our leaders and heroes. Many of them have turned their middle age into something truly amazing, and if their field happens to be quantum entanglement rather than BASE jumping then so be it; they can still be extraordinary people.

I don’t really know what the point of this post was, or exactly what conclusion I was trying to draw from it; it basically started off because I thought Felix Baumgartner was a pretty awesome guy, and I happened to notice he was older than I thought he would be. So I suppose it would be best to leave you with a fact and a quote from his jump. Fact: When he jumped, his heart rate was measured as being lower than the average resting (ie lying down doing nothing and not wetting yourself in pants-shitting terror) heart rate of a normal human, so clearly the guy is cool and relaxed to a degree beyond human imagining. Quote: “Sometimes you have to be really high to see how small you really are”.

A Continued History

This post looks set to at least begin by following on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to follow the history along from there until as close to today as I can manage, hopefully getting in a few of the basics of the workings of these strange and wonderful machines.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the progress of the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, did develop his own programmable, mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and hardly added anything new to the field. Mechanical calculators had become viable commercial enterprises, getting steadily cheaper and cheaper, and, as technological exercises, were becoming ever more sophisticated with the invention of the analogue computer. These were, basically, a less programmable version of the difference engine- mechanical devices whose various cogs and wheels were so connected up that they would perform one specific mathematical function on a set of data. James Thomson built the first in 1876, a machine which could solve differential equations by integration (a fairly simple but undoubtedly tedious mathematical task), and later developments were widely used to process military data and to solve problems concerning numbers too large to handle by human numerical methods. For a long time, analogue computers were considered the future of modern computing, but since they solved and modelled problems using physical phenomena rather than data they were restricted in capability to their original setup.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Such cards had been around for a while, acting rather like programs- the holed paper reels of a pianola, for instance, or the punched cards used to automate the workings of a loom- but this was the first example of such devices being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). They were cheap, simple, could be both produced and read easily by a machine, and were even simple to dispose of. The pattern of holes on a card could be ‘read’ by a mechanical device with a set of levers that would pass through a hole if one was present, turning the appropriate cogs to tell the machine to count up one. Hollerith’s machines went on to process the data of the 1890 US census, and the company he founded would eventually become part of IBM. This system carried on being used right up until the 1980s on IBM systems, and could be argued to be the first programming language.

However, to see the story of the modern computer truly progress we must fast forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data digitally, using on/off binary electrical signals, making it the first digital calculator. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand; to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and also operated in binary. His parts couldn’t be engineered well enough for it to ever work properly, but he kept at it, eventually building three more models and devising the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any one general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs based on reading and manipulating a set of symbols on a strip of tape. The symbol currently being read, together with the machine’s internal state, would determine what the machine wrote in its place, whether it then stepped up or down the strip, and which state it moved into next, and Turing proved that one of these machines could replicate the behaviour of any computer algorithm- and since computers are just devices running algorithms, one could replicate the behaviour of any modern computer too. Thus, if a Turing machine (as they are now known) could theoretically solve a problem, then so could a general algorithm, and vice versa if it couldn’t. Not only that, but since a computer is, at its most fundamental level, simply working through one instruction after another, everything it does can in principle be reduced to this kind of step-by-step symbol manipulation. These machines not only lay the foundations for computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as they were the first theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as the von Neumann architecture, after the man who first pointed out and explained this idea in response to Turing’s work.
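To make that idea a little more concrete, here’s a minimal sketch of a Turing machine written in Python (my own illustrative toy, not anything taken from Turing’s paper- the state names, symbols and the little bit-flipping example are all made up purely for demonstration):

# The tape is a dictionary from position to symbol; the transition table
# maps (state, symbol read) to (symbol to write, head movement, next state).
def run(tape, transitions, state="start", head=0, blank="_"):
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Example: a machine that flips every bit it reads until it hits a blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run({i: s for i, s in enumerate("10110")}, flipper))
# {0: '0', 1: '1', 2: '0', 3: '0', 4: '1', 5: '_'}

That tiny repertoire- read a symbol, write a symbol, step along the tape, change state- is all such a machine ever does, and yet it is enough, in principle, to run any algorithm a modern computer can.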

Turing machines contributed one further, vital concept to modern computing- that of Turing-completeness. A Turing-complete system is one capable of replicating the behaviour of any theoretically possible Turing machine, and thus of running any possible algorithm or computable sequence; Turing showed that a single machine (known as a universal Turing machine) could itself have this property. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring that a computer system or programming language is Turing-complete is a key part of its design, guaranteeing its versatility and that it is capable of performing all the tasks that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, in 1942 there was a war on, and Turing was being employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task since they knew the structure and internal layout of the Enigma machine. However, they were then faced by a new and more daunting prospect: the Lorenz cipher, encoded by an even more complex machine for which they had no blueprints. Turing’s genius, however, apparently knew no bounds, and his team eventually worked out its logical functioning. From this a method for deciphering it was formulated, but it required an iterative process that took hours of mind-numbing calculation to get a result out. A faster method of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world- the first electronic, digital, and partially programmable computer ever to exist. Its mathematical operation was not highly sophisticated: it used valves (vacuum tubes) together with an optical system of light emitters and sensitive detectors- all state-of-the-art electronics at the time- to read the pattern of holes on a paper tape containing the encoded messages, and then compared these to another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not look for a precise match, since it didn’t know the original message content) it flagged up that setup as a potential candidate for the message’s encryption, which could then be tested, saving many hundreds of man-hours. But despite its inherent simplicity, its legacy is simply one of proving a point to the world- that electronic, programmable computers were both possible and viable bits of hardware- and it paved the way for modern-day computing to develop.
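Purely by way of illustration (this toy sketch is mine, and bears no relation to Colossus’s actual circuitry or to the real statistical tests Bletchley used), the core trick- count the agreements between two streams and flag a candidate setting when the count clears a threshold- looks something like this in Python:

# Count the positions where the intercepted stream and the stream produced by
# a simulated machine setting agree, and flag the setting if the score is high.
def score(message_bits, candidate_bits):
    return sum(m == c for m, c in zip(message_bits, candidate_bits))

def looks_promising(message_bits, candidate_bits, threshold):
    return score(message_bits, candidate_bits) >= threshold

intercepted = [1, 0, 1, 1, 0, 0, 1, 0]
simulated = [1, 0, 0, 1, 0, 1, 1, 0]
print(looks_promising(intercepted, simulated, threshold=6))  # True- 6 of 8 match

Colossus performed this kind of comparison electronically across thousands of characters of tape every second, which is precisely why it saved so many man-hours over doing the same job by hand.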

The Dark Knight Rises

OK, I’m going to take a bit of a risk on this one- I’m going to dip back into the world of film reviewing. I’ve tried this once before over the course of this blog (about The Hunger Games) and it went about as well as a booze-up in a monastery (although it did get me my first ever comment!). However, never one to shirk from a challenge I thought I might try again, this time with something I’m a little more overall familiar with: Christopher Nolan’s conclusion to his Batman trilogy, The Dark Knight Rises.

Ahem

Christopher Nolan has never been one to make his plots simple and straightforward (he did do Inception, after all), but most of his previous efforts have at least tried to focus on only one or two things at a time. In The Dark Knight Rises, however, he has gone ambitious, trying to weave no fewer than six different storylines into one film. Not only that, but four of those are trying to explore entirely new characters, and a fifth pretty much does the whole ‘road to Batman’ origins story that was already done in Batman Begins. That places the onus of the film firmly on its characters and their development, and trying to do that properly for so many new faces was always going to push everyone for space, even in a film that’s nearly three hours long.

So, did it work? Well… kind of. Some characters seem real and compelling pretty much from the off, in the same way that the Joker did in The Dark Knight- Anne Hathaway’s Selina Kyle (not once referred to as Catwoman in the entire film) is a little bland here and there and we don’t get to see much of the emotion that supposedly drives her, but she is (like everyone else) superbly acted and does the ‘femme fakickass’ thing brilliantly, whilst Joseph Gordon-Levitt’s young cop John Blake (who gets a wonderful twist to his character right at the end) is probably the most- and best-developed character of the film, adding some genuine emotional depth. Michael Caine is typically brilliant as Alfred, this time adding his own kick to the ‘origins’ plot line, and Christian Bale finally gets to do what no other Batman film has done before- make Batman/Bruce Wayne the most interesting part of the film.

However, whilst the main good guys’ story arcs are unique among Batman films in being the best parts of the film, some of the other elements don’t work as well. For someone who is meant to be a really key part of the story, Marion Cotillard’s Miranda Tate gets nothing that gives her character real depth- lots of narration and exposition, but we see next to none of her for huge chunks of the film and she just never feels like she matters very much. Tom Hardy as Bane suffers from a similar problem- he was clearly designed in the mould of Ducard (Liam Neeson) in Begins, acting as an overbearing figure of control and power that Batman simply doesn’t have (rather than the pure terror of the Joker’s madness), but his actual actions never present him as anything other than a device to give the rest of the film a reason to happen, and he never appears to have any genuine emotional investment or motivation in anything he’s doing. Part of the problem is his mask- whilst clearly a key feature of his character, it makes it impossible to see his mouth and bunches up his cheeks into an immovable pair of blobs beneath his eyes, meaning there is nothing visible for him to express feeling with, effectively turning him into a blunt machine rather than a believable bad guy. There’s also an entire arc concerning Commissioner Gordon (Gary Oldman) and his guilt over letting Batman take the blame for Harvey Dent’s death that is barely explored, but thankfully it’s so irrelevant to the overall plot that it might as well not be there at all.

It is, in many ways, a crying shame, because there are so many things the film does so, so right. The actual plot is a rollercoaster of an experience, pushing the stakes high and the action (in typical Nolan fashion) through the roof. The cinematography is great, every actor does a brilliant job in their respective roles and a lot of the little details- the pit and its leap to freedom, the ‘death by exile’ sequence and the undiluted awesome that is The Bat- are truly superb. In fact, if Nolan had just decided on a core storyline and focus and then stuck with it as a solid structure, I would probably still not have managed to wipe the inane grin off my face. But by being as ambitious as he has been, he has squeezed screen time away from where it really needed to be, and turned the whole thing into a structural mess that doesn’t really know where it’s going at times. It’s a tribute to how good the good parts are that the whole experience is still such good fun, but it’s such a shame to see a near-perfect film let down so badly.

The final thing I have to say about the film is simply: go and see it. Seriously, however badly you think this review portrays it, if you haven’t seen the film yet and you at all liked the other two (or any other major action blockbuster with half a brain), then get down to your nearest cinema and give it a watch. I can’t guarantee that you’ll have your greatest ever filmgoing experience there, but I can guarantee that it’ll be a really entertaining way to spend a few hours, and you certainly won’t regret having seen it.