Hitting the hay

OK, so it was history last time, so I'm feeling like a bit of science today. Here is your random question for today: are the 'leaps of faith' in the Assassin's Creed games survivable?

Between them, the characters of Altair, Ezio and Connor* jump off a wide variety of famous buildings and monuments across the five current games, but the jump that springs most readily to mind is Ezio’s leap from the Campanile di San Marco, in St Mark’s Square, Venice, at the end of Assassin’s Creed II. It’s not the highest jump made, but it is one of the most interesting and it occurs as part of the main story campaign, meaning everyone who’s played the game through will have made the jump and it has some significance attached to it. It’s also a well-known building with plenty of information on it.

[*Interesting fact: apparently, both Altair and Ezio translate as 'Eagle' in some form in English, as does Connor's Mohawk name (Ratonhnhaké:ton, according to Wikipedia) and the name of his ship, the Aquila. Connor itself translates as 'lover of wolves' from the original Gaelic]

The Campanile as it stands today is not the same one as in Ezio's day; in 1902 the original building collapsed, and it took ten years to rebuild. However, the new Campanile was made to be cosmetically (if not quite structurally) identical to the original, so current data should still be accurate. Wikipedia again tells me the brick shaft making up the bulk of the structure accounts for (apparently only) 50m of the tower's 98.6m total height, with Ezio's leap (made from the belfry just above) coming in at around 55m. With this information we can calculate Ezio's total gravitational potential energy lost during his fall; GPE lost = mgΔh, and presuming a 70kg bloke this comes to GPE lost = 37,730J (Δ is, by the way, the mathematical way of expressing a change in something- in this case, Δh represents a change in height). If his fall were made with no air resistance, then all this GPE would be converted to kinetic energy, where KE = mv²/2. Solving to make v (his velocity upon hitting the ground) the subject gives v = sqrt(2*KE/m), and replacing KE with our value of the GPE lost, we get v = 32.8m/s. This tells us two things: firstly, that the fall should take Ezio at least three seconds, and secondly that, without air resistance, he'd be in rather a lot of trouble.
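For anyone who wants to check the sums, here's a quick sketch of the no-air-resistance calculation (the 70kg mass and 55m drop are, as above, my own assumptions):

```python
import math

def impact_speed_no_drag(mass_kg, height_m, g=9.8):
    """Return (GPE lost in joules, impact speed in m/s), ignoring drag."""
    gpe = mass_kg * g * height_m       # GPE lost = m * g * delta-h
    v = math.sqrt(2 * gpe / mass_kg)   # from KE = m * v^2 / 2
    return gpe, v

gpe, v = impact_speed_no_drag(70, 55)
print(f"GPE lost: {gpe:.0f} J, impact speed: {v:.1f} m/s")
# GPE lost comes out around 37,730 J and the impact speed around 32.8 m/s
```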

But we must, of course, factor air resistance into our calculations, and to do so we must first make another assumption: that Ezio reaches terminal velocity before reaching the ground. Whether this assumption is valid or not we will find out later. The terminal velocity equation is just a rearranged form of the drag equation: Vt = sqrt(2mg/ρACd), where m = Ezio's mass (70kg, as presumed earlier), g = gravitational field strength (on Earth, 9.8m/s²), ρ = air density (on a warm Venetian evening at around 15 degrees Celsius, this comes out as 1.225kg/m³), A = the cross-sectional area of Ezio's falling body (call it 0.85m², presuming he's around the same size as me) and Cd = his body's drag coefficient (a number evaluating how well the air flows around his body and clothing, for which I shall pick 1 at complete random). Plugging these numbers into the equation gives a terminal velocity of 36.30m/s, which is an annoying number; because it's larger than our previous velocity value of 32.8m/s, calculated without air resistance, Ezio definitely won't have reached terminal velocity by the time he reaches the bottom of the Campanile, so we're going to have to look elsewhere for our numbers. Interestingly, the terminal velocity for a falling skydiver, without a parachute, is apparently around 54m/s, suggesting that my numbers are in roughly the correct ballpark but could do with some improvement (this is probably thanks to my chosen Cd value; 1 is a very high value, selected to give Ezio the best possible chance of survival, but ho hum).
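The terminal velocity step, again as a sketch using the same assumed numbers (70kg, 0.85m², and my randomly-picked Cd of 1):

```python
import math

def terminal_velocity(mass_kg, area_m2, cd, rho=1.225, g=9.8):
    """Vt = sqrt(2mg / (rho * A * Cd)): the speed at which drag balances weight."""
    return math.sqrt(2 * mass_kg * g / (rho * area_m2 * cd))

vt = terminal_velocity(70, 0.85, 1.0)
print(f"Terminal velocity: {vt:.2f} m/s")  # roughly 36.30 m/s
```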

Here, I could attempt to derive an equation for how velocity varies with distance travelled, but such things are complicated, time-consuming and do not translate well into being typed out. Instead, I am going to take on blind faith a statement attached to my 'falling skydiver' number quoted above: that it takes about 3 seconds to achieve half the skydiver's terminal velocity. We said that Ezio's fall from the Campanile would take him at least three seconds (just trust me on that one), and in fact it would probably be closer to four, but no matter; let's just presume he has jumped off some unidentified building such that it takes him precisely three seconds to hit the ground, at which point his velocity will be taken as 27m/s.
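If you don't fancy the algebra, you can instead just step the fall through time numerically. This sketch (same assumed numbers as before) integrates dv/dt = g - (ρACd/2m)v² in tiny time steps until 55m have been fallen; reassuringly, it spits out a final velocity of almost exactly the 27m/s taken on faith above, and a fall time of about three and a half seconds:

```python
def fall_with_drag(mass_kg=70, height_m=55, area_m2=0.85, cd=1.0,
                   rho=1.225, g=9.8, dt=1e-4):
    """Integrate dv/dt = g - (rho*A*Cd / 2m) * v^2 until height_m has been
    fallen. Returns (fall time in s, impact speed in m/s)."""
    k = rho * area_m2 * cd / (2 * mass_kg)
    t = v = s = 0.0
    while s < height_m:
        v += (g - k * v * v) * dt  # simple Euler step for the velocity
        s += v * dt                # distance fallen so far
        t += dt
    return t, v

t, v = fall_with_drag()
print(f"Fall takes about {t:.2f} s, hitting the ground at {v:.1f} m/s")
```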

Except he won’t hit the ground; assuming he hits his target anyway. The Assassin’s Creed universe is literally littered with indiscriminate piles/carts of hay and flower petals that have been conveniently left around for no obvious reason, and when performing a leap of faith our protagonist’s always aim for them (the AC wiki tells me that these were in fact programmed into the memories that the games consist of in order to aid navigation, but this doesn’t matter). Let us presume that the hay is 1m deep where Ezio lands, and that the whole hay-and-cart structure is entirely successful in its task, in that it manages to reduce Ezio’s velocity from 27m/s to nought across this 1m distance, without any energy being lost through the hard floor (highly unlikely, but let’s be generous). At 27m/s, the 70kg Ezio has a momentum of 1890kgm/s, all of which must be dissipated through the hay across this 1m distance. This means an impulse of 1890Ns, and thus a force, will act upon him; Impulse=Force x ΔTime. This force will cause him to decelerate. If this deceleration is uniform (it wouldn’t be in real life, but modelling this is tricky business and it will do as an approximation), then his average velocity during his ‘slowing’ period will come to be 13.5m/s, and that this deceleration will take 0.074s. Given that we now know the impulse acting on Ezio and the time for which it acts, we can now work out the force upon him; 1890 / 0.074 = 1890 x 13.5 = 26460N. This corresponds to 364.5m/s² deceleration, or around 37g’s to put it in G-force terms. Given that 5g’s has been known to break bones in stunt aircraft, I think it’s safe to say that quite a lot more hay, Ezio’s not getting up any time soon. So remember; next time you’re thinking of jumping off a tall building, I would recommend a parachute over a haystack.

N.B.: The resulting deceleration calculated in the last bit seems a bit massive, suggesting I may have gone wrong somewhere, so if anyone has any better ideas of numbers/equations then feel free to leave them below. I feel here is also an appropriate place to mention a story I once heard concerning an air hostess whose plane blew up. She was thrown free, landed in a tree on the way down… and survived.

EDIT: Since writing this post, this has come into existence, more accurately calculating the drag and final velocity acting on the falling Assassin. They’re more advanced than me, but their conclusion is the same; I like being proved right :).

The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on the fact that I had done a lot of related research whilst studying the installation of Linux. I feel that I should now come clean and point out that, between the time of that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times already, which has served to teach me even more about exactly how it works, and how it differs from its more mainstream competitors. So, since I don't have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and upon which Torvalds based his design (note- whilst Torvalds' first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp 'ih' sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use and distribute the software associated with it so long as they stayed within the license's terms (ie any modified version must be passed on under the same free terms- contrary to popular belief, the license doesn't actually forbid selling the software, only locking it up). The software written as part of the GNU project was extensive (including a still widely-used compiler) and the project did eventually produce an operating system, but it never caught on and, in regards to the achieving of its final aims, the project was a failure (although the GNU General Public License remains the most-used software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, using another Unix clone, MINIX, to write his code in and basing his design on MINIX's structure. Initially, he hadn't been intending to write a complete operating system at all, but rather a terminal emulator- a program that mimics, in software, the old-fashioned hardware terminals (a screen and keyboard wired up to a bigger computer) through which people once accessed mainframes. Unusually, Torvalds' emulator was written to run directly on the computer's hardware, independent of any operating system, and so had to do many of an operating system's jobs in its own right. As such, the two are somewhat related, and it wasn't long before Torvalds 'realised' he had in effect written a kernel for an operating system; since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on a kind of early internet forum called Usenet, saying that he was developing an operating system that was "starting to get ready", and asking for feedback concerning where MINIX was good and where it was lacking, "as my OS resembles it somewhat". He also, interestingly, said that his OS "probably never will support anything other than AT-harddisks". How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (Version 3.0 being released last year) and, despite some teething troubles with those who have considered it old-fashioned, those who thought MINIX code was stolen (rather than merely borrowed from), and Microsoft (who have since turned tail and are now one of the largest contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman’s open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code to the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the efficiency and minimise the virus vulnerability of the system- the latter being one of the key features that marks Linux (and OS X) out over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command a quite high percentage of your CPU capacity just to keep itself running, not counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep it running faster. The kernel's open source roots mean it is easy to modify if you have the technical know-how, and the community of followers surrounding it means that a solution to any problem you have with a standard distribution is usually only a few button clicks away. Disadvantages include a certain lack of user-friendliness to the uninitiated or less computer-literate user, since a lot of programs require an instruction typed into the command line; far fewer programs, especially commercial, professional ones, than Windows; an inability to process media as well as OS X (which is the main reason Apple computers appear to exist); and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many 'computer people' consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or ‘distros’ as they’re known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager, and desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components), or the package manager (that covers program installation, removal and editing), tend to be fairly ubiquitous (GNOME or KDE are common GUIs, and Synaptic the most typical package manager), but different people like their operating system to run in slightly different ways. Therefore, variations on these other components are bundled together with the kernel to form a distro, a complete package that will run as an operating system in exactly the same fashion as you would encounter with Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu’s older brother), Red Hat, Mandriva and Crunchbang- some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that any resident of the web cannot hope to avoid for any great period of time, and quite often doesn't want to- the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed to become a runaway success and something of a cultural phenomenon. By the end of 2011, before it had even been released in its final format, Minecraft had registered 4 million purchases and 4 times that many registered users, which isn't bad for a game that has never advertised itself and has spread semi-virally among nerdy gamers over its mere three-year history, having been made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some quite startlingly good music videos (check out the work of Captain Sparklez if you haven't already), litter YouTube, and many of the game's features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where hundreds and thousands of games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant, and so special?

Many, upon being asked this question, tend to revert to extolling the virtues of the game's indie nature. Created entirely without funding, as an experiment in gaming rather than a profit-making exercise, Minecraft's roots lie firmly in the humble sphere of independent gaming, and it shows. One obvious feature is the game's inherent simplicity- initially solely featuring the ability to wander around, place and destroy blocks, the controls are mainly (although far from entirely) confined to move and 'use', whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, 'blocky' nature of the game's graphics allows for simplicity of production and creates an iconic, retro aesthetic that makes it memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming, partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, thus allowing indie gamers to feel some degree of elitism in being good enough to work it out by themselves. This also ties in with the very nature of the game- another criticism used to be (and, to an extent, still is, even with the addition of the Enderdragon as a final win objective) that the game appears to be largely devoid of a point, existing only for its own purpose. This is entirely true- whether you view that as a bonus or a detriment is entirely your own opinion- and this idea of an unfamiliar, experimental game structure is another feature common, in one form or another, to a lot of indie games.

However, to me these do not seem entirely worthy of the name 'answers' regarding the question of Minecraft's phenomenal success. The reason I think this way is that they do not adequately explain exactly why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a quite disturbing, yet compelling, in-game experience, with almost as much intrigue and puzzle drawn from a set of game mechanics simpler even than those of Minecraft. It has also received critical acclaim often far in excess of Minecraft's (which has received a positive, but not wildly amazed, response from critics), and yet is still known only to a relative few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, as well as incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who's ever played it is likely to face- but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, but has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin's Creed mod (first example that sprang to mind).

So… why has Minecraft been so successful? Well, I'm going to be sticking my neck out here, but to my mind it's because it doesn't play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple and basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it's similar to Facebook, in that it takes a simple principle and then applies it to the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the USS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.

Scrum Solutions

First up- sorry I suddenly disappeared over the last week. I was away, and although I'd planned to tell WordPress to publish a few posts for me (I have a backlog now and everything), I was unfortunately away from my computer on Saturday and could not do so. Sorry. Today I would like to follow on from last Wednesday's post dealing with the problems faced in the modern rugby scrum, to discuss a few solutions that have been suggested for dealing with the issue, and even throw in a couple of ideas of my own. But first, I'd like to offer my thoughts on another topic that has sprung up amid the chaos of scrummaging discussions (mainly raised by rugby league fans): the place, value and even existence of the scrum.

As the modern game has got faster and more free-flowing, the key focus of the game of rugby union has shifted. Where once entire game plans were built around the scrum and (especially) lineout, nowadays the battle of the breakdown is the vital one, as is so ably demonstrated by the world’s current openside flanker population. Thus, the scrum is becoming less and less important as a tactical tool, and the extremists may argue that it is no more than a way to restart play. This is the exact situation that has been wholeheartedly embraced by rugby league, where lineouts are non-existent and scrums are an uncontested way of restarting play after a minor infringement. To some there is, therefore, something of a crossroads: do we as a game follow the league path of speed and fluidity at the expense of structure, or stick to our guns and keep the scrum (and set piece generally) as a core tenet of our game?

There is no denying that our modern play style, centred around fast rucks and ball-in-hand play, is faster and more entertaining than its slow, sluggish predecessor, if only for the fans watching it, and has certainly helped transform rugby union into the fun, flowing spectator sport we know and love today. That said, if we just wanted to watch players run with the ball and nothing else of any interest to happen, then we'd all just go and play rugby league, and whilst league is certainly a worthwhile sport (with, among other things, the most passionate fans of any sport on earth), there is no point trying to turn union into its clone. In any case, the extent to which league as a game has been simplified means that there are now hardly any infringements or stoppages to speak of, and that a scrum is a very rare occurrence. This is very much unlike its union cousin, and to reduce the scrum to a mere restart would perhaps not suit union as well as it suits league. Thus, it is certainly worth at least trying to prevent the scrum turning into a dour affair of constant collapses and resets before everyone dies of boredom and we simply scrap the thing.

(I know I’ve probably broken my ‘no Views’ rule here, but I could go on all day about the various arguments and I’d like to get onto some solutions)

The main problem with the modern scrum, according to the IRB, concerns the engage procedure- arguing (as do many other people) that trying to restrain eight athletes straining to let rip their strength is a tough task for even the stoutest front rower, they have this year changed the engage procedure to omit the 'pause' instruction from the 'crouch, touch, pause, engage' sequence. The pause was originally included both to help the early players structure their engagement (thus ensuring they didn't have to spend too much time bent down too far) and to ensure the referee had control over the engagement, but the IRB are now arguing that it has no place in the modern game and that it is time to see what effect getting rid of it will have (they have also replaced the 'engage' instruction with 'set', to reduce confusion about which syllable to engage on).

Whether this will work or not is a matter of some debate. It’s certainly a nice idea- speaking as a forward myself, I can attest that giving the scrum time to wind itself up is perhaps not the best way to ensure they come together in a safe, controlled fashion. However, what this does do is place a lot of onus on the referee to get his timing right. If the ‘crouch, touch, set’ procedure is said too quickly, it can be guaranteed that one team will not have prepared themselves properly and the whole engagement will be a complete mess. Say it too slowly, and both sides will have got themselves all wound up and we’ll be back to square one again. I suppose we’ll all find out how well it works come the new season (although I do advise giving teams time to settle back in- I expect to see a lot of packs waiting for a split second on the ‘set’ instruction as they wait for the fourth command they are so used to)

Other solutions have also been put forward. Many advocate a new law demanding gripping areas on the shirts of front row players to ensure they have something to get hold of on modern, skintight shirts, although the implementation of such a law would undoubtedly be both expensive and rather chaotic for all concerned, which is presumably why the IRB didn’t go for it. With the increasing use and importance of the Television Match Official (TMO) in international matches, there are a few suggesting that both they and the line judge should be granted extra responsibilities at scrum time to ensure the referee’s attention is not distracted, but it is understandable that referees do not want to be patronised by and become over-reliant on a hardly universally present system where the official in question is wholly dependent on whether the TV crews think that the front row binding will make a good shot.

However, whilst these ideas may help to prevent the scrum collapsing, with regards to the scrum’s place in the modern game they are little more than papering over the cracks. On their own, they will not change the way the game is played and will certainly not magically bring the scrum back to centre stage in the professional game.

For that to happen though, things may have to change quite radically. We must remember that the scrum as an invention is over 150 years old and was made for a game that has since changed beyond all recognition, so it could well be time that it began to reflect that. It’s all well and good playing the running game of today, but if the scrum starts to become little more than a restart then it has lost all its value. However, it is also true that if it is allowed to simply become a complete lottery, then the advantage for the team putting the ball in is lost and everyone just gets frustrated with it.

An answer could be (to pick an example idea) to turn the scrum into a more slippery affair, capable of moving back and forth far more easily than it can at the moment, almost more like a maul than anything else. This would almost certainly require radical changes regarding the structure and engagement of it- perhaps we should say that any number of players (between, say, three and ten) can take part in a scrum, in the same way as happens at lineouts, thereby introducing a tactical element to the setup and meaning that some sneaky trickery and preplanned plays could turn an opposition scrum on its head. Perhaps the laws on how the players are allowed to bind up should be relaxed, forcing teams to choose between a more powerful pushing setup and a looser one allowing for faster attacking & defending responses. Perhaps a law should be trialled demanding that if two teams engaged correctly, but the scrum collapsed because one side went lower than the other then the free kick would be awarded to the ‘lower’ side, thus placing a greater onus on technique over sheer power and turning the balance of the scrum on its head. Would any of these work? Maybe not, but they’re ideas.

I, obviously, do not have all the definitive answers, and I couldn’t say I’m a definite advocate of any of the ideas I voiced above (especially the last one, now I think how ridiculously impractical it would be to manage). But it is at least worth thinking about how much the game has evolved since the scrum’s invention, and whether it’s time for it to catch up.

The Dark Knight Rises

OK, I'm going to take a bit of a risk on this one- I'm going to dip back into the world of film reviewing. I've tried this once before over the course of this blog (about The Hunger Games) and it went about as well as a booze-up in a monastery (although it did get me my first ever comment!). However, never one to shirk a challenge, I thought I might try again, this time with something I'm a little more familiar with overall: Christopher Nolan's conclusion to his Batman trilogy, The Dark Knight Rises.

Ahem

Christopher Nolan has never been one to make his plots simple and straightforward (he did do Inception, after all), but most of his previous efforts have at least tried to focus on only one or two things at a time. In The Dark Knight Rises, however, he has gone ambitious, trying to weave no fewer than six different storylines into one film. Not only that, but four of those are trying to explore entirely new characters, and a fifth pretty much repeats the whole 'road to Batman' origins story that was done in Batman Begins. That places the onus of the film firmly on its characters and their development, and trying to do that properly for so many new faces was always going to push everyone for space, even in a film that's nearly three hours long.

So, did it work? Well… kind of. Some characters seem real and compelling pretty much from the off, in the same way that the Joker did in The Dark Knight- Anne Hathaway's Selina Kyle (not once referred to as Catwoman in the entire film) is a little bland here and there, and we don't get to see much of the emotion that supposedly drives her, but she is (like everyone else) superbly acted and does the 'femme fakickass' thing brilliantly, whilst Joseph Gordon-Levitt's young cop John Blake (who gets a wonderful twist to his character right at the end) is probably the most- and best-developed character of the film, adding some genuine emotional depth. Michael Caine is typically brilliant as Alfred, this time adding his own kick to the 'origins' plot line, and Christian Bale finally gets to do what no other Batman film has done before- make Batman/Bruce Wayne the most interesting part of the film.

However, whilst the main good guys' story arcs are unique among Batman films in being the best parts of the film, some of the other elements don't work as well. For someone who is meant to be a really key part of the story, Marion Cotillard's Miranda Tate gets nothing that gives her character real depth- lots of narration and exposition, but we see next to none of her for huge chunks of the film and she just never feels like she matters very much. Tom Hardy as Bane suffers from a similar problem- he was clearly designed in the mould of Ducard (Liam Neeson) in Begins, acting as an overbearing figure of control and power that Batman simply doesn't have (rather than the pure terror of the Joker's madness), but his actual actions never present him as anything other than just a device to give the rest of the film a reason to happen, and he never appears to have any genuine emotional investment or motivation in anything he's doing. Part of the problem is his mask- whilst clearly a key feature of his character, it makes it impossible to see his mouth and bunches up his cheeks into an immovable pair of blobs beneath his eyes, meaning there is nothing visible for him to express feeling with, effectively turning him into a blunt machine rather than a believable bad guy. There's also an entire arc concerning Commissioner Gordon (Gary Oldman) and his guilt over letting Batman take the blame for Harvey Dent's death that is barely explored, but thankfully it's so irrelevant to the overall plot that it might as well not be there at all.

It is, in many ways, a crying shame, because there are so many things the film does so, so right. The actual plot is a rollercoaster of an experience, pushing the stakes high and the action (in typical Nolan fashion) through the roof. The cinematography is great, every actor does a brilliant job in their respective roles and a lot of the little details- the pit and its leap to freedom, the ‘death by exile’ sequence and the undiluted awesome that is The Bat- are truly superb. In fact, if Nolan had just decided on a core storyline and focus and then stuck with it as a solid structure, I would probably still not have managed to wipe the inane grin off my face. But by being as ambitious as he has been, he has squeezed screen time away from where it really needed to be, and turned the whole thing into a structural mess that doesn’t really know where it’s going at times. It’s a tribute to how good the good parts are that the whole experience is still such good fun, but it’s such a shame to see a near-perfect film let down so badly.

The final thing I have to say about the film is simply: go and see it. Seriously, however bad this review makes it sound, if you haven’t seen the film yet and you at all liked the other two (or any other major action blockbuster with half a brain), then get down to your nearest cinema and give it a watch. I can’t guarantee that you’ll have your greatest ever filmgoing experience there, but I can guarantee that it’ll be a really entertaining way to spend a few hours, and you certainly won’t regret having seen it.

Why do we call a writer a bard, anyway?

In Britain at the moment, there are an awful lot of pessimists. Nothing unusual about this, as it’s hardly atypical of human nature, and my country has never been noted for its sunny, uplifting outlook on life anyway. Their pessimism is typically of the sort adopted by people who consider themselves too intelligent (read: arrogant) to believe in optimism and nice things, and nowadays tends to focus on Britain’s place in the world. “We have nothing world-class,” they tend to say, or “The Olympics are going to be totally rubbish” if they wish to be topical.

However, whilst I could dedicate an entire post to the ramblings of these people, I would probably have to violate my ‘no Views’ clause by the end of it, so will instead focus on one apparent inconsistency in their argument. You see, the kind of people who say this sort of thing also tend to be the kind of people who really, really like the work of William Shakespeare.

There is no denying that the immortal Bard (as he is inexplicably known) is a true giant of literature. He is the only writer of any form to be compulsory reading on the national curriculum and is known by just about everyone in the world, or at least the English-speaking part. He introduced between 150 and 1500 new words to the English language (depending on who you believe and how stringent you are in your criteria) as well as countless phrases ranging from ‘the green-eyed monster’ (Othello) to ‘a sorry sight’ (Macbeth), wrote nearly 40 plays, innumerable sonnets and poems, and revolutionised the theatre of his time. As such he is idolised above all other literary figures, Zeus in the pantheon of the gods of the written word, even in our modern age. All of which is doubly surprising when you consider how much of what he wrote was… well… crap.

I mean, think about it- Romeo and Juliet is about a romance that ends with both lovers committing suicide over someone they’ve only known for three days, whilst Twelfth Night is nothing more than a romcom (in fact the film ‘She’s the Man’ turned it into a modern one), and not a great one at that. Julius Caesar is considered even by fans to be one of the most boring ways to spend a few hours ever devised, the character of Othello is the dopiest human being imaginable, and A Midsummer Night’s Dream is about some fairies falling in love with a guy who turns into a donkey. That was considered, by Elizabethans, the very height of comedic expression.

So then, why is he so idolised? The answer is, in fact, remarkably simple: Shakespeare did stuff that was new. By the 16th century, theatre hadn’t really evolved from its Greek origins, and as such every play was basically the same. Every tragedy had the exact same formulaic plot line of tragic flaw-catharsis-death, which, whilst a good structure used to great effect by Arthur Miller and the guy who wrote the plot for the first God of War game, does tend to lose its interest after 2000 years of ceaseless repetition. Comedies and satyr plays had a bit more variety, but were essentially a mixture of stereotypes and pantomime that might have been entertaining had they not been mostly based on tired old stories, philosophy and mythology, and been so unfunny that they required a chorus (basically a staged audience meant to show the real audience how to react). In any case there was hardly any call for these comedies anyway- they were considered the poorer cousins to the more noble and proper tragedy, amusing sideshows to distract attention from the monotony of the main dish. And then, of course, there were the irreversibly fixed tropes and rules that had to be obeyed- characters were invariably noble and kingly (in fact it wasn’t until the 1920s that the idea of a classical tragedy of the common man was entertained at all) and spoke with rigid rhythm, making the whole experience more poetic than imitative of real life. The iambic pentameter was king, the new was non-existent, and there was no concept whatsoever that any of this could change.

Now contrast this with, say, Macbeth. This is (obviously) a tragedy, about a lord who, rather than failing to recognise a tragic flaw in his personality until right at the very end and then holding out for a protracted death scene in which to explain all of it (as in a Greek tragedy), starts off a good and noble man who is sent mental by a trio of witches. Before Shakespeare’s time a playwright could have been lynched for making such insulting suggestions about the noble classes (and it is worth noting that Macbeth wasn’t written until he was firmly established as a playwright), but Shakespeare was one of the first of a more common-born group of playwrights, raised an actor rather than an aristocrat. The main characters may be lords and kings, it is true (even Shakespeare couldn’t shake off the old tropes entirely, and it would take a long time for that to change), but the driving forces of the plot are all women, three of whom are old hags who speak in an irregular chant and make up heathen prophecies. Then there is an entire monologue dedicated to an old drunk bloke, speaking just as irregularly, mumbling on about how booze kills a boner, and even the main characters get in on the act, with Macbeth and his lady scrambling structureless phrases as they fairly shit themselves in fear of discovery. Hell, he even managed to slip in an almost comic moment of parody as Macbeth compares his own life to that of a play (which, of course, it is- he pulls a similar trick in As You Like It).

This is just one example- there are countless more. Romeo and Juliet was one of the first examples of romance used as the central driving force of a tragedy, The Tempest was the Elizabethan version of fantasy literature, and Henry V deserves a mention for coming up with some of the best inspirational quotes of all time. Unsurprisingly, whilst Shakespeare was able to spark a revolution at home, other countries were rocked by his radicalism- the French especially were sharply divided into two camps, one embracing this theatrical revolution and the other (with Voltaire, famously, ending up among them) vehemently opposing it. It didn’t do any good- the wheels had been set in motion, and for the next four centuries theatre and literature continued (and continues) to evolve at an unprecedented rate. Nowadays, the work of Shakespeare seems to us as much of a relic as the old Greek tragedies must have appeared to him, but as theatre has moved on, so too have our expectations of it (such as, for instance, jokes that are actually funny and speech we can understand without a scholar on hand). Shakespeare may not have told the best stories or written the best plays to our ears, but that doesn’t mean he wasn’t the best playwright.