Hope and Obama

Before I start writing this post, a brief disclaimer: I am not American, do not live there and do not have extensive first-hand experience of the political situation over there. This post is inspired entirely by stuff I’ve seen other people talk about online, plus a few bits of joining the dots from me, so if anyone feels I’ve gone wildly off-target please drop me a line in the comments. OK? Good, let’s get started.

The ascendancy of Barack Hussein Obama to the Presidency of the USA in 2009 was among the most significant events in recent history. Not only did he become the first black person to sit in the Oval Office, he put the Democrats back in power (representing a fairly major shift in direction for the country after eight years under George Bush Jnr.) and managed to put his party in control of Congress too, the first time a Democratic leader had been in that position for quite some time. With bold claims regarding the wars in Afghanistan and Iraq, both of which had been… talking points during Bush’s time in charge, and big plans regarding the US healthcare system, this had all the hallmarks of a presidency dedicated to making change happen. Indeed, change was the key buzzword of Obama’s campaign: change to the punishing effects of US society on its young and poor, change for the recession-hit economy, and even a change in the type of person in the White House (Bush had frequently been portrayed, rather unjustly for a man of notoriously quick wit, as stupid and socially incapable by satirists and left-leaning commentators, whilst even the right would find it hard to deny Obama’s natural charisma and intelligent, upright bearing). All of this was promised to voters, and it was a dream many took with them to the polling stations.

One of the key demographics the Democrats targeted and benefited from with this ‘pro-change’ style of campaign was the youth vote; early twenty-somethings or even late teens, many of whom were voting in their first elections, who had grown up both physically and politically during the Bush administration and railed against his management of everything from the economy to the welfare system with all the ardour and uncluttered train of thought of young people everywhere. I should know: living through the period as a young person in a left-leaning family, getting my news via the liberally-inclined BBC (and watching too much satirical comedy), one could hardly escape the idea that Bush was an absolute moron who knew nothing about running his country. And this was whilst getting daily first-hand experience of what a left-wing government was like in Britain- I can imagine that to a young American with a similar outlook and position at the time, surrounded by right-leaning sentiment on all sides, the prospect of a Democratic president dedicated to change would have seemed like a shining beacon of hope for a brighter future. Indeed, the apparent importance of the youth vote to Obama’s success was illustrated during his 2012 re-election campaign: when the news broke that Microsoft were planning to release a new Halo videogame on election day, conspiracy theorists had a wonderful time suggesting that Microsoft were embroiled in a great Republican plot to distract the youth vote by having them play Halo all day, thus meaning they couldn’t vote Democrat*.

Now, let us fast forward to the 2012 election. Obama won, but narrowly- and given he was up against a candidate whose remarks that he ‘didn’t care about the very poor’, and that the windows in passenger aircraft should be able to be opened, were very widely circulated and mocked, the result was far too close for comfort (even if, despite what some pundits and conservative commentators would have had you believe, all the pre-election statistics indicated a fairly safe Democrat victory). Whilst the airwaves weren’t exactly awash with anti-Obama messages, it wasn’t hard to find disillusionment and cynicism regarding his first term in office. For me, the whole thing was summed up by the attitude of Jeph Jacques, the cartoonist behind the webcomic ‘Questionable Content’; reading through his back catalogue, you can see him frequently restraining himself from verbalising his Obama fandom in the comments below his comics during the 2008 election, yet come election season in 2012 he chose to publish this. That comic pretty much sums it up: a whole generation had been promised change, and change had refused to come on a sufficiently large scale. The youthful optimism of his rise to power was replaced by something more akin to the weariness Obama himself displayed during the first live TV debate, and whilst I’m sure many of these somewhat disillusioned voters still voted Democrat (I mean, he still won, and preliminary statistics suggest voter turnout actually rose in 2012 compared to 2008), the prevailing mood seemed to be one less of optimism than of ‘better him than Romney’.

Exactly what was to blame for the lack of the promised change is a matter of debate; apologists may point to the difficulty of getting such radical (by American standards) health reforms and the like through a decidedly moderate Congress, followed by the difficulty of getting anything at all through once Congress became Republican-controlled, whilst the more cynical or pro-Republican would probably make some statement about the corporate-sponsored nature of the Democratic party/American political system, or suggest that President Obama simply isn’t quite as good a politician/person (depending on the extent of your cynicism) as he appeared to be in 2008. Whatever the answer, the practical upshot has been quite interesting, as it has allowed one to watch an entire generation discover cynicism for the first time. All those hopes and dreams of some brave new vision for America went steaming face-first into the bitter reality of the world and of politics, and the dream slowly fell apart. I am not old enough to say definitively that this is a pattern that has repeated itself down the ages, but nonetheless I found the whole escapade fascinating in a semi-morbid way, and I will be intrigued to see if/when it happens again.

Damn, I’m really going for conclusion-less posts at the moment…

*Interestingly, this kind of tactic has, so the story goes, been deliberately used in the past to achieve precisely the opposite effect. When Boris Yeltsin attempted to get re-elected as Russian president in 1996, voting day was designated a public holiday. Unfortunately, it was soon realised that many urban Russians, Yeltsin’s main voter base, were going to take this as a cue for a long weekend in the country (presumably hunting bears or whatever else Russians do in their second homes in Siberia) rather than go and vote, so Yeltsin went to the makers of a telenovela (a kind of South American soap opera) called Tropikanka that was massively popular in the country, and got them to make three brand-new episodes to be aired on election day. This kept the city-dwellers at home (many country spots didn’t have TV access), meaning they were around to go and vote. Yeltsin duly won, with 54.4% of the vote.

FILM FORTNIGHT: The King’s Speech

Ah, Tom Hooper, whatever are we to do with you: a professional Oscar-bagger whose adherents are as vociferous in their praise of his directorial skill as his critics are in slagging him off. This is not to say that he makes bad films (although I have seen one reviewer call Les Miserables the third worst film of 2012; a somewhat bold claim), but is more a reflection of the fact that Hooper’s style of film-making is pretty much what the Academy considers the cinematic equivalent of nirvana. This very… specific style has not endeared him to everyone, particularly those who think his films are all the more dull and predictable for it.

Where was I again? Oh yes; The King’s Speech, the most critically successful of Hooper’s films to date, bagging a Golden Globe, seven BAFTAs and four Oscars. For the four of you who never quite heard what the plot was about: our gaze is cast back to 1925 and onto the then Duke of York, Prince Albert (Colin Firth), second in line to the throne behind his older brother David (Guy Pearce). Albert is among the most interesting Royals in (relatively) recent history and was the father of our current Queen, but the part of his character we are most interested in here is his heavily pronounced stammer. This impediment is hardly conducive to being comfortable in a highly public role, and he tries multiple methods to cure himself; but this is the early 20th century, and we are yet to see the extraordinary advances in medical science that came along in the decades after the Second World War. As such, the treatments offered are somewhat Victorian in nature and don’t work, leading to increasing frustration from the Prince, to the point where he basically decides to give up. His wife Elizabeth (Helena Bonham Carter), however, is more determined, and puts him in touch with Lionel Logue (Geoffrey Rush), an Australian speech therapist with somewhat unconventional methods (and indeed mannerisms) for the time.

The changing relationship between Logue and the Prince is the central plot thread for the remainder of the film; one a rather bluntly-spoken commoner, the other a man who has spent his entire life being deferred to, with the complex rules of formality and tradition acting as his social bodyguard. That this is going to cause tension is obvious from the opening scene, and is indicative of one of the film’s most prominent flaws: the near-total lack of suspense. This does not have to be a bad thing necessarily; many a good film has succeeded without resorting to tension or anticipation, but every scene of The King’s Speech can pretty much be predicted from its first five seconds, and sticking around to watch frequently adds nothing to the central plotline.

It’s a shame really, because there are other aspects (and other scenes) that the film gets magnificently right, particularly those scenes that focus on the transitional state of the world at the time. This particular point in history was a turbulent one; times were changing, the new and old were trying (and in many cases failing) to coexist, and the establishment was frequently struggling to cope with all this newness. No establishment embodied this more than the monarchy; these were the last days of nobility in all its pomp and finery, the days when it finally realised how much of its power had been stripped away and that it could no longer go on pretending to be a divine figure of authoritative power. As the film makes clear, monarchies had been falling across Europe, and others were to be reduced to puppets beneath new regimes; whilst this theme is never explicitly mentioned or made a central part of the film, it subtly pervades the whole piece in a way that makes one feel genuine sympathy for the characters concerned. It is present in the way the Prince treats his children and the stories he tells of how his father treated him, in the methods that work for him and the methods that don’t, even in the way characters address one another. All in all, a wonderful piece of directing to work in there; I only wish it had taken centre stage more frequently. Perhaps then the film wouldn’t perpetually feel as if it were 15 minutes away from finishing.

Mention must of course be made of the actors. Colin Firth took three ‘Best Actor’ prizes for his role as the king, and I found his portrayal incredibly interesting. Firth has always brought a particular brand of confidence, even cockiness, to the roles he plays, and is frequently cast as a controlling figure of power for this very reason; but here he is required to express both the power and authority of a monarch and the fragility of a patient. The film’s plot, and in particular Geoffrey Rush’s perfectly executed Logue, mean that these two opposing images must frequently share the limelight and come into conflict with one another, whilst all the while having to make themselves felt through the Prince’s stammer. This would be a mean task for even the most skilled of actors, and for someone such as Firth, whom I have never seen portray weakness in this way, it is a particularly interesting challenge. I wouldn’t say that he pulls it off perfectly, or that I find his performance massively compelling (he doesn’t quite manage to express how hard his character is trying, from my point of view), but it is nonetheless a good attempt at a very challenging role. He may have been somewhat hindered by the fact that, as usual, Bonham Carter manages to steal the show, once again demonstrating her extraordinary versatility as an actress with a striking, and occasionally even funny, portrayal of the Duchess (a woman we would now refer to as the Queen Mother). That she and Rush only took home one ‘Supporting Actress/Actor’ award apiece is, to me, quite an eyebrow-raiser, even if they were up against The Fighter. Some other performances, most notably Timothy Spall turning up as Winston Churchill for no readily explained reason, are less beneficial to the film and often feel as though they are taking screentime away from what’s important (there’s a fine line between ‘interesting cameo’ and ‘why the hell are they here?’), but thankfully they are not prevalent enough for this to be a massive problem.

To me, The King’s Speech is far from a perfect film: all too frequently it is not terribly compelling, large pieces of the plot seem to serve very little purpose, the script takes significant artistic liberties with historical fact (yes, I know that shouldn’t be important, but I’m too much of a nerd about these things), the plot is somewhat formulaic and predictable, and it can’t quite seem to make up its mind over what it is, thematically speaking, about. However, it is executed so exquisitely that these flaws, in part, hardly matter; yes, they’re there, yes, the film is imperfect, but that’s no reason not to sit back and enjoy the experience. Did The King’s Speech deserve two ‘Best Picture’ awards? Perhaps not. Is it a bad film? Not a chance. Perhaps not worth digging around to find, but certainly worth watching if you get the chance.

War in Three Dimensions

Warfare has changed a lot in the last century. Horses have become redundant, guns have become reliable, machine guns have become light enough to carry, and bombs have become powerful enough to totally annihilate a small country if the guy with the button so chooses. But perhaps more significant than the way the hardware has changed is the way warfare itself has changed; tactics and military structure are unrecognisable compared to the pre-war era, and we must now fight wars surrounded by a political landscape, at least in the west, that does not approve of open conflict. However, next year marks the 100th anniversary of a military innovation that not only represented a massive hardware upgrade at the time, but has changed almost beyond recognition in the century since and has fundamentally changed the way we fight wars: the use of aeroplanes in warfare.

The skies have always been a platform to be exploited by the cunning military strategist; balloons were frequently used for messaging long before they were able to carry humans and be used for reconnaissance in the early 20th century, and for many years the only way of reliably sending a complicated message over any significant distance was via homing pigeon. It was, therefore, only natural that the Wright brothers had barely touched down after the first flight of ‘Flyer I’ when the first suggestions of a military application for the technology were being made. However, early attempts at powered flight could not be sustained for very long, and even subsequent improvements failed to produce anything capable of carrying a machine gun. By the First World War, aircraft had become advanced enough to make controlled, sustained, two-person flight at an appreciable height a reality, and both the Army and Navy were quick to incorporate air divisions into their structures (in the British Armed Forces, the Royal Flying Corps and the Royal Naval Air Service respectively). These air forces were initially only used for reconnaissance and for ‘spotting’ to help the artillery get their eye in; the atmosphere was quite peaceful so far above the battlefield, and pilots and observers of opposing aircraft would frequently wave to one another during the early years of the war.

As time passed and the conflict grew ever bloodier, these exchanges became less friendly. Before long observers were carrying supplies of bricks into the air with them to throw at enemy aircraft, and the Germans went so far as to develop steel darts that could reportedly split a man in two; whilst almost impossible to aim in a dogfight, these darts were incredibly dangerous for those on the ground. By 1916 aircraft had grown advanced enough to carry bombs, enabling a (slightly) more precise method of destroying enemy targets than artillery, and before long both sides could equip these bombers with turret-mounted machine guns for the observers to fire at other aircraft; given that the aircraft of the day were basically wire-and-wood cages covered in fabric, these guns could cause vast amounts of damage, and the men within the planes had practically zero protection (and no parachutes either, since the British top brass believed these might encourage cowardice).

To further protect their bombers, both sides began to develop fighter aircraft as well: smaller, usually single-man planes with fixed machine guns operated by the pilot (and synchronised by a clever bit of mechanical gearing to fire through the propeller; earlier attempts at doing this without blowing the propeller to pieces had simply consisted of putting armour plating on the back of the propeller blades, which not infrequently caused bullets to bounce back and hit the pilot). It wasn’t long before these fighters were given more varied orders, ranging from trench strafing to offensive patrols (where they would actively go looking for other aircraft to attack). Perhaps the most dangerous of these objectives was balloon strafing; observation balloons were valuable pieces of reconnaissance equipment, and bringing one down generally required a pilot to get past the large escort of fighters that accompanied it.
Towards the end of the war, the armed forces began to realise just how central to their tactics air warfare had become, and in 1918 the RFC and RNAS were combined to form the Royal Air Force, the first independent air force in the world. The RAF celebrated its inception three weeks later when the German air ace Manfred von Richthofen (aka the Red Baron), who had amassed 80 confirmed victories despite frequently flying against superior numbers or hardware, was shot down (although von Richthofen was flying close to the ground at the time in pursuit of an aircraft, and analysis of the shot that killed him suggests he was hit by a ground-based anti-aircraft gunner rather than the Canadian fighter pilot credited with downing him; exactly who fired the fatal shot remains a mystery).

By the time the Second World War rolled around, things had changed somewhat; in place of wire-and-fabric biplanes, sleeker metal monoplanes were in use, with more powerful and efficient engines making air combat a far faster affair. Air raids could be conducted over far greater distances since more fuel could be carried, and this proved well suited to the style of warfare the war generated; rather than the largely static battle lines of the First World War, the early years of WW2 consisted of countrywide occupation in Europe, whilst the battlegrounds of North Africa and Soviet Russia were dominated by tank warfare and moved far too fluidly for frontline air bases to be safe. Indeed, air power featured prominently in neither of those land campaigns; but over the continent itself, air warfare reigned supreme. As the German forces dominated mainland Europe, they launched wave after wave of long-distance bombing raids at Britain in an effort to gain air superiority and cripple the Allies’ ability to fight back when they attempted to cross the Channel and invade. However, the British had, unbeknownst to the Germans, perfected their radar technology, and were thus able to use their relatively meagre force of fighters to greatest effect against the German bombing assault. This, combined with some very good planes and flying on the part of the British and an inability to choose the right targets on the part of the Germans, allowed the Battle of Britain to swing in favour of the Allies and turned the tide of the war in Europe.

In the later years of the war, the Allies turned the tables on a German military crippled by the Russian campaign after the defeat at Stalingrad, and began their own orchestrated bombing campaign. With the advances in anti-aircraft technology since the First World War, bombers were forced to fly higher than ever before, making it far harder to hit their targets; thus both sides developed the tactic of ‘carpet bombing’, whereby they would simply load up as big a plane as they could with as many bombs as it could carry and drop them all over an area in the hope that at least one would hit the intended target. This imprecise tactic was only moderately successful when it came to destroying key military targets, and was responsible for the vast scale of the damage both sides’ bombing campaigns inflicted on cities. In the war in the Pacific, where space on aircraft carriers was at a premium and Lancaster Bombers would have been impractical, both sides kept to the tactic of dive bombing, but such attacks were very risky and there was still no guarantee of a successful hit. By the end of the war, air power was rising to prominence as possibly the most crucial theatre of combat, but we were reaching the limits of what the hardware was capable of; our propeller-driven, straight-winged fighter aircraft seemed incapable of breaking the sound barrier, and our bombing attacks couldn’t safely hit any target less than a mile wide. Something was clearly going to have to change; next time, I’ll investigate what did.

The End of The World

As everyone who understands the concept of buying a new calendar when the old one runs out should be aware, the world is emphatically not due to end on December 21st this year; the Mayan ‘prophecy’ basically amounts to one guy’s arm getting really tired and him deciding ‘sod carving the next year in, it’s ages off anyway’. Most of you should also be aware of the kind of cosmological theories that talk about the end of the world/the sun’s expansion/the universe committing suicide, always hastily suffixed with an ‘in 200 billion years or so’ to make the point that there’s really no need to worry and that the world is probably going to be fine for the foreseeable future; or at least, that by the time anything serious does happen we’re probably not going to be in a position to complain.

However, when thinking about this, we come across a rather interesting, if slightly macabre, gap; an area nobody really wants to talk about thanks to a mixture of lack of certainty and simple fear. At some point in the future, we as a race and a culture will surely not be here. Currently, we are. Therefore, between those two points, the human race is going to die.

Now, from a purely biological perspective there is nothing especially surprising or worrying about this; species die out all the time (in fact, we humans are getting so good at inadvertent mass slaughter that somewhere between 2 and 20 species are going extinct every day), and others evolve and adapt to slowly change the face of the earth. We humans and our few thousand years of existence, and especially our mere two or three thousand years of organised mass society, are the merest blip in the earth’s long and varied history. But we are also unique in more ways than one: the first species to, to a very great extent, remove itself from the endless fight for survival and start taking control of events once so far beyond our imagination as to be put down to the work of gods. If the human race is to die, as it surely will one day, we are simply getting too smart and too good at thinking about these things for it to be the kind of gradual decline and change of a delicate ecosystem that characterises most ‘natural’ extinctions. If we are to go down, it’s going to be big and it’s going to be VERY messy.

In short, with the world staying as it is and as it has been for the past few millennia, we’re not going to die out any time soon. Nor is this biologically unusual, for when a species goes extinct it is usually the result of either another species in direct competition out-competing it and causing it to starve, or a change in environmental conditions to which it is no longer well adapted. But once again, human beings appear to be rather above all this; having carved out what isn’t so much an ecological niche as a categorical redefining of the way the world works, we have no other creature that could be considered our biological competitor, and the thing that has always set humans apart ecologically is our ability to adapt. From the ice ages, when we hunted mammoth, to the African deserts, where the San people still live in isolation, there are very few things the earth can throw at us that are beyond the wit of humanity to live through. Especially a human race that is beginning to look upon terraforming and cultured food as a pretty neat idea.

So, if our environment is going to change sufficiently for us to begin dying out, things are going to have to change not only in the extreme but very quickly as well (well, quickly in geological terms at least). This required pace of change limits the potential extinction options to a very small, select list. Most of these you could make a disaster film out of (and in most cases somebody has), but one that is slightly less dramatic (although they still did end up making a film about it) is global warming.

Some people are adamant that global warming is either a) a myth, b) nothing to do with human activity, or c) both (which kind of seems a contradiction in terms, but hey). These people can be safely categorised under ‘don’t know what they’re *%^&ing talking about’, as any scientific explanation that covers all the available facts cannot fail to reach the conclusion that global warming not only exists, but that it’s our fault. Not only that, but it could very well genuinely screw up the world- we are used to the idea that, in the long run, somebody will sort it out, we’ll come up with a solution and it’ll all be OK, but one day we might have to come to terms with a state of affairs in which the combined efforts of our entire race are simply not enough. It’s like the way cancer always happens to someone else, until one morning you find a lump. One day, we might fail to save ourselves.

The extent to which global warming looks set to screw around with our climate is currently unclear, but some potential scenarios are extreme to say the least. Nothing is ever quite going to match the picture painted by The Day After Tomorrow (for the record, the Gulf Stream would take around a decade to shut down if/when it does so), but some scenarios are pretty horrific. Some predict the flooding of vast swathes of the earth’s surface, including most of our biggest cities; others predict mass desertification, a collapse of many of the ecosystems we rely on, or polar conditions sweeping across Northern Europe. The prospect of the human population being decimated is a very real one.

But destroyed? Totally? After thousands of years of human society slowly getting the better of and dominating all that surrounds it? I don’t know about you, but I find that quite unlikely- at the very least, it seems to me that it’s going to take more than one wave of climate change to finish us off completely. So, if climate change is unlikely to kill us, what else is left?

Well, in rather a nice, circular fashion, cosmology may have the answer, even if we never manage to pull off a miracle and hang around long enough for the sun’s expansion to get us. We may one day be able to blast asteroids out of existence. We might be able to stop the supervolcano that is Yellowstone National Park blowing itself to smithereens when it next erupts, as it is due to do in the not-too-distant future (we also might fail at both of those things, and let either wipe us out, but ho hum). But could we ever prevent a gamma ray burst hitting us, of a power sufficient, according to one hypothesis, to have caused the third largest extinction in earth’s history the last time it happened? Well, we’ll just have to wait and see…

Scrum Solutions

First up- sorry I suddenly disappeared last week. I was away, and although I’d planned to tell WordPress to publish a few posts for me (I have a backlog now and everything), I was unfortunately away from my computer on Saturday and could not do so. Sorry. Today I would like to follow on from last Wednesday’s post on the problems faced by the modern rugby scrum, and discuss a few solutions that have been suggested for dealing with the issue, even throwing in a couple of ideas of my own. But first, I’d like to offer my thoughts on another topic that has sprung up amid the chaos of scrummaging discussions (mainly among rugby league fans): the place, value and even existence of the scrum.

As the modern game has got faster and more free-flowing, the key focus of rugby union has shifted. Where once entire game plans were built around the scrum and (especially) the lineout, nowadays the battle of the breakdown is the vital one, as is so ably demonstrated by the world’s current openside flanker population. The scrum is thus becoming less and less important as a tactical tool, and the extremists may argue that it is now no more than a way to restart play. This is exactly the situation that has been wholeheartedly embraced by rugby league, where lineouts are non-existent and scrums are an uncontested way of restarting play after a minor infringement. To some, the game is therefore at something of a crossroads: do we follow the league path of speed and fluidity at the expense of structure, or stick to our guns and keep the scrum (and the set piece generally) as a core tenet of our game?

There is no denying that our modern style of play, centred around fast rucks and ball-in-hand play, is faster and more entertaining than its slow, sluggish predecessor, if only for the fans watching, and it has certainly helped transform rugby union into the fun, flowing spectator sport we know and love today. Having said that, if we just wanted to watch players run with the ball and nothing else of any interest to happen, we’d all go and play rugby league, and whilst league is certainly a worthwhile sport (with, among other things, the most passionate fans of any sport on earth), there is no point trying to turn union into its clone. In any case, league has been simplified to the extent that there are now hardly any infringements or stoppages to speak of, and a scrum is a very rare occurrence; union is very much unlike its cousin in this respect, and to do away with the contested scrum would perhaps not suit union as well as it suits league. It is therefore certainly worth at least trying to prevent the scrum turning into a dour affair of constant collapses and resets before everyone dies of boredom and we simply scrap the thing.

(I know I’ve probably broken my ‘no Views’ rule here, but I could go on all day about the various arguments and I’d like to get onto some solutions)

According to the IRB, the main problem with the modern scrum concerns the engage procedure. Arguing (as do many other people) that trying to restrain eight athletes straining to let rip their strength is a tough task for even the stoutest front rower, they have this year changed the engage procedure to omit the ‘pause’ instruction from the ‘crouch, touch, pause, engage’ sequence. Originally included both to help early players structure their engagement (ensuring they didn’t have to spend too long bent down) and to give the referee control over the engagement, the ‘pause’, they now argue, has no place in the modern game, and it is time to see what effect getting rid of it will have (they have also replaced the ‘engage’ instruction with ‘set’, to reduce confusion about which syllable to engage on).

Whether this will work or not is a matter of some debate. It’s certainly a nice idea- speaking as a forward myself, I can attest that giving the scrum time to wind itself up is perhaps not the best way to ensure the packs come together in a safe, controlled fashion. However, it does place a lot of onus on the referee to get his timing right. If the ‘crouch, touch, set’ procedure is said too quickly, it can be guaranteed that one team will not have prepared properly and the whole engagement will be a complete mess; say it too slowly, and both sides will have got themselves all wound up and we’ll be back to square one again. I suppose we’ll all find out how well it works come the new season (although I do advise giving teams time to settle back in- I expect to see a lot of packs hesitating for a split second on the ‘set’ instruction as they wait for the fourth command they are so used to).

Other solutions have also been put forward. Many advocate a new law mandating gripping areas on the shirts of front row players, to ensure they have something to get hold of on modern skintight shirts, although implementing such a law would undoubtedly be both expensive and rather chaotic for all concerned, which is presumably why the IRB didn’t go for it. With the increasing use and importance of the Television Match Official (TMO) in international matches, a few have suggested that both the TMO and the touch judge be granted extra responsibilities at scrum time to ensure the referee’s attention is not stretched too thin; but it is understandable that referees do not want to be patronised by, and become over-reliant on, a system that is far from universally available, and in which the official in question is wholly dependent on whether the TV crews think the front row binding will make a good shot.

However, whilst these ideas may help to prevent the scrum collapsing, with regard to the scrum’s place in the modern game they are little more than papering over the cracks. On their own, they will not change the way the game is played, and they will certainly not magically bring the scrum back to centre stage in the professional game.

For that to happen, though, things may have to change quite radically. We must remember that the scrum as an invention is over 150 years old and was designed for a game that has since changed beyond all recognition, so it could well be time for it to reflect that. It’s all well and good playing the running game of today, but if the scrum becomes little more than a restart then it has lost all its value; equally, if it is allowed to become a complete lottery, then the advantage for the team putting the ball in is lost and everyone just gets frustrated with it.

One answer could be (to pick an example idea) to turn the scrum into a more slippery affair, capable of moving back and forth far more easily than it can at the moment, almost more like a maul than anything else. This would almost certainly require radical changes to its structure and engagement. Perhaps we should say that a variable number of players (between, say, three and ten) can take part in a scrum, in the same way as happens at lineouts, thereby introducing a tactical element to the setup and meaning that some sneaky trickery and preplanned plays could turn an opposition scrum on its head. Perhaps the laws on how the players are allowed to bind should be relaxed, forcing teams to choose between a more powerful pushing setup and a looser one allowing for faster attacking and defending responses. Perhaps a law should be trialled demanding that, if two teams engage correctly but the scrum collapses because one side went lower than the other, the free kick is awarded to the ‘lower’ side, thus placing a greater onus on technique over sheer power and upending the balance of power in the scrum. Would any of these work? Maybe not, but they’re ideas.

I obviously do not have all the definitive answers, and I couldn’t say I’m a definite advocate of any of the ideas voiced above (especially the last one, now that I think about how ridiculously impractical it would be to manage). But it is at least worth thinking about how much the game has evolved since the scrum’s invention, and whether it’s time for the scrum to catch up.