Woolwich and Double Standards

Last week, a crime rocked Britain. Drummer Lee Rigby, a soldier in the Royal Regiment of Fusiliers, was walking outside the Royal Artillery Barracks in Woolwich, London (he was off-duty at the time), when two men ran him down in their car. They then proceeded to attack him with knives and a cleaver, screaming all the while, and attempted to behead him. They then hung around, talking to passers-by, until the police arrived, whereupon they charged at the officers. Both men were shot and wounded.

This, on its own, would have been sufficient to make headlines, but then the always-provocative topics of race and religion entered the fray. Both attackers, Michael Olumide Adebolajo and Michael Oluwatobi Adebowale, were British-born men of Nigerian descent and Muslims, and declared their attack an act of revenge for the British military’s killing of Muslims in the Afghanistan war. This led to them, once again, falling under the title of ‘radical Islamists’, and the attack prompted widespread outrage, of all kinds, across Britain. Most people were, naturally, merely shocked that such a horrific attack had taken place, particularly against one of the soldiers of which our country is so proud, and there was a massive outpouring of sympathy for his family. However, others went on the attack, with cases of anti-Muslim incidents including verbal abuse, assault, graffiti and even attempted arson, prompting the police to mobilise as many officers as they could get their hands on. The Queen was even informed, and issued a statement appealing for calm.

There are a couple of things about this frankly horrific incident that I believe are worth bearing in mind- and, just so nobody accuses me of ripping the following arguments off, I should point out that they were generated from stuff I saw online following the event. The first considers the news we receive; every few months or so, the news will report a case of a black, Muslim or gay person being murdered, assaulted or some such in what is dubbed ‘a hate crime’. And these are only the ones that make the news; around 700 people are murdered every year in the UK (that’s an average of nearly two per day), and I personally find it unlikely that only the two or three of those that get reported are motivated by racial or sexual hatred- more ‘casual’ hate crimes must number well into the thousands. I don’t know, I couldn’t find any numbers.

Anyway, the point is this; these hate crimes generally make it onto the evening news, where they are covered by a serious-looking journalist standing by a police line before everyone moves on. People watching will think ‘oh how terrible’, and then start thinking about what happened in the stock exchange today or who was wearing what dress at some star-spangled event the night before. Point being, people generally don’t make a massive deal out of it; an exception was perhaps the awful murder of Indian student Anuj Bidve last year, but his killer (Kiaran Stapleton) claimed it wasn’t a hate crime so much as Bidve simply being the closest available person to shoot. However, even then, the case didn’t garner anywhere near the coverage the Woolwich incident has done; multiple protests from all political angles have been organised, every major political or religious figure has made some statement or other, and the whole country has been gripped by the shock. Not only that, but this attack has been labelled in some quarters not as a ‘hate crime’ but as terrorism.

Now, never let it be said that I think the outpouring of grief and emotion over this case is in any way wrong; a seemingly random victim, innocent of any crime against his attackers, has been viciously murdered in cold blood in a public, supposedly safe, place, and I cannot imagine what his family must be going through. I feel I should also say explicitly that nothing I say from here on in is intended to insult the memory of Lee Rigby: may he rest in peace. However, I do think that the public outcry over the event has revealed something of a double standard in the public eye; when a Muslim is murdered by some white extremist nutter, it’s a hate crime that we all consider gravely, but when a white guy gets murdered by a pair of Muslim nutters, the entire country is wrapped up in a frenzy. I understand that it would be impractical (not to mention depressing) for us to get this wrapped up every time somebody is murdered, but treating each case differently based upon who kills whom is simply not fair, at least on the families of the half-forgotten hate crime victims.

And then there are the accusations of terrorism. Now, I will be the first to admit that the line between terrorism and hate crime is, particularly in the modern age, a narrow one; both involve the killing of innocent people based, usually, on the fact that the perpetrators violently disagree with a practice or philosophy ostensibly held by the victim. The difference is that terrorism is typically an organised campaign intended to strike terror into the hearts of the group or nation being targeted, in order to make them give in to the perpetrators’ demands (which, historically, never works), whilst a hate crime is simply done out of anger or, in this case, vengeance. By that measure the Woolwich incident was a hate crime, nothing more and nothing less, and it was horrible for precisely that reason.

My point is that there is, frankly, only one reason that some quarters have dubbed this attack terrorism; because the perpetrators were two Muslims. Again, we find ourselves facing a double standard, the kind of discrimination that does not consciously register with the discriminators and is all the more harmful because of it. One of the biggest examples of this occurred as the story of the Anders Breivik attack broke; whilst news agencies were still unsure as to who the perpetrator was, one story went around that the attacker was a Muslim extremist. Immediately, several news organisations began reporting the incident as a terrorist attack, but as soon as it became widely known that the attacker was a white guy, the whole thing went back to being a ‘lone gunman’ story- one madman, a twisted political philosophy and a gun coming together in the most tragic of ways, as was actually the case. Again, we see evidence of this double standard, and of this unintentional, institutionalised racism.

A post like this doesn’t really have a natural ending beyond the idea that such a double standard is simply wrong; our society (well, most of it) prides itself on being accepting of all different cultures, and such attitudes run directly contrary to this. So, just to bring it all full circle, I shall end with this; rest in peace, Lee Rigby. May your soul find peace.


FILM FORTNIGHT: Rain Man

Hey, I run out of ideas sometimes, and it is partly for that reason that I like my little ongoing series (although I try to minimise continuity here as a rule). So, for the next two weeks, I’ll be putting up six film reviews as posts. These are films selected pretty much at random from my back catalogue of ‘stuff that is reasonably interesting to write about and that I’ve watched fairly recently’. First up, something that I only got round to watching a couple of days ago; four-time Oscar winner Rain Man.

Rain Man has a lot to answer for nowadays; not only does it suggest that card counting in blackjack is illegal (it isn’t, provided you use no electronic aids or camera equipment, though it will get you thrown out of every casino in the world) but it is also responsible for almost every presumption going on the subject of autism. Back in 1988 autism was a rather poorly-understood issue among the general public, so when a film about an autistic savant completely lacking in social skills but with a superhuman memory and mathematical ability came onto the scene, it was projecting itself onto a somewhat blank canvas. Unfortunately, this has led (perhaps understandably) to a public perception that autism automatically equals superlative brainpower alongside mental deficiency, which is both very wrong and a heinous oversimplification of a complex issue. However, the film does at least have the guts to put such a serious and previously largely unknown condition on the big screen, and deserves plaudits for managing to do so with respect and without being preachy.

But back to the actual film; our first lead character is Charlie Babbitt (Tom Cruise), a west-coast car dealer and insufferably arrogant berk with a somewhat stormy relationship with his father Sanford. When said father dies, Charlie learns that he will not be inheriting his father’s $3 million estate but has instead been left a ’57 Buick and a few rose bushes, as something of a final ‘get stuffed’ from beyond the grave. To say Charlie is slightly annoyed by this would be rather a gross understatement, and he is quick to seek out the Cincinnati mental institution which his father has made a trustee of the money, in search of a few answers. He finds them in the form of long-lost brother Raymond (Dustin Hoffman), the elder of the two siblings, of whose existence Charlie was hitherto completely unaware. Raymond is confined to the institution on account of severe autism, which gives him a religious obsession with schedule and routine and a near-total inability to communicate or interact with the outside world. This doesn’t sit too well with his brash, outgoing brother, who dislikes the idea of $3 million going to a man with ‘no concept of money’, so he whisks him away to LA (where his business is located) in an effort to wring some cash out of the institution in court. Unfortunately, Raymond refuses to fly or drive on major highways, turning what should have been a simple journey into a long-haul, cross-country road trip, which rather strains the nerves between the mumbling autistic and the guy who thinks it’s all just some big show.

And that’s… pretty much the film; indeed, the entire second act can basically be summarised as two guys in a car getting angry at one another for an hour. In this small way it is vaguely similar to The Motorcycle Diaries, but whilst Diaries featured two relatable, funny and genuinely meaningful central characters, here we have one person whom we don’t want to relate to (re the ‘insufferably arrogant’ comment above) and one to whom it’s almost physically impossible to do so. It is possible to build a meaningful, compelling storyline around an unsympathetic character, as District 9’s Wikus van der Merwe proved (if you don’t know what I’m talking about, then I highly recommend changing that), but it’s certainly difficult, and Rain Man doesn’t quite manage to pull it off. That neither character develops or builds any sort of relationship with the other for the first two-thirds of the film hardly helps matters, and this period is not one of compelling watchability once we’ve got over the essential theme of ‘Hey, autism! It’s weird!’. Yes, it’s all very real, and due kudos to the director, but it tends to get a little too real in patches, concentrating on the facts and parts of life that don’t make compelling screentime.

That the first part of the film is rather trying to watch is certainly no fault of execution; director Barry Levinson won an Oscar for his work on this film, and his direction is effective (if somewhat clunky and obvious in places). Added to that, we have the acting; Cruise offers up a complete performance, bringing his character’s personality to the fore through his every action, big or small, making up for the communicative difficulties of Hoffman’s character and carrying the film with aplomb. However, even his impressive performance is knocked into a cocked hat by a piece of typical Hoffman brilliance; Hoffman reportedly spent time with American memory savant Kim Peek (the man who inspired the film), among others, before taking on his role, and it shows. His performance is so whole and complete that it’s genuinely mind-blowing, and even to someone such as myself who has very little experience with severe autism it feels more real than any other acting role I’ve seen this year. His is a performance made in the details; not just all the little tics, but the repetition of them, the execution, the little movements of the head and hands that are never explicitly mentioned, even the way his expression changes- all of it manages to portray a powerfully consistent image of a trapped, distorted mind hidden away somewhere. This is the first time I have been genuinely moved by an acting performance alone, and it was a fantastic experience.

However, waxing lyrical about an acting performance still doesn’t overshadow the film’s initial failings, but I am happy to report that the film’s twist (which I won’t spoil for you) kicks in at around the hour-and-a-half mark and finally gives the experience some momentum. With events moving along nicely, with a definite direction and emotional dynamic in place rather than the aimless meandering we were previously subjected to, one is finally able to settle back, enjoy the experience, and immerse oneself in the film. It is something of a pity that Cruise’s character loses some of his steam when called upon to offer up some emotion, but he nonetheless does his job, and the conclusion has an eerie, if somewhat clichéd, beauty about it. Rain Man is a film that takes its time to get going, and one that I nearly gave up on after the first hour, but it’s most definitely worth sticking with; recommended, at least to those with the patience.

“A towel is about the most mind-bogglingly useful thing any interstellar hitchhiker can carry…”

Today is Towel Day, universally recognised as the single most important day of the year for fans of ‘The Hitchhiker’s Guide To The Galaxy’, when all members of the great following carry their towels in solidarity with the Hitchhiker’s cause; because you should always know where your towel is. You might expect that I would take this opportunity to write a long fan-service tribute to Douglas Adams, the series’ creator, but I already did one of those last year. No, today I turn to the very subject matter of the day itself and, indeed, a central theme in much of Adams’ work; towels.

I don’t think I can begin in any better place than to quote the Guide itself on the subject of towels:

“A towel, it [the Guide] says, is about the most massively useful thing an interstellar hitchhiker can have. Partly it has great practical value – you can wrap it around you for warmth as you bound across the cold moons of Jaglan Beta; you can lie on it on the brilliant marble-sanded beaches of Santraginus V, inhaling the heady sea vapours; you can sleep under it beneath the stars which shine so redly on the desert world of Kakrafoon; use it to sail a mini raft down the slow heavy river Moth; wet it for use in hand-to-hand combat; wrap it round your head to ward off noxious fumes or to avoid the gaze of the Ravenous Bugblatter Beast of Traal (a mindboggingly stupid animal, it assumes that if you can’t see it, it can’t see you – daft as a bush, but very ravenous); you can wave your towel in emergencies as a distress signal, and of course dry yourself off with it if it still seems to be clean enough.”

…before going on to talk about the immense psychological value gained by being the kind of frood who always knows where his towel is through whatever horrors the universe may throw at him. However, excellent though the passage is, the relevance of this to our earthly towel-related endeavours is somewhat limited. There is very little need, for example, for a Londoner to protect himself from the Ravenous Bugblatter Beast of a planet several hundred light-years away, or ability for the casual suburbanite to ‘bound across’ many cold moons. Due to the relatively limited advancement of earthly towel technology, some of the advice given there is not only irrelevant but downright unhelpful; a towel makes for a slow, unwieldy weapon in hand-to-hand combat (a problem only exacerbated once said towel is sopping wet), and one that can be countered easily with a well-timed rush if it fails, as is unfortunately likely, to disable one’s opponent with its first strike. And since most towels are made from fabric, rather than ‘solid’ material, the protection they offer against any particularly noxious fumes is minimal at best; OK for keeping out dust and smells, but not much more.

Many of the problems relating to towels, and their lack of prevalence in our modern-day existence, concern weight and volume. Your typical bath towel must, in order to wrap around bodies of all sizes adequately, measure about 0.8 x 1.5 m, as well as being able to absorb a good quantity of water with some comfort. In most ‘traditional’ towels, this is achieved by taking a fabric base layer and adding loops of absorbent material to it. These loops trap water close to the fabric via surface tension (probably; I’m trying to apply my limited knowledge of fluid dynamics here), and longer loops allow more water to be absorbed- the upper limit can be seen in your typical bath towel construction. This looped material is known as terrycloth, and is usually made from cotton (sometimes with a little polyester). However, all of these tiny little loops add up to a sizeable quantity of stuff, such that even when pressed flat they turn a towel into a pretty thick piece of material. This, combined with the size of any useful towel, means that even a folded-up one takes up a lot of space in, say, a suitcase, making it impractical as an everyday item to carry around with you. Its size also presents a psychological issue, making it difficult to carry discreetly. The over-the-shoulder approach displayed proudly by Towel Day practitioners will, in everyday life, make one a target for minor abuse and funny looks from people you may happen to bump into, and is hardly suitable for anything other than special occasions. Not only that, but a terrycloth construction makes any large towel heavy, decreasing its practicality both for transport and general use. Small wonder towels present a major problem to holidaymakers with limited suitcase space every year.

A solution that has presented itself in recent years to the towel/travelling problem is the microfibre towel, a typically smaller, lightweight version. These do not use looped material to absorb water; instead, the microfibre (Kevlar, interestingly, is a common choice of material for microfibre generally, although not necessarily for towels) is designed specifically to be very thin, allowing it to be woven incredibly finely, to have highly absorbent properties, and to dry out quickly. Microfibre towels offer many benefits; they are thin, lightweight and easy to manipulate, allowing them to be stored with ease, and because of their ultrafine construction they will probably present a better barrier to the aforementioned noxious fumes. However, this feature has also proved their downfall for the towel enthusiast; by marketing themselves as ‘travel towels’, designed solely for the purpose of taking up as little space as possible in a suitcase, they are virtually impossible to find in a form that is not too thin and too small to be much use for anything other than drying oneself. My own travel towel, which I do find makes an acceptable backup, never feels like it could hold my weight (always an important consideration when judging the usefulness of one’s towel; you never know when you’re going to need an impromptu rope) like my bath towel can, and it barely fits around my waist if I use it in the shower; it would make a useless raft-sail. In their current format, therefore, microfibre towels are not the solution to the problem of the convenient towel.

With contemporary towel technology, the best solution is probably the beach towel; they are reasonably lightweight, strong, comfortable, available in a variety of shapes and sizes to suit one’s needs, and they make up for in manoeuvrability what they lack in absorbency. A medium-sized beach towel is a good compromise for many basic needs, and I advocate keeping one in the car (seriously, you’d be amazed how often they can come in handy). But perhaps microfibre may be the way forward at some point in the future; especially if we can one day get rid of the tiresome stigma that comes with striding down the street, proudly carrying one’s towel slung boldly across the shoulder, ready to take on the universe.

Aurum Potestas Est

We as a race and a culture have a massive love affair with gold. It is the basis of our currency, the definitive mark of wealth and status, in some ways the bedrock of our society. We hoard it, we covet it, we hide it away except for special occasions, but we never really use it.

This is perhaps the strangest thing about gold; for something on which we have based our economy, it is remarkably useless. To be sure, gold has many advantageous properties; it is an excellent thermal and electrical conductor (though silver and copper both beat it) and is pretty easy to shape, leading it to be used widely in contacts for computing and on the engine cover of the McLaren F1 supercar. But other than these relatively minor uses, gold is something we keep safe rather than make use of; it has none of the ubiquity or usefulness of metals such as steel or copper. So why gold? Why not base our economy around iron, around copper, around praseodymium (a long shot, I will admit)- something a bit more functional? What makes gold so special?

In part we can blame gold’s chemical nature; as a transition metal it is hard, tough, and solid at room temperature, making it able to be mined, extracted, transported and used with ease, without degenerating or breaking too easily. It is also very malleable, meaning it can be shaped easily to form coins and jewellery; shaping into coins is especially important in order to standardise the weight of metal worth a particular amount. However, by far its most defining chemical feature is its reactivity; gold is very chemically stable in its pure, unionised, ‘native’ form, meaning it is unreactive, particularly with such common substances as oxygen and water; for this reason it is often referred to as a noble metal. This means gold is usually found native, making it easier to identify and mine, but it also means that gold products take millennia to oxidise and tarnish, if they do so at all. Therefore, gold holds its purity like no other chemical (shush, helium & co.), and this means it holds its value like nothing else. Even silver, another noble and comparatively precious metal, will blacken eventually and lose its perfection, but not gold. To an economist, gold is eternal, and this makes it the most stable and safe of all potential investments. Nothing can replace it, it is always a safe bet; a fine thing to base an economy on.

However, just as important as gold’s refusal to tarnish and thus protect its beauty is the simple presence of a beauty to protect. This is partly down to the uniqueness of its colour; in the world around us there are many greens, blues, blacks, browns and whites, as well as the odd purple. However, red and yellow are (fire and a few types of fish and flower excepted) comparatively rare, and only four chemical elements that we commonly come across are red or yellow in colour; phosphorus, sulphur, copper and gold. And rusty iron, but… just no. Of the others, phosphorus (red) is rather dangerous given its propensity to burst into flames, is also commonly found as a boring old white element, and is rather reactive, meaning it is not often found in its reddish form. Sulphur is also reactive, also burns and also readily forms compounds; but these compounds have the added bonus of stinking to high heaven. It is partly for this reason, and partly for the fact that it turns blood-red when molten, that brimstone (aka sulphur) is heavily associated with hell, punishment and general sinfulness in the Bible, and that it would be rather an unpopular choice to base an economy on. In any case, the two non-metals have none of the properties that the transition metals copper and gold do; those of being malleable, hard, having a high melting point, and being shiny and pwettiful. Gold edged out copper partly for its unreactivity, as explored above (over time copper loses its reddish beauty and takes on a dull, greenish tarnish), but also because of its deep, beautiful, lustrous finish. That beauty made it precious to us, made it something we desired and lusted after, and (combined with gold’s relative rarity, which could be an entire section of its own) made it valuable. This value allows relatively small amounts of gold to represent large quantities of worth, and justifies its use as coinage, bullion and an economic standard.

However, for me the key feature of gold’s place as our defining scale of value concerns its relative uselessness. Consider the following scenario; in the years preceding the birth of Christ, the technology, warfare and overall political situation of the day were governed by one material: bronze. It was used to make swords, armour, jewellery, the lot; until one day some smartarse figured out how to smelt iron. Iron ore was far easier to come by than the ingredients of bronze, allowing better stuff to be made more widely, and with some skill it could be turned into steel. Steel was stronger as well as more malleable than bronze, and could be tempered to change its properties; over time, skilled metalsmiths even learned how to make the edge of a sword blade harder than the centre, making it better at cutting whilst the core absorbed the impact. This all took several hundred years, but in the end the result was the same; bronze fell from grace and its societal value slumped. It is still around today, but it will never again enjoy its place as the metal that ruled the world.

Now, consider if that metal had been not bronze but gold. Something that had been ultra-precious, the king of all metals, reduced to something that was merely valuable. It would have been trumped by iron, and iron would have carried this connotation of being better than it; gold’s value would have dropped. In any economic system, even a primitive one, having the substance around which your economy is based change in value would be catastrophic; when Mansa Musa travelled from Mali on a pilgrimage to Mecca, he stopped off in Cairo, then home to the world’s foremost gold trade, and spent and gave away so much gold- gold the non-Malian world had never even known existed- that the price of gold collapsed, and it took more than a decade for the Egyptian economy to recover. If gold had a practical purpose, it could be usurped; we might find something better, we might decide we don’t need it any more, and thus gold’s value, once supported by those wishing to buy it for that purpose, would drop. Gold is used so little that this simply doesn’t happen, making it the most economically stable of substances; it is valuable precisely and solely because we want it to be and, strange though it may seem, gold is always in fashion. Economically as well as chemically, gold is uniquely stable- the perfect choice around which to base a global economy.

One Foot In Front Of The Other

According to many, the thing that really sets human beings apart from the rest of the natural world is our mastery of locomotion; the ability to move faster, further and with heavier loads than any other creature typically does (never mind that our historical method of doing this was strapping several other animals to a large heap of wood and nails) across every medium our planet has to throw at us; land, sky, sea, snow, whatever. Nowadays, this concept has become associated with our endeavours in powered transport (cars, aeroplanes and such), but the story of human locomotion begins with a far more humble method of getting about that I shall dedicate today’s post to; walking.

It is thought that the first walkers were creatures roughly approximating to our modern-day crustaceans; the early arthropods. In the early days of multicellular life on earth, these creatures ruled the seas (where all life had thus far been based), and fossils of the time show a wide variety of weird and wonderful creatures. The trilobites that one can nowadays buy as tourist souvenirs in Morocco are but one example; the top predators of the time were massive things, measuring several metres in length with giant teeth and layers of armour plate. All had hard exoskeletons, like the modern insects and crustaceans that are their descendants, bar a few small fish-like creatures a few millimetres in length which had developed the first backbones; in time, the descendants of these creatures would come to dominate life on earth. Since it was faster and allowed a greater range of motion, most early arthropods swam to get about; but others, like the metre-long Brontoscorpio (basically a giant underwater scorpion), preferred the slightly slower, but more efficient, approach of walking about on the seabed. Here, food was relatively plentiful in the form of small ‘grazers’, and attempting to push oneself through the water was wasteful of energy compared to trundling along the bottom. However, a new advantage also presented itself before too long; these creatures were able to cross land over short distances to reach prey- by coincidence, their primitive ‘lungs’ (which collected dissolved oxygen from water in much the same fashion as modern fish gills, but with a less fragile structure) worked just as well at harvesting oxygen from air as from water, enabling them to survive on land. As plant life began to venture out onto land to gain better access to air and light, so the vertebrates (in the form of early amphibians) and arthropods began to follow the food, until the land was well and truly colonised by walking life forms.

Underwater, walking was significantly easier than on land; water is a far denser fluid than air (hence why we can swim in the former but not the latter), and the increased buoyancy this offered meant that early walkers’ legs did not have to support as much of their body’s weight as they would do on land. This made it easier for them to develop the basic walking mechanic; one foot (or whatever you call the end of a scorpion’s leg) is pressed against the ground, then held stiff and solid as the rest of the body is rotated around its joint, moving the creature as a whole forward slightly as it pivots. In almost all invertebrates, and early vertebrates, the creature’s legs are positioned at the side of the body, meaning that as the creature walks it tends to swing from side to side. Invertebrates typically counter this problem in part by having a lot of legs and stepping them in an order that helps them travel in a constant direction, and by having multi-jointed legs that can flex and translate the lateral components of motion into more forward-directed movement, preventing them from swinging from side to side. However, this doesn’t work so well at high speed, when the sole priority is the speed of movement of one’s feet, which is why most reconstructions of the movement of vertebrates circa 300 million years ago (with just four single-jointed legs stuck out to the side of the body) tend to show their bodies swinging dramatically from side to side, spine twisting this way and that. This all changed with the coming of the dinosaurs, whose revolutionary evolutionary advantage was a change in the construction of the hip that allowed their legs to point underneath the body, rather than sticking out at the side. Now the pivoting action of the leg produces motion in the vertical, rather than horizontal, direction, so no more spine-twisting mayhem. This makes travelling quickly easier and allows the upper body to be kept in a more stable position, good for striking at fleeing prey, as well as being more energy efficient. Such an evolutionary advantage would prove so significant that, during the late Triassic period, it allowed dinosaurs to completely take over from the mammal-like reptiles who had previously dominated the world. It would take more than 150 million years, a hell of a lot of evolution and a frickin’ asteroid to finally let these creatures’ descendants, in the form of mammals, prevail over the dinosaurs (by which time they too had discovered the whole ‘legs pointing down’ trick).

When humankind first tried to develop walking robots in the mid-twentieth century, the mechanics of the process were poorly understood, and there are a great many funny videos of prototype sets of legs completely failing. Those designers had been operating under the idea that the role of the legs when walking was not just to keep a body standing up, but also to propel it forward, each leg pulling on the rest of the body when placed in front. However, after careful study of slow-motion footage of bipedal motion, it was realised that this was not the case at all, and that we instead have gravity to thank for pushing us forward. When we walk, we actually lean over our frontmost foot, in effect falling over it before sticking our other leg out to catch ourselves- hence why we tend to go face to floor if the other leg gets caught or stuck. Our legs really serve only to keep us off the ground, pushing us upwards so we don’t actually fall over, and our leg muscles’ function here is simply to put each foot in front of the other (OK, so your calves might give you a bit of an extra flick, but it’s not the key thing). When we run or climb, our motion changes; our legs bend, before our quadriceps extend them quickly, throwing us forward. Here we lean forward still further, but this is so that the push from our quads is directed forward rather than upward. This form of motion is less energy efficient, but covers more ground. This is the method by which we run, but it does not define running itself; running is simply defined as movement in which every step incorporates a bit of time when both feet are off the ground. Things get a little more complicated when we introduce more legs to the equation; a four-legged animal such as a horse has four main gaits. When walking there are always three feet on the ground at any one time, when trotting there are always two, when cantering at least one, and when galloping there are moments in each stride when all four feet are off the ground at once.
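To make the ‘controlled falling’ idea a little more concrete, here is a minimal Python sketch (with assumed, illustrative numbers for leg length, step angle and push-off speed- nothing measured) of the simplest model of a walking step: the body as a point mass vaulting over a rigid stance leg like an inverted pendulum, with gravity alone providing the forward rotation once the step is underway.

```python
import math

# Toy 'controlled falling' model of a walking step: the body is a point mass
# vaulting over a rigid stance leg, i.e. an inverted pendulum pivoting about
# the foot. All numbers below are illustrative assumptions, not measured data.
G = 9.81                       # gravitational acceleration, m/s^2
LEG_LENGTH = 0.9               # assumed hip height, m
STEP_ANGLE = math.radians(15)  # leg swings from -15 deg to +15 deg of vertical

def simulate_step(initial_angular_velocity, dt=1e-4):
    """Integrate theta'' = (g/L) * sin(theta) until the step completes.

    theta is the stance leg's angle from vertical; the mass starts behind the
    foot (negative theta), so gravity first slows, then speeds, its rotation.
    Returns (step_duration_s, final_angular_velocity), or None if the walker
    runs out of momentum and rocks backwards instead of stepping.
    """
    theta = -STEP_ANGLE
    omega = initial_angular_velocity
    t = 0.0
    while theta < STEP_ANGLE:
        alpha = (G / LEG_LENGTH) * math.sin(theta)  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        t += dt
        if omega <= 0.0:   # didn't make it over the top of the stance leg
            return None
    return t, omega

result = simulate_step(initial_angular_velocity=1.0)  # rad/s, assumed push-off
if result is None:
    print("Not enough momentum: the walker rocks back instead of stepping.")
else:
    duration, _ = result
    step_length = 2 * LEG_LENGTH * math.sin(STEP_ANGLE)
    print(f"Step takes {duration:.2f} s, covers {step_length:.2f} m "
          f"-> roughly {step_length / duration:.2f} m/s walking speed")
```

The toy model’s whole point is that the leg muscles only appear as that initial push-off; once the body has a little momentum, gravity does the work of carrying it over the stance foot.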

There is one downside to walking as a method of locomotion, however. When blogging about it, there isn’t much of a natural way to end a post.

F=ma

On Christmas Day 1642, a baby boy was born to a well-off Lincolnshire family at Woolsthorpe Manor. His childhood was somewhat chaotic; his father had died before he was born, and his mother remarried (to a stepfather he came to acutely dislike) when he was three. He later ran away from school, discovered he hated the farming life planned as the alternative, and returned to become the school’s top pupil. He would go on to attend Trinity College, Cambridge; oh, and to become arguably the greatest scientist and mathematician of all time. His name was Isaac Newton.

Newton started off in a small way, developing the generalised binomial theorem: a technique for expanding powers of binomials (expressions of the form (a + b)^n) which is a fundamental tool used pretty much everywhere in modern science and mathematics; the advanced mathematical equivalent of knowing that 2 x 4 = 8. Oh, and did I mention that he was still a student at this point? Taking a break from his Cambridge career for a couple of years thanks to the minor inconvenience of the Great Plague, he whiled away the hours inventing calculus, which he finalised upon his return to Cambridge. Calculus is the collective name for differentiation and integration, which let one find the rate at which something is changing (the gradient of a graph) and the area under a curve algebraically, plus reverse each of those processes. This makes it sound like rather a neat and useful gimmick, but that undersells it: calculus allows us to mathematically describe everything from water flowing through a pipe to how aeroplanes fly (the Euler equations mentioned in my aerodynamics posts come from advanced calculus), and its discovery alone would have been enough to warrant Newton’s place in the history books. OK, Leibniz discovered pretty much the same thing at roughly the same time, but he got there later than Newton. So there.
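As a toy illustration of both ideas, here is a short Python sketch (the numbers are arbitrary examples) that expands (a + b)^n term by term using binomial coefficients, and then checks a rate of change numerically against the answer calculus gives in closed form.

```python
import math

# Binomial theorem: (a + b)^n = sum over k of C(n, k) * a^(n-k) * b^k
def binomial_expand(a, b, n):
    """Evaluate (a + b)^n term by term using binomial coefficients."""
    return sum(math.comb(n, k) * a**(n - k) * b**k for k in range(n + 1))

a, b, n = 3.0, 2.0, 5
print(binomial_expand(a, b, n), (a + b) ** n)  # both give 3125.0

# Calculus, done numerically: the derivative of f(x) = x^3 is 3x^2.
# A finite difference approximates 'the rate at which something is changing'.
def derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**3
x = 2.0
print(derivative(f, x), 3 * x**2)  # ~12.0 from the estimate, 12.0 exactly
```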

However, discovering the most important mathematical tool available to modern scientists and engineers was clearly not enough to occupy Newton’s prodigious mind during his downtime, so he also turned his attention to optics, aka the behaviour of light. He began by discovering that white light is comprised of all colours, revolutionising the contemporary scientific understanding of light itself by suggesting that coloured objects do not create their own colour, but reflect only certain portions of already-coloured light. He combined this with his work on refraction: the fact that light shone through glass or another transparent material at an angle will bend. This then led him to explain how telescopes worked, why the existing designs (based around refracting light through a lens) were flawed, and to design an entirely new type of telescope (the reflecting telescope) that is used in all modern astronomical equipment, allowing us to study, look at and map the universe like never before. Oh, and he also took the time to theorise the existence of photons (he called them corpuscles), which wouldn’t be discovered for another 250 years.
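Refraction itself is neatly summarised by Snell’s law- n1·sin(θ1) = n2·sin(θ2)- which was already known by Newton’s time; a quick Python sketch (assuming a typical textbook refractive index for glass) shows how much a ray bends on entering glass at various angles.

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Refractive indices below are typical textbook values (assumed, not measured).
N_AIR, N_GLASS = 1.00, 1.52

def refraction_angle(incidence_deg, n1=N_AIR, n2=N_GLASS):
    """Angle (degrees from the normal) of the refracted ray, or None if the
    ray is totally internally reflected (only possible when n1 > n2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

for angle in (0, 30, 60):
    print(f"{angle:>2} deg in air -> {refraction_angle(angle):.1f} deg in glass")
```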

When that got boring, Newton turned his attention to a subject that he had first fiddled around with during his calculus years: gravity. Nowadays gravity is a concept taught to every schoolchild, but in Newton’s day the idea of a universal force pulling objects towards the earth was barely even considered. Aristotle’s theories dictated that every object ‘wanted’ to be in a state of stillness on the ground unless disturbed, and Newton was the first person to make a serious challenge to that theory in nearly two millennia (whether an apple tree was involved in his discovery is heavily disputed). Not only did he and colleague Robert Hooke define the force of gravity, but they also discovered the inverse-square law for its behaviour (i.e. if you double your distance from a planet, the gravitational force on you drops by a factor of 2 squared, or 4) and turned it into an equation (F = -GMm/r^2). This single equation would explain Kepler’s work on celestial mechanics, accurately predict the orbits of the ****ing planets (predictions based, just to remind you, on the thoughts of one bloke on earth with little technology more advanced than a pen and paper) and form the basis of his subsequent book: “Philosophiæ Naturalis Principia Mathematica”.
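A quick numerical sketch of that equation, using the standard value of G and rounded textbook figures for the Earth and Moon (so treat the output as approximate), illustrates both the force law and the inverse-square scaling.

```python
# Newton's law of universal gravitation: F = G * M * m / r^2
# G is the standard value; the Earth/Moon figures are rounded textbook
# numbers, so the result is approximate.
G = 6.674e-11                 # gravitational constant, N m^2 / kg^2
M_EARTH = 5.97e24             # kg
M_MOON = 7.35e22              # kg
EARTH_MOON_DISTANCE = 3.84e8  # m (average)

def gravitational_force(m1, m2, r):
    """Magnitude of the attractive force between two point masses."""
    return G * m1 * m2 / r**2

f = gravitational_force(M_EARTH, M_MOON, EARTH_MOON_DISTANCE)
print(f"Earth-Moon force: {f:.2e} N")  # roughly 2e20 N

# Inverse-square law: doubling the distance quarters the force.
f_double = gravitational_force(M_EARTH, M_MOON, 2 * EARTH_MOON_DISTANCE)
print(f"At twice the distance: {f / f_double:.1f}x weaker")  # 4.0x
```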

Principia, as it is commonly known, is probably the single most important piece of scientific writing ever produced. Not only does it set down all Newton’s gravitational theories and explore their consequences (in minute detail; the book in its original Latin is bigger than a pair of good-sized bricks), but it also defines the concepts of mass, momentum and force properly for the first time; indeed, his definitions survive to this day and have yet to be improved upon. He also set down his three laws of motion: an object’s velocity is constant unless a force acts upon it; the acceleration of an object is proportional to the force acting on it and inversely proportional to its mass (summarised in the title of this post); and action and reaction are equal and opposite. These three laws not only tore two thousand years of scientific theory to shreds, but nowadays underlie everything we understand about the mechanics of objects; indeed, no flaw was found in Newton’s equations until relativity was discovered 250 years later, and even that only really matters for objects travelling at around 100,000 kilometres per second or faster; not something Newton was ever likely to come across.
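And to see the second law doing some actual work, here is a minimal sketch (with arbitrary example values for the force, mass and duration) that steps F = ma forward in time for a constant force and recovers the familiar constant-acceleration results.

```python
# Newton's second law as a simulation: a = F / m, stepped forward in time.
# The force, mass and duration are arbitrary example values.
FORCE = 10.0      # N, constant push
MASS = 2.0        # kg
DT = 0.001        # s, time step
DURATION = 3.0    # s

velocity, position, t = 0.0, 0.0, 0.0
while t < DURATION:
    acceleration = FORCE / MASS       # F = ma, rearranged
    velocity += acceleration * DT
    position += velocity * DT
    t += DT

# Analytic check: v = a*t and s = a*t^2 / 2, with a = 5 m/s^2
print(f"simulated: v = {velocity:.2f} m/s, s = {position:.2f} m")
print(f"analytic : v = {5 * DURATION:.2f} m/s, s = {0.5 * 5 * DURATION**2:.2f} m")
```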

Isaac Newton’s life outside science was no less successful; he was something of an amateur alchemist, and when he was appointed Master of the Royal Mint (a post he held for 30 years until his death; there is speculation that his alchemical meddling may have resulted in mercury poisoning) he used those skills to great effect in assaying coinage, in an effort to fight Britain’s massive forgery problem. He was successful in this endeavour and later became the first man to put Britain onto the gold, rather than silver, standard, reflecting his knowledge of the superior chemical qualities of the former metal (see another previous post). He is still considered by many to be the greatest genius who ever lived, and I can see where those people are coming from.

However, the reason I find Newton especially interesting concerns his private life. Newton was a notoriously hard man to get along with; he never married, almost certainly died a virgin and is reported to have only laughed once in his life (when somebody asked him what the point was in studying Euclid- the joke is somewhat highbrow, I’ll admit). His was a lonely existence, largely friendless, and he lived, basically, for his work (he has been posthumously diagnosed with everything from bipolar disorder to Asperger’s syndrome). In an age when we are used to such charismatic scientists as Richard Feynman and Stephen Hawking, Newton’s cut-off, isolated existence, with only his prodigious intellect for company, seems especially alien. That the approach was effective is most certainly not in doubt; any one of his scientific discoveries would alone be enough to place him in science’s hall of fame, and to have made all of them puts him head and shoulders above his compatriots. In many ways, Newton’s story is one of the price of success. Was Isaac Newton a successful man? Undoubtedly, in almost every field he turned his hand to. Was he a happy man? We don’t know, but it would appear not. Given the choice between success and happiness, where would you fall?

Flying Supersonic

Last time (OK, quite a while ago actually), I explained the basic principle (from the Newtonian end of things; we can also explain it using pressure, but that’s more complicated) of how wings generate lift when travelling at subsonic speeds, arguably the most important principle of physics affecting our modern world. However, as the Second World War came to an end and aircraft started to get faster and faster, problems began to appear.

The first aircraft to approach the speed of sound (Mach 1, or around 700-odd miles an hour depending on altitude and air temperature) were WWII fighter aircraft; most only had top speeds of around 400-500mph whilst cruising, but could approach the magic number when going into a steep dive. When they did so, they found their aircraft began suffering from severe control issues and would shake violently; there are stories of Japanese Mitsubishi Zeroes that ploughed into the ground at full speed, unable to pull out of a deathly transonic dive. Subsequent aerodynamic analyses of these aircraft suggest that had any of them in fact broken the sound barrier, the aircraft would most likely have been shaken to pieces. For this reason, the concept of ‘the sound barrier’ developed.

The problem arises from the Doppler effect (which is also, incidentally, responsible for the red-shift of distant galaxies that tells us our universe is expanding), and the fact that as an aircraft moves it emits pressure waves, carried through the air by molecules bumping into one another. Since this is exactly the same mechanism by which sound propagates in air, these pressure waves move at the speed of sound, travelling outwards from the aircraft in all directions. If the aircraft is travelling forwards, then each time it emits a pressure wave it will be a bit further forward than the centre of the pressure wave it emitted last, causing the waves in front of the aircraft to bunch closer together and the waves behind it to spread out. This is the Doppler effect.

Now, when the aircraft starts travelling very quickly, this effect becomes especially pronounced, with wave fronts becoming compressed very close to one another. When the aircraft reaches the speed of sound, the same speed at which the waves propagate, it catches up with the wave fronts themselves, and they all pile up in the same place just in front of the aircraft. This causes them to build up on top of one another into a band of high-pressure air, which is experienced as a shockwave; the pressure drop behind this shockwave can cause water to condense out of the air, and is responsible for the famous photographs of vapour cones forming around jets at transonic speeds.
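A small Python sketch makes the pile-up explicit (taking the speed of sound as a round 340 m/s, a sea-level-ish assumption): it computes how far ahead of the aircraft an old pressure-wave front sits at various Mach numbers, and that gap shrinks to nothing as Mach 1 is reached.

```python
# How pressure-wave fronts bunch up ahead of a moving aircraft.
# The speed of sound is taken as a round 340 m/s (a sea-level-ish assumption).
SPEED_OF_SOUND = 340.0  # m/s

def gap_ahead(mach, wave_age):
    """Distance between the aircraft and the front of a pressure wave it
    emitted `wave_age` seconds ago (negative = the aircraft has overtaken it)."""
    aircraft_speed = mach * SPEED_OF_SOUND
    wave_front = SPEED_OF_SOUND * wave_age       # how far the wave has spread
    aircraft_travel = aircraft_speed * wave_age  # how far the plane has flown
    return wave_front - aircraft_travel

for mach in (0.5, 0.8, 0.95, 1.0, 1.2):
    gap = gap_ahead(mach, wave_age=1.0)
    print(f"Mach {mach:>4}: wave emitted 1 s ago is {gap:+7.1f} m ahead of the aircraft")
# At Mach 1 the gap is exactly zero: every wave front piles up on the nose,
# forming the shockwave; above Mach 1 the aircraft outruns its own waves.
```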

But the shockwave does not only occur at Mach 1; we must remember that the shape of an aerofoil is such as to cause air to travel faster over the top of the wing than it otherwise would. This means parts of the airflow over the wing reach supersonic speeds before the rest of the aircraft does, causing shockwaves to form over the wings at a lower speed. The speed at which this first occurs is known as the critical Mach number. Since these shockwaves are regions of high pressure, Bernoulli’s principle tells us they cause the air to slow down dramatically; this contributes heavily to aerodynamic drag, and is part of the reason why such shockwaves can cause major control issues. Importantly, we must note that shockwaves always cause air to slow down to subsonic speeds, since the shockwave is generated at the point where all the pressure waves build up, and so acts as a barrier between the super- and sub-sonic portions of the airflow. However, there is another problem with this slowing of the airflow; it causes the air to have a higher pressure than the supersonic air in front of the shockwave. Since there is always a force from high pressure to low pressure, this can cause (at speeds sufficiently far above the critical Mach number) parts of the airflow close to the wing (the boundary layer, which also experiences surface friction from the wing) to change direction and start travelling forwards. This causes the boundary layer to recirculate, forming a turbulent portion of air that generates very little lift and quite a lot of drag, and causes the rest of the airflow to separate from the wing surface; an effect known as boundary layer separation (or Mach stall, since it causes similar problems to a regular stall), responsible for even more problems.

The practical upshot of all of this is that flying at transonic speeds (close to and around the speed of sound) is problematic and inefficient; but once we push past Mach 1 and start flying at fully supersonic speeds, things change somewhat. The shockwave over the wing moves to its trailing edge, since all of the air flowing over the wing is now travelling at supersonic speeds, and ceases to pose problems; but now we face the issues posed by a bow wave. At subsonic speeds, the pressure waves being emitted by the aircraft help to push air out of the way, meaning it is generally deflected around the wing rather than just hitting it and slowing down dramatically; but at supersonic speeds we leave those pressure waves behind us, and we lose this advantage. This means supersonic air hits the front of the wing and is slowed down or even stopped, creating a region of subsonic air in front of the wing and (you guessed it) another shockwave between this and the supersonic air ahead. This is known as a bow wave, and once again generates a ton of drag.

We can combat the formation of the bow wave by using a supersonic aerofoil; these are diamond-shaped, rather than the cambered subsonic aerofoils we are more used to, and generate lift in a different way (the ‘skipping stone’ theory is actually rather a good approximation here, except that we use the force generated by the shockwaves above and below an angled wing to generate lift). The sharp leading edge of these wings prevents bow waves from forming, and such aerofoils are commonly used on missiles, but they are inefficient at subsonic speeds and make takeoff and landing nigh-on impossible.

The other way to get round the problem is somewhat neater; when we go past the speed of sound, the shockwave created by the aeroplane is no longer flat but forms an angled cone shape- and the faster we go, the narrower the cone becomes (the ‘Mach angle’ is given by the formula sin(a) = c/v, where c is the speed of sound and v the aircraft’s speed, for those who are interested). Now, if we remember that shockwaves cause the air behind them to slow down to subsonic speeds, it follows that if our wings lie just behind the shockwave, the air passing over them at right angles to the shockwave will be travelling at subsonic speeds, and the wing can generate lift perfectly normally. This is why the wings on military and other high-speed aircraft (such as Concorde) are ‘swept back’ at an angle; it allows them to generate lift much more easily when travelling at high speeds. Some modern aircraft even have variable-sweep wings (or ‘swing wings’), which can be spread out flat when flying subsonically (which is more efficient) before being tucked back into a swept position for supersonic flight.
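For those who like numbers, a small sketch of that formula (again assuming a round 340 m/s for the speed of sound) shows how sharply the Mach cone narrows with speed, and therefore roughly how much sweep a wing needs to stay behind it.

```python
import math

# Mach cone angle: sin(mu) = c / v = 1 / M, where c is the speed of sound
# and v the aircraft's speed. c is assumed to be a round 340 m/s here.
SPEED_OF_SOUND = 340.0  # m/s

def mach_angle_deg(mach):
    """Half-angle of the Mach cone in degrees (only defined for M >= 1)."""
    if mach < 1.0:
        raise ValueError("No Mach cone below Mach 1")
    return math.degrees(math.asin(1.0 / mach))

for mach in (1.0, 1.5, 2.0, 3.0):
    mu = mach_angle_deg(mach)
    # To keep a wing's leading edge behind the shock cone, it must be swept
    # back from straight-out by at least (90 - mu) degrees.
    print(f"Mach {mach} ({mach * SPEED_OF_SOUND:4.0f} m/s): "
          f"cone half-angle {mu:4.1f} deg, minimum sweep ~{90 - mu:4.1f} deg")
```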

Aerodynamics is complicated.