There is an art, or rather, a knack, to flying…

The aerofoil is one of the greatest inventions mankind has come up with in the last two centuries; in the early 19th century, aristocratic Yorkshireman (as well as inventor, philanthropist, engineer and generally quite cool dude) George Cayley identified the way bird wings generated lift merely by moving through the air (rather than just by flapping), and set about trying to replicate this lift force. To this end, he built a ‘whirling arm’ to test wings and measure the upwards lift force they generated, and found that a cambered wing shape similar to that of birds (as in modern aerofoils) was more efficient at generating lift than one with flat surfaces. This was enough for him to engineer the first manned, sustained flight, sending his coachman across Brompton Dale in 1853 in a homemade glider (the coachman reportedly handed in his notice upon landing with the immortal line “I was hired to drive, not fly”), but he still didn’t really have a proper understanding of how his wing worked.

Nowadays, lift is understood better by both science and the general population; but many people who think they know how a wing works don’t quite understand the full principle. There are two incomplete/incorrect theories that people commonly believe in; the ‘skipping stone’ theory and the ‘equal transit time’ theory.

The ‘equal transit time’ theory is popular because it sounds very sciency and realistic; because a wing is a cambered shape, the tip-tail distance following the wing shape is longer over the top of the wing than it is when following the bottom surface. Therefore, air travelling over the top of the wing has to travel further than the air going underneath. Now, since the aircraft is travelling at a constant speed, all the air must surely be travelling past the aircraft at the same rate; so, regardless of what path the air takes, it must take the same time to travel the same lateral distance. Since speed=distance/time, and air going over the top of the wing has to cover a greater distance, it will be travelling faster than the air going underneath the wing. Bernoulli’s principle tells us that if air travels faster, the air pressure is lower; this means the air on top of the wing is at a lower pressure than the air underneath it, and this difference in pressure generates an upwards force. This force is lift.
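To see how badly the numbers misbehave, here’s a back-of-the-envelope sketch in Python. Every figure in it (speed, wing area, mass, the ~2% path difference) is an illustrative guess for a small light aircraft rather than measured data; the idea is simply to apply the equal-transit-time assumption plus Bernoulli and compare the lift it predicts with the lift the aircraft actually needs.

```python
# Rough illustration of the 'equal transit time' + Bernoulli calculation.
# All numbers are plausible-but-made-up values for a small light aircraft.

RHO = 1.225          # air density at sea level, kg/m^3
AIRSPEED = 60.0      # cruise speed, m/s
PATH_RATIO = 1.02    # top-surface path assumed ~2% longer than the bottom one
WING_AREA = 16.0     # wing area, m^2
MASS = 1000.0        # aircraft mass, kg
G = 9.81             # gravitational acceleration, m/s^2

# Equal transit time: air over the top must cover the longer path in the
# same time, so it is assumed to travel PATH_RATIO times faster.
v_bottom = AIRSPEED
v_top = AIRSPEED * PATH_RATIO

# Bernoulli: faster air means lower pressure; this is the pressure difference.
delta_p = 0.5 * RHO * (v_top**2 - v_bottom**2)   # Pa

predicted_lift = delta_p * WING_AREA             # N
required_lift = MASS * G                         # N

print(f"Lift predicted by 'equal transit time': {predicted_lift:.0f} N")
print(f"Lift actually needed to stay airborne:  {required_lift:.0f} N")
# With these (made-up) numbers the theory delivers only about 15% of the
# required lift - the tell-tale sign that the 'equal time' assumption is wrong.
```

Run with those entirely illustrative numbers, the theory comes up with only a small fraction of the weight the aircraft has to support, which is the first hint that something in the reasoning is broken.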

The key flaw in this theory is the completely wrong assumption that the air over the top and bottom of the wing must take the same time to travel across it. If we analyse the airspeed at various points over a wing we find that air going over the top does, in fact, travel faster than air going underneath it (the reason for this comes from Euler’s fluid dynamics equations, a simplified form of the Navier-Stokes equations that can be applied to aerofoil behaviour. Please don’t ask me to explain them). However, the two airflows do not arrive at the trailing edge of the wing at the same time; the air over the top actually gets there first, so the ‘equal time’ assumption fails and the theory doesn’t correctly calculate the amount of lift generated by the wing. This is compounded by the theory not explaining any of the lift generated from the bottom face of the wing, or why the angle the wing is set at (the angle of attack) affects the lift it generates, or how one is able to generate some lift from just a flat sheet set at an angle (or any other symmetrical wing profile), or how aircraft fly upside-down.

Then we have the (somewhat simpler) ‘skipping stone’ theory, which attempts to explain the lift generated from the bottom surface of the wing. Its basic postulate concerns the angle of attack; with an angled wing, the bottom face of the wing strikes some of the incoming air, causing air molecules to bounce off it. This is like the bottom of the wing being continually struck by lots of tiny ball bearings, much the same thing that happens when a skimming stone bounces off the surface of the water, and it generates a net force; lift. Not only that, but this theory claims to explain the lower pressure found on top of the wing; since air is blocked by the tilted wing, not so much gets to the area immediately above/behind it. This means there are fewer air molecules in a given space, giving rise to a lower pressure; another way of explaining the lift generated.

There isn’t much fundamentally wrong with this theory, but once again the mathematics don’t check out; it does not accurately predict the amount of lift generated by a wing, and it fails to explain why a cambered wing set at a zero angle of attack is still able to generate lift. It does, however, provide a surprisingly good model when we consider supersonic flight.

Lift can be explained as a combination of these two effects, but to do so is complex and unnecessary; we can find a far better explanation just by considering the shape the airflow makes when travelling over the wing. Air when passing over an aerofoil tends to follow the shape of its surface (Euler again), meaning it deviates from its initially straight path to follow a curved trajectory. This curved motion means the direction of the airflow must be changing; and since velocity is a vector quantity, any change in the direction of the air’s movement represents a change in its overall velocity, regardless of any change in airspeed (which contributes separately). Any change in velocity means the air is being accelerated, and since Force = mass x acceleration the wing must be exerting a force on the air to produce that acceleration; by Newton’s third law, the air pushes back on the wing with an equal and opposite force, and that reaction is lift. This ‘turning’ theory not only describes lift generation on both the top and bottom wing surfaces, since air is turned upon meeting both, but also why changing the angle of attack affects lift; a steeper angle means the air has to turn more when following the wing’s shape, meaning more lift is generated. Go too steep, however, and the airflow breaks away from the wing and undergoes a process called flow separation… but I’m getting ahead of myself.
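To get a feel for the size of force this turning can produce, here’s a minimal sketch along the same Force = mass x acceleration lines. It treats the wing as simply deflecting a stream of oncoming air downwards by a few degrees; the density, speed, stream-tube size and deflection angles are all illustrative assumptions, not real wing data.

```python
import math

# Toy 'air turning' lift estimate: the wing deflects a stream of air
# downwards, and the reaction to that momentum change is lift
# (force = mass flow rate x change in velocity).
# All numbers below are illustrative assumptions, not real wing data.

RHO = 1.225              # air density, kg/m^3
AIRSPEED = 60.0          # m/s
STREAM_TUBE_AREA = 20.0  # cross-section of air assumed to be turned by the wing, m^2
DEFLECTION_DEG = 4.0     # assumed average downward turning of the flow, degrees

# Mass of air processed per second (mass flow rate).
mass_flow = RHO * AIRSPEED * STREAM_TUBE_AREA     # kg/s

# Downward velocity given to that air by the turn.
downwash = AIRSPEED * math.sin(math.radians(DEFLECTION_DEG))  # m/s

# Newton: force = rate of change of momentum; the upward reaction is lift.
lift = mass_flow * downwash
print(f"Estimated lift at {DEFLECTION_DEG:.0f} degrees of turning: {lift/1000:.1f} kN")

# A steeper angle of attack turns the air more, so lift goes up - until the
# flow separates, but that is a story for another post.
for angle in (2, 4, 8, 12):
    l = mass_flow * AIRSPEED * math.sin(math.radians(angle))
    print(f"{angle:>2} degrees of turning -> {l/1000:5.1f} kN")
```

Crude as this is, it captures the essential bookkeeping: more turning means more momentum change per second, and more momentum change per second means more lift.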

This explanation works fine so long as our aircraft is travelling at less than the speed of sound. However, as we approach Mach 1, strange things start to happen, as we shall find out next time…

Big Brother is Watching You…

Twenty or so years ago, the title of this post would have been associated with only one thing, namely the single finest piece of literature written during the 20th century (there you go, I said it). However, in the past decade and a bit, this has all changed somewhat, and Big Brother now no longer refers to some all-seeing eye of oppression and dictatorship but to some strange TV show about people doing weird things in a house. Except that that ‘strange TV show’ happens to be the most famous product of one of the biggest cultural phenomena of the noughties; reality TV.

The concept of reality TV is an inherently simple one; get a bunch of not entirely normal people, point some cameras at them, give them a few things to do and enjoy yourself giggling at their unscripted activities and general mayhem. If the people in question happen to be minor celebrities then so much the jollier; anything to draw in the viewers. However, it is not just this basic format that makes reality TV what it is, but its obsessive, ever-present nature; reality TV is there all day, every day, recapped on sister shows, constantly mentioned in the ad breaks, and making itself felt throughout the media. In the past, the people and events concerned with the genre have made headline news and have been talked about by the Prime freakin’ Minister (Gordon Brown, specifically), and for a couple of years it was hard to get away from its all-pervading grip.

The first TV show that could be defined as ‘reality television’ was Candid Camera, which first came into being back in 1948. This basically involved a guy (Allen Funt) wandering round a city performing pranks of various descriptions on unsuspecting members of the public, whilst someone hid in the background filming their reactions; how this was possible given the camera technology of the 40s always baffles me. This format is still in existence today, in shows such as Dom Joly’s Trigger Happy TV, and since then the genre in its broadest terms has gained a few more sub-genres; unscripted police/crime TV in the style of Crimewatch came along in the 50s, and the 60s experimented with a style that we would now consider more of an observational documentary. During the 60s and 70s, Chuck Barris created the first reality game shows, such as ‘The Dating Game’ (a forerunner to Blind Date); these introduced the idea of, rather than simply filming normal people in normal environments doing normal things, putting those people in a structured situation specifically designed to entertain (even if said entertainment came at the expense of a little dignity). The reality shows that were popularised throughout the late nineties and early noughties took the concept to extremes, taking people completely out of their normal environment and putting them in a tightly controlled, heavily-filmed, artificial construct to film everything about them.

One of the early pioneers of this type of television was the American show Survivor. Here, the isolation environment was a tropical island, with contestants split into ‘tribes’ and tasked to build a habitable living environment and compete against one another for rewards. Survivor also introduced the concept of ‘voting off’ contestants; after each challenge, tribes would gather to select which participant they wanted to get rid of, causing the number of participants to dwindle throughout until only one ‘Sole Survivor’ remained. The idea here was to derive entertainment from inter-group conflicts, initially as people attempted to get their living space sorted (and presumably bitched about who wasn’t pulling their weight/was being a total jerk about it all), later as people began to complain about the results of challenges. The key feature that distinguishes this show as reality TV in the modern sense concerns the focus of the show; the challenges and such are merely background to try and provoke the group tensions and dynamics that are the real hook producers are aiming for. The show also displayed another feature commonly demonstrated on reality TV (and later shown more clearly on game shows such as The Weakest Link) that added a tactical element to proceedings; early on, voting off weaker members is advantageous as it increases your success rate and thus potential prize, but later it makes sense to vote off the other competitors who might beat you.

In Britain, Castaway soon followed in a similar vein, but removed the element of competition; ‘castaways’ were merely whisked off to a Scottish island for a year and tasked to build a self-sustaining community in that time. The show was originally intended to not be reality TV in the traditional sense, instead being billed as ‘an experiment’ to see what a selected cross-section of British society would come up with. However, in response to falling ratings later in the year, the show’s producers increased the number of cameras around the island and became increasingly focused on group dynamics and disputes. The reason for this can be explained in two words: Big Brother.

Big Brother took the concept of Survivor and tweaked its focus, playing down the element of challenge and playing up the element of semi-voyeurism. The tropical island was replaced by a house, with a large open-plan central area that made all drama very public and obvious. And everything was filmed; every weird conversation that presenters could make fun of, every time somebody complained about who was leaving the toilet seat up (I don’t know, I never watched it)- all was filmed and cleverly edited together to create a kind of blooper reel of people’s lives for viewers to snigger at. Playing down the element of competition also introduced the practice of letting viewers, rather than contestants, vote people off, both increasing the watchability of the show by adding some minor element of interactivity and turning the whole thing into some kind of strange popularity contest, where the criteria for popularity are ‘how fun are you to watch messing around on screen?’.

Nowadays reality TV is on the way out; Channel 4 cancelled Big Brother in the UK some years ago after ratings slumped for later seasons, and with the TV talent show looking not far from following it, popular culture has yet to find a format for the ‘teenies’ (has nobody managed to think of a better name than that yet?) to latch onto and let define the televisual era. Let’s hope that, when it does, it has a little more dignity about it.

Man, and I don’t even watch reality TV…

War Games

So, what haven’t I done a post on in a while. Hmm…

Film reviewing?

WarGames was always going to struggle to age gracefully; even in 1983 setting one’s plot against the backdrop of the Cold War was something of an old idea, and the fear of the unofficial conflict degenerating into armageddon had certainly lessened since the ‘Red Scare’ days of the 50s and 60s. Then there’s the subject matter and plot- ‘supercomputer almost destroys world via nuclear war’ must have seemed terribly futuristic and sci-fi, but three decades of filmmaking have rendered the idea somewhat cliched; it’s no coincidence that the film’s 2008 ‘sequel’ went straight to DVD. In an age where computers have now become ubiquitous, the computing technology on display also seems hilariously old-fashioned, but a bigger flaw is the film’s presentation of how computers work. Our AI antagonist, ‘Joshua’, shows the ability to think creatively, talk and respond like a human and to learn from experience & repetition, all features that 30 years of superhuman technological advancement in the field of computing have still not been able to pull off with any real success; the first in a long series of plot holes. I myself spent much of the second act inwardly shouting at the characters for making quite so many either hideously dumb or just plain illogical decisions, ranging from agreeing on a whim to pay for a flight across the USA for a friend met just days earlier, to deciding that the best way to convince a bunch of enraged FBI officers that you are not a Soviet-controlled terrorist bent on the destruction of the USA is to break out of their custody.

The first act largely avoided these problems, and the setup was well executed; our protagonist is David (Matthew Broderick), a late-teenage high school nerd who manages to avoid the typical Hollywood idea of nerd-dom by being articulate, well-liked, not particularly concerned about his schoolwork and relatively normal. Indeed, the only clues we have to his nerdery come thanks to his twin loves of video gaming and messing around in his room with a computer, hacking into anything undefended that he considers interesting. The film also manages to avoid reverting to formula with regards to the film’s female lead, his friend Jennifer (Ally Sheedy), who manages to not fall into the role of designated love interest whilst acting as an effective sounding board for the audience’s questions; a nice touch when dealing with subject matter that audiences of the time would doubtless have found difficult to understand. This does leave her character somewhat lacking in depth, but thankfully this proves the exception rather than the rule.

Parallel to this, we have NORAD, the USA’s nuclear defence headquarters, which, after realising the risk of human missile operators being unwilling to launch their deadly weapons, decides to place the entire nuclear arsenal under computerised control. The computer in question is the WOPR, a supercomputer intended to continually play ‘war games’ to identify the optimal strategy in the event of nuclear war. So we have a casual computer hacker at one end of the story and a computer with far too much control for its own good at the other; you can guess how things are going to go from there.

Unfortunately, things start to unravel once the plot starts to gather speed. Broderick’s presentation of David works great when he’s playing a confident, playful geek, but when he starts trying to act scared or serious his delivery becomes painfully unnatural. Since he and Sheedy’s rather depthless character get the majority of the screen time, this leaves large portions of the film lying fallow; the supporting characters, such as the brash General Beringer (Barry Corbin) and the eccentric Dr. Stephen Falken (John Wood), do a far better job of filling out their respective character archetypes, but they can’t quite overshadow the plot holes and character deficiencies of the twin leads. This is not to say the film is bad, far from it; director John Badham clearly knows how to build tension, using NORAD’s Defcon level as a neat indicator of just how high the stakes are/how much **** is waiting to hit the proverbial fan. Joshua manages to be a compelling bad guy, in spite of being faceless and having less than five minutes of actual screen time, and his famous line “A strange game. The only winning move is not to play” carries enough resonance and meaning that I’d heard of it long before I’d heard of the film it came from. It also attempts the classic trick, demonstrated to perfection in Inception, of dealing with subject matter that attempts to blur the line between fiction (the ‘war games’) and reality (nuclear war) in an effort to similarly blur its own fiction with the reality of the audience; it is all desperately trying to be serious and meaningful.

But in the end, it all feels like so many add-ons, and somehow the core dynamics and characterisation left me outside the experience. WarGames tries so very hard to hook the viewer in to a compelling, intriguing, high-stakes plot, but for me it just failed to quite pull it off. It’s not a bad film, but to me it all felt somehow underwhelming. The internet tells me that for some people, it’s a favourite, but for me it was gently downhill from the first act onwards. I don’t really have much more to say.

Shining Curtains

When the Vikings swept across Europe from the 8th century onwards, they brought with them many stories; stories of their Gods, of the birth of the world, of Asgard, of Valhalla, of Jormungandr the world-serpent, of Loki the trickster, Odin the father and of Ragnarok- the end of this world and the beginning of the next. However, the reason I mention the Vikings today is in reference to one particular set of stories they brought with them; of shining curtains of brilliant, heavenly fire, dancing across the northern sky as the Gods fought with one another. Such lights were not common in Europe, but they were certainly known, and throughout history have provoked terror at the anger of the various Gods that was clearly being displayed across the heavens. Now, we know these shining curtains as the aurora borealis, a name coined in 1621; Aurora was the Roman goddess of the dawn, whilst Boreas was the Greek name for the north wind (because the aurora was only observed in the far north- a similar feature known as the aurora australis is seen near the south pole).

Nowadays, we know that the auroras are an electromagnetic effect, which was demonstrated quite spectacularly in 1859. On the 28th of August and 2nd of September that year, spectacular auroras erupted across much of the northern hemisphere, reaching their peak at one o’clock in the morning EST, and as far south as Boston the light was enough to read by. However, the feature I am interested in here concerns the American Telegraph Line, stretching almost due north between Boston, Massachusetts, and Portland, Maine. Because of the great length and orientation of this line, the electromagnetic field generated by the aurora was sufficient to induce a current in the telegraph, to the extent that operators at both ends of the line agreed to switch off their batteries (which were only interfering) and operate solely on aurora-power for around two hours. Aside from a gentle fluctuation of current, no problems were reported with this system.

We now know that the ultimate cause of the aurorae is our sun, and that two bursts of exceptional solar activity were responsible for the 1859 aurora. We all know the sun emits a great deal of energy from the nuclear fusion going on in its core, but it also emits a whole lot of other stuff, including a lot of ionised (charged) gas, or plasma. This outflow of charged particles forms what is known as the solar wind, flowing out into space in all directions; it is this solar wind that generates the tail on comets, and is why such a tail always points directly away from the sun. However, things get interesting when the solar wind hits a planet such as Earth, which has a magnetic field surrounding it. Earth’s magnetic field looks remarkably similar to that of a large, three-dimensional bar magnet, and when a large amount of charged particles passes through this field it is subject to something known as the motor effect. As every GCSE physics student knows, it is this effect that allows us to generate motion from electricity, and the same thing happens here: the mass of moving charge acts as a current cutting across the earth’s magnetic field, and this generates a sideways force on it (which is basically what the motor effect is). However, as the charge moves, so does the direction of the ‘current’, and thus the direction of the force changes too; this process ends up causing the charge to spin around the earth’s magnetic field lines, spiralling along them as it goes. Following these field lines, the charge ends up spiralling towards the poles of the earth, where the field lines bend down towards the earth itself. As the plasma follows them, it comes into contact with the upper reaches of Earth’s atmosphere, by way of one region in particular; the magnetosphere.
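If you want to see that spiralling fall out of the maths, here’s a toy numerical sketch of a charged particle in a uniform magnetic field, using nothing more than the sideways force on a moving charge (F = qv × B). The field strength, charge-to-mass ratio and starting velocity are just convenient round numbers, nothing like real solar wind values.

```python
import numpy as np

# Toy simulation of a charged particle in a uniform magnetic field.
# The force F = q v x B is always sideways to the motion, so it keeps
# turning the velocity: the particle circles around the field line while
# drifting along it, i.e. it spirals. Numbers are convenient round values,
# not real solar-wind figures.

q_over_m = 1.0                       # charge-to-mass ratio (arbitrary units)
B = np.array([0.0, 0.0, 1.0])        # field pointing along z (our 'field line')
v = np.array([1.0, 0.0, 0.2])        # some velocity across the field, some along it
r = np.zeros(3)
dt = 0.001

path = []
for _ in range(20000):
    a = q_over_m * np.cross(v, B)    # acceleration, always perpendicular to v
    v = v + a * dt
    r = r + v * dt
    path.append(r.copy())

path = np.array(path)
# In the x-y plane the particle goes round in a circle (centred on (0, -1) here)...
radius = np.hypot(path[:, 0], path[:, 1] + 1.0)
print(f"gyration radius stays within {radius.min():.2f} - {radius.max():.2f}")
# ...while steadily moving along z: a corkscrew along the field line.
print(f"distance travelled along the field line: {path[-1, 2]:.1f}")
```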

The magnetosphere is the region surrounding the upper reaches of our atmosphere, above the ionosphere, in which the Earth’s magnetic field dominates. Here, the magnetic fields of both the charged plasma and the magnetosphere itself combine in a rather complicated process known as magnetic reconnection, the importance of which will be discussed later. Now, let us consider the contents of the plasma, all these charged particles and in particular high-energy electrons that are now bumping into atoms of air in the ionosphere. This bumping gives the atoms energy, which they deal with by having electrons within them jump up energy levels and enter an excited state. After a short while, the atoms ‘cool down’ by having electrons drop down energy levels again, releasing packets of electromagnetic energy as they do so. We observe this release of EM radiation as visible light, and hey presto! we can see the aurorae. What colour the aurora ends up being depends on what atoms we are interacting with; oxygen is more common higher up and generates green and red aurorae depending on height, so these are the most common colours. If the solar wind is able to get further down in the atmosphere, it can interact with nitrogen and produce blue and purple aurorae.
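For a sense of scale on those ‘packets of electromagnetic energy’: the colour is set by the energy of each emitted photon, E = hc/λ. The wavelengths below are the often-quoted auroral emission lines for oxygen and nitrogen; treat the exact values as illustrative rather than gospel.

```python
# Photon energy for commonly quoted auroral emission lines.
# E = h * c / wavelength; wavelengths here are illustrative textbook values.

H = 6.626e-34      # Planck's constant, J s
C = 3.0e8          # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

lines = {
    "oxygen (green, roughly 100 km up)": 557.7e-9,
    "oxygen (red, higher still)": 630.0e-9,
    "nitrogen (blue/purple, lower down)": 427.8e-9,
}

for name, wavelength in lines.items():
    energy_ev = H * C / wavelength / EV
    print(f"{name}: {wavelength*1e9:.1f} nm -> photon energy {energy_ev:.2f} eV")
```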

The shape of the aurorae can be put down to the whole business of spiralling around field lines; as the field lines bend in towards the earth’s poles, the incoming plasma ends up describing roughly circular paths around the north and south poles. However, plasma does not conduct electricity very well across magnetic field lines, which is what such a circular path requires, so we would not expect the aurora to be very bright under normal circumstances. The reason this is not the case, and that aurorae are as visible and beautiful as they are, can be put down to the process of magnetic reconnection, which makes the plasma more conductive and allows these charged particles to flow more easily around in a circular path. This circular path around the poles causes the aurora to follow approximately east-west lines into the far distance, and thus we get the effect of ‘curtains’ of light following (roughly) this east-west pattern. The flickery, wavy nature of these aurorae is, I presume, due to fluctuations in the solar wind and/or actual winds in the upper atmosphere. The end result? Possibly the most beautiful show Earth has to offer us. I love science.

The Ultimate Try

Over the years, the game of rugby has seen many fantastic tries. From Andy Hancock’s 85 yard dash to snatch a draw from the jaws of defeat, to Philippe Saint-Andre’s own piece of Twickenham magic in 1991, voted Twickenham’s try of the century, and of course via ‘that try’ scored by Gareth Edwards in the opening minutes of the 1973 New Zealand-Barbarians match, we don’t even have to delve into the reams of amazing tries at club level to experience a vast cavalcade of sporting excellence and excitement when it comes to crossing the whitewash. And this has got me thinking; what is the recipe for the perfect try? The ideal, the pinnacle, the best, most exciting and most exquisite possible way to touch down for five points?

Well, it seems logical to start at the beginning, the try’s inception. To me, a try should start from humble beginnings, a state where the crowd are not excited, and then build to a fantastic crescendo of joy and amazement; so our start point should be humble as well. The job of our first play is to pique the crowd’s attention, to give us the first sniff of something about to happen, to offer potential to a situation where, apparently, nothing is on. Surprisingly few situations on a rugby field can offer such innocuous beginnings, but one slightly unusual example was to be found in the buildup to Chris Ashton’s famous try against Australia at Twickenham two years ago; Australia were on the offensive, but England won a turnover ruck. The pressure eased off; now, surely, England would kick it safe. A brief moment of innocuousness, before Ben Youngs spotted a gap.

But the classic in this situation, and the spawn of many a great try, is the moment of receiving a long kick. Here, again, we expect a responding kick, and thus have our period of disinterest before the step and run that begins our try. It was such a reception from Phil Bennett, along with two lovely sidesteps, that precipitated Gareth Edwards’ 1973 try, and I think this may prove the ideal starting point for my try.

Now, to the midsection of this try, which should be fast and fluid. Defender after defender should come and be beaten; and although many a good try has been scored with a ruck halfway through it, the best are uninterrupted start to finish as we build and build both tension and excitement. Here, the choice to begin by receiving a kick plays in our favour, since this naturally produces multiple staggered waves of defenders to beat one at a time as we advance up the pitch. Another key feature for success during this period is variety, for this is when a team shows off its full breadth of skill; possibly the only flaw with the 1973 special is that all defenders are beaten by simple passing. By contrast, Saint-Andre’s try featured everything from slick passing through individual speed and skilful running, capped by a lovely chip to finish things off; it is vitally important that a kick is not utilised too early, where it may slow the try’s pacing. A bit of skill during the kick collection itself helps too, adding a touch of difficulty and class to the move whilst also giving a moment of will he/won’t he tension to really crank it up; every little helps in the search for perfection. A good example of a properly good kick collection occurred in the Super 15 recently, with a sublime one handed pickup on the bounce for Julian Savea as he ran in for the 5 points. For my try, I think we’ll have a bit of everything; a sneaky sidestep or two, some pace to beat a defender on the wing, a bit of outrageous ambition (through-the-legs pass would work well, I think), some silky hands and a nice kick to finish things off; a crossfield would work nicely, I feel.

And the finish, the finish- a crucial and yet under-considered element to any great try. For a try to feel truly special, to reach its crowning crescendo, the eventual try scorer must have a good run-in to finish the job. It needn’t be especially long, but prior to the touchdown all the great tries have that moment where everybody knows that the score is about to come- the moment of release that means, when the touchdown does eventually come, our emotions are ones of joy at the moment rather than relief that he’s got it down. However, such an ending does not follow naturally from a crossfield kick, as I have chosen to include in my try, so there will need to be one finishing touch to allow a run-in.

Well, we have all the ingredients ready, now to face the final product. So everyone reading this, I invite you to sit back, fill your mind with a stadium and a team, and let Cliff Morgan’s dulcet tones fill your ears with my own little theoretical contribution to the pantheon of rugby greatness:

(I have chosen for my try to be scored in the 2003 World Cup final for England against Australia, or at the least using the teams that finished that match because… well why the hell not?)

“And Robinson collects the kick, deep in his 22… Roff with the chase… Oh, and the step from Robinson, straight past Roff and off he goes… Steps inside, around Smith, this is great stuff from Robinson… and the tackle comes in from Waugh- but a cracking offload and Greenwood’s away up the wing! Greenwood, to Back, flick to Catt… Catt’s over the halfway line, but running into traffic… the pop to Dallaglio, and *oof*! What a hit there, straight through Harrison! Nice pop, back to Greenwood, it’s Greenwood on Larkham… the long pass, out to Cohen on the left… Cohen going for the ball, under pressure from Flatley- and oh, that’s fantastic, through the legs, to Wilkinson! Wilkinson over the 22, coming inside, can he get round Rogers? Wilkinson the golden boy… Oh, the kick! Wilkinson, with the crossfield kick to Lewsey! It’s Lewsey on Tuqiri, in the far corner, Lewsey jumps… Lewsey takes, Lewsey passes to Robinson! What a score!- Lewsey with the midair flick, inside to Robinson, and it’s Robinson over for the try! Robinson started the move, and now he has finished with quite the most remarkable try! What a fantastic score…”

OK, er, sorry about that, I’ll try to be less self-indulgent next time.

Call of Duty: Modern Moneymaking

The first person shooter (FPS, or shoot-em-up) genre is the biggest and most profitable in the gaming industry, which is itself now (globally) the biggest entertainment industry on earth. Every year Activision pay off their expenditure for the next decade by releasing another Call of Duty game, and every so often Battlefield, Medal of Honour and Halo like to join the party to similar financial effect. Given that many critics have built their name on slagging off such games, and that even the most ardent fans will admit that perhaps only four of the nine CoD games have actually improved on the previous one, this fact seems a trifle odd to my eye (which may have something to do with me being awful at them), and since I cannot apply science to this problem I thought I might retreat to my other old friend; history.

The FPS genre took a while to get going; partly due to the graphical fidelity and processing power required to replicate a decent first-person perspective, it wasn’t until 1992 that Wolfenstein 3D, the game often credited with ‘inventing’ the genre, was released, long after the first four home console generations had passed. However, the genre had existed after a fashion before then; a simple game called Maze War, akin to Pac Man from a rudimentary 3D perspective and with guns, is considered an early example and was released as far back as 1974. Other, similar games, including the space simulator Spasim (the same thing but in space) and tank simulator Battlezone (very slightly different and with tanks), were released over the next decade. Most of these, as well as most subsequent efforts pre-Wolfenstein, used a tile-based movement system, whereby one’s movement was restricted to moving from one square to the next, since this was pretty much all that was possible with contemporary technology.

Further advances dabbled in elements of multiplayer, and introduced such features as texture mapping to enhance graphical fidelity, but Wolfenstein’s great success lay in its gameplay format. Gone was any tile-based or otherwise restrictive movement, and in its place were maps that one was free to move around in all directions and orientations in two dimensions. It also incorporated a health meter (and healing pickups), depleting ammo and interchangeable weapons, all of which would become mainstays of the genre over the next few years. Despite its controversial use of Nazi iconography (because the bad guys were Nazis, rather than the developers being fascists), the game was wildly successful; at least for the short time before the same company, id Software, released Doom. Doom used a similar interface to Wolfenstein, had better graphics and a more detailed 3D environment, but its real success lay in its release format; the first third of the game was distributed for free, encouraging gamers to experience all that the game had to offer before gladly paying for the remainder. With its consolidation and enhancement of Wolfenstein’s format and its adoption of a now-ubiquitous multiplayer mode, Doom is often considered the most influential FPS of all time, and one of the most important games full stop; its fame is such that versions of the game have been available on almost every major console for the last 20 years.

Over the next few years, many other features that would later become staples of the FPS genre were developed. The Apple Mac, not usually a traditional stronghold for gaming, was the platform for Marathon, which introduced a number of new game modes (including cooperative multiplayer), more complex weapons and placed a heavy emphasis on story as well as gameplay. Star Wars: Dark Forces introduced the ability to crouch for the first time, thus setting the template for today’s FPS pattern of repeatedly hiding behind chest-high walls, and 1995’s Descent changed the graphical playing field by moving from using sprites to represent objects and NPCs in the gameworld to a 3D system based around polygonal graphics. This technology was one of the many technologies used in id’s 1996 follow-up, Quake, which also increased the studio’s emphasis on online multiplayer. Unfortunately, the multiplayer market would soon be totally conquered by 1997’s GoldenEye, a tie-in to the James Bond film of the same name; the game itself experimented with new, claustrophobic game environments and required you to manually reload your weapon, but it was the multiplayer that proved its success. It has now been revealed that the multiplayer was actually nothing more than a hasty add-on knocked up in a matter of weeks, but the circuitous maps and multiple weapons & characters on offer made it endlessly compelling, and GoldenEye went on to become one of the best selling games the Nintendo 64 ever had.

But the defining FPS of this era was undoubtedly Half Life; released in 1998, the game combined Quake’s graphical technology with a bulletproof gameplay format and one of the strongest narratives and plots of any game ever made. The single player experience alone was enough to raise Valve, the game’s makers, to iconic status almost overnight (a label they retain to this day due to their penchant for innovation and not being dicks about their business tactics), and when a multiplayer mod for it was developed (Counter-Strike), it and its successor (Counter-Strike: Source) became the most popular multiplayer FPS experience ever.

After Half Life, some felt that the FPS genre had been taken about as far as it could in its current iteration, and that the genre’s immediate future was to be based around increasing graphical quality, fiddling with storylines and making money. However, in 2000 Microsoft acquired Bungie studios (who had made Marathon back in 1994) and, in 2001, released their real-time-strategy-turned-third-person-shooter-turned-first-person-shooter as a launch title for the new Xbox console. The game incorporated a heavy focus on characterisation (helped by it occasionally leaving first person perspective for cutscenes, which Half Life never did) with a new style of enemies (well-rendered and varied alien opponents), a wide variety of weapons and the then-unusual feature of an automatically recharging shield system. The game was called Halo, and it revolutionised the FPS genre.

Since then, advancements have been less revolutionary and more gradual, as the FPS genre has diversified. Halo has now gone through several incarnations whilst keeping the basic format the same, but the gameplay principle has been applied in almost every conceivable way. Battlefield and Call of Duty applied the concept to military-style gameplay with a strong multiplayer emphasis, whilst the likes of Resident Evil and Left 4 Dead added a horror theme (or at least used zombies as bad guys). The games based on the Crytek engine (Crysis and Far Cry) turned the focus away from linear mission design and on to beautifully rendered open-world levels (some would argue in direct contrast to CoD’s increasingly linear single player mode), and recently Spec Ops: The Line has followed in Half Life’s plot-centric footsteps with a nonlinear storyline based around the mental impact of post-traumatic stress disorder.

Some argue that the current FPS genre is stagnating; indeed super-critical game reviewer Yahtzee Croshaw has recently created a new genre called ‘spunkgargleweewee’ to cover generic linear modern military shooters (ie Call of Duty and her extended family) and indicate his contempt for their current form of existence. But to many they are the pinnacle of current-generation gaming, or at least the most fun way yet devised to spend an afternoon. By way of an example as to how much people… enjoy these things, the most recent Call of Duty game was released with a feature for the PS3 to allow the map packs used for multiplayer to be downloaded to the console’s hard disk. This was a feature the hardcore fan base had requested of Activision, who were somewhat perplexed by the request; the feature, they pointed out, was not going to make the game run any faster. But the fan base said they realised this, and it wasn’t a performance issue; it was just that they were playing the game so much that the process of continually reading the map data from the game disc was beginning to wear out the laser used to read the disc information. Thank you, Call of Duty fans, for making me feel especially productive after spending an afternoon writing an article for nobody on the internet to read.

“Lies, damn lies, and statistics”

Ours is the age of statistics; of number-crunching, of quantifying, of defining everything by what it means in terms of percentages and comparisons. Statistics crop up in every walk of life, to some extent or other, in fields as widespread as advertising and sport. Many people’s livelihoods now depend on their ability to crunch the numbers, to come up with data and patterns, and much of our society’s increasing ability to do awesome things can be traced back to someone making the numbers dance.

In fact, most of what we think of as ‘statistics’ are not really statistics at all, but merely numbers; to a pedantic mathematician, a statistic is defined as a mathematical function of a sample of data, not the whole ‘population’ we are considering. We use statistics when it would be impractical to measure the whole population, usually because it’s too large, and when we instead are trying to mathematically model the whole population based on a small sample of it. Thus, next to no sporting ‘statistics’ are in fact true statistics as they tend to cover the whole game; if I heard during a rugby match that “Leicester had 59% of the possession”, that is nothing more than a number; or, to use the mathematical term, a parameter. A statistic would be to say “From our sample [of one game] we can conclude that Leicester control an average of 59% of the possession when they play rugby”, but this is quite evidently not true since we couldn’t extrapolate Leicester’s normal behaviour from a single match. It is for this reason that complex mathematical formulae are used to determine the uncertainty of a conclusion drawn from a statistical test, and these are based chiefly on the size and variability of the sample we are testing relative to the population we are trying to model. These uncertainty levels are often brushed under the carpet when pseudoscientists try to make dramatic, sweeping claims about something, but they are possibly the most important feature of modern statistics.
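Here’s a toy illustration of the parameter/statistic distinction, and of why sample size matters; the ‘season’ of possession figures below is entirely invented.

```python
import random
import statistics

random.seed(42)

# Invented 'population': Leicester's possession share in every game of a
# season. In real life we never get to see this whole list up front.
season = [random.gauss(52, 6) for _ in range(30)]   # mean ~52%, spread ~6%

true_mean = statistics.mean(season)                 # a PARAMETER of the population

def sample_estimate(n):
    """Take a sample of n games and report the sample mean (a STATISTIC)
    plus a rough 95% confidence interval for the population mean."""
    sample = random.sample(season, n)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)
    margin = 1.96 * sd / (n ** 0.5)                 # crude normal approximation
    return mean, margin

print(f"True (whole-season) average possession: {true_mean:.1f}%")
for n in (2, 5, 15):
    mean, margin = sample_estimate(n)
    print(f"sample of {n:>2} games: estimate {mean:.1f}% +/- {margin:.1f}%")
# A single game's 59% is just a number; only with a sample and an attached
# uncertainty does it become a statistic you can (cautiously) extrapolate from.
```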

Another weapon for the poor statistician can be the mis-application of the idea of correlation. Correlation is basically what it means when you take two variables, plot them against one another on a graph, and find you get a nice neat line joining them, suggesting that the two are in some way related. Correlation tends to get scientists very excited, since if two things are linked then it suggests that you can make one thing happen by doing another, an often advantageous concept; this is known as a causal relationship. However, whilst correlation and causation often do go hand in hand, the first lesson every statistician learns is this; correlation DOES NOT imply causation.

Imagine, for instance, you have a cold. You feel like crap, your head is spinning, you’re dehydrated and you can’t breathe through your nose. If we were, during the period before, during and after your cold, to plot a graph of your relative ability to breathe through the nose against the severity of your headache (yeah, not very scientific I know), these two facts would both correlate, since they happen at the same time due to the cold. However, if I were to decide that this correlation implies causation, then I would draw the conclusion that all I need to do to give you a terrible headache is to plug your nose with tissue paper so you can’t breathe through it. In this case, I have ignored the possibility (and, as it transpires, the eventuality) of there being a third variable (the cold virus) that causes both of the other two variables, and this is very hard to investigate without poking our head out of the numbers and looking at the real world. There are statistical techniques that enable us to do this, but they are for another time.
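A few lines of simulation make the point; everything here (the ‘cold severity’ numbers and the way they drive the other two variables) is invented purely for illustration.

```python
import random
import statistics   # statistics.correlation needs Python 3.10+

random.seed(1)

# A hidden third variable (how bad your cold is) drives BOTH symptoms.
cold_severity = [random.uniform(0, 10) for _ in range(200)]

# Worse cold -> more blocked nose, and (independently) worse headache.
blocked_nose = [c + random.gauss(0, 1) for c in cold_severity]
headache = [c + random.gauss(0, 1) for c in cold_severity]

r = statistics.correlation(blocked_nose, headache)
print(f"correlation between blocked nose and headache: {r:.2f}")

# The two symptoms correlate strongly, yet neither causes the other;
# plugging your nose with tissue paper will not give you a headache.
# The correlation exists only because both share a common cause.
```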

Whilst this example was more childish than anything, mis-extrapolation of a correlation can have deadly consequences. One example, explored in Ben Goldacre’s Bad Science, concerns beta-carotene, an antioxidant found in carrots. In 1981, an epidemiologist called Richard Peto published a meta-analysis (post for another time) of a series of scientific studies that suggested people with high beta-carotene levels showed a reduced risk of cancer. At the time, antioxidants were considered the wonder-substance of the nutrition world, and everyone got on board with the idea that beta-carotene was awesome stuff. However, all of the studies examined were observational ones; taking a lot of different people, seeing what their beta-carotene levels were and then examining whether or not they had cancer or developed it in later life. None of the studies actually gave their subjects beta-carotene and then saw if that affected their cancer risk, and this prompted the editor of Nature magazine (the scientific journal in which Peto’s paper was published) to include a footnote reading:

Unwary readers (if such there are) should not take the accompanying article as a sign that the consumption of large quantities of carrots (or other dietary sources of beta-carotene) is necessarily protective against cancer.

The editor’s footnote quickly proved a well-judged one; a study conducted in Finland some time afterwards actually gave participants at high risk of lung cancer beta-carotene, and found that their risks of both getting the cancer and of dying were higher than for the ‘placebo’ control group. A later study, named CARET (Carotene and Retinol Efficacy Trial), also tested groups at a high risk of lung cancer, giving half of them a mixture of beta-carotene and vitamin A and the other half placebos. The idea was to run the trial for six years and see how many illnesses/deaths each group ended up with; but after preliminary data found that those having the antioxidant tablets were 46% more likely to die from lung cancer, they decided it would be unethical to continue the trial and it was terminated early. Had the Nature article been allowed to get out of hand before this research was done, then it could have put thousands of people who hadn’t read the article properly at risk; and all because of the dangers of assuming correlation=causation.

This wasn’t really the gentle ramble through statistics I originally intended it to be, but there you go; stats. Next time, something a little less random. Maybe.

Air Warfare Today

My last post summarised the ins and outs of the missile weaponry used by most modern air forces today, and the impact that this had on fighter technology with the development of the interceptor and fighter-bomber as separate classes. This technology was flashy and rose to prominence during and after the Korean war, but the powers-that-be still used large bomber aircraft during that conflict and were convinced that carpet bombing was the most effective strategy for a large-scale land campaign. And who knows; if WWIII ever ends up happening, maybe that sheer scale of destruction will once again be called for.

However, this tactic was not universally appreciated. As world warfare descended ever more into world politics and scheming, several countries began to adopt the fighter-bomber as their principal strike aircraft. A good example is Israel, a long-time ally of the US, whose fighter-bombers took out the air bases of their Soviet-backed Arab neighbours in the opening hours of the 1967 Six-Day War, giving them air superiority in the region that proved very valuable in the years to come as the conflict escalated. These fighters were valuable to such countries, who could not afford the cost of a large-scale bombing campaign; faster, precision guided destruction made far better fiscal sense and annoyed the neighbours less when they were parked on their doorstep (unless your government happened to be quite as gung-ho as Israel’s). Throughout the 1960s, this realisation of the value of fighter aircraft led to further developments in their design; ground-assault weapons, in the form of air-to-surface missiles and laser-guided bombs, began to be standard equipment on board fighter aircraft once their value as principal strike weapons was realised and demand for them to perform as such increased. Furthermore, as wars were fought and planes were brought down, it was also realised that dogfighting was not in fact a dead art when one’s opponents (ie the Soviet Union and her friends) also had good hardware, so manoeuvrability was once again reinstated as a design priority. Both of these advances were greatly aided by the rapid advancements in the electronics of the age, which quickly found their way into avionics; the electronic systems used by aircraft for navigation, monitoring and (nowadays) helping to fly the aircraft, among other things.

It was also at this time that aircraft began experimenting with the idea of VTOL: Vertical Take Off and Landing. This was an advantageous property for an aircraft to have since it limited the space it needed for its take off and landing, allowing it to land in a wider range of environments where there wasn’t a convenient long stretch of bare tarmac. It was also particularly useful for aircraft carriers, which had been shown during WW2’s battle of Midway to be incredibly useful military tools, since any space not used for runway could be used to carry more precious aircraft. Many approaches were tried, including some ‘tail-sitting’ aircraft that took off and landed standing upright on their tails, but the only one to achieve mainstream success was the British Harrier, with four rotatable engine nozzles that could be aimed downwards for vertical takeoff. These offered the Harrier another trick- it was the only aircraft with a reverse gear. A skilled pilot could, if being tailed by a hostile, angle his nozzles forward so his engines were pushing him in the opposite direction to his direction of travel, causing him to rapidly slow down and for his opponent to suddenly find himself with an enemy behind him eyeing up a shot. This isn’t especially relevant, I just think it’s really cool.

However, the event that was to fundamentally change late 20th century air warfare like no other was the Vietnam war; possibly the USA’s biggest ever military mistake. The war itself was chaotic on almost every level, with soldiers being accused of everything from torture to drug abuse, and by the mid 1960s it had already been going on, on and off, for over a decade. The American public was rapidly becoming disillusioned with the war in general, as the hippy movement began to lift off, but in August 1964 the USS Maddox allegedly fired at a couple of torpedo boats that were following it through the Gulf of Tonkin. I say allegedly, because there is much speculation as to the identity of the vessels themselves; as then-president Lyndon B. Johnson said, “those sailors out there may have been shooting at flying fish”. In any case, the outcome was the important bit; when (now known to be false) reports came in two days later of a second attack in the area, Congress backed Johnson in the Gulf of Tonkin Resolution, which basically gave the President the power to do what he liked in South-East Asia without an official declaration of war (which would have required a formal vote in Congress). This resulted in a heavy escalation of the war both on the ground and in the air, but possibly the most significant side-effect was ‘Operation Rolling Thunder’, which authorised a massive-scale bombing campaign to be launched on the Communist North Vietnam. The Air Force Chief of Staff at the time, Curtis LeMay, had been calling for such a saturation bombing campaign for a while by then, and said “we’re going to bomb them back into the Stone Age”.

Operation Rolling Thunder ended up dropping, mainly via B-52 bombers, a million tonnes of bombs across North Vietnam and along the Ho Chi Minh trail (used to supply the militant NLF, aka Viet Cong, operating in South Vietnam) as it ran through neighbouring Cambodia and Laos, in possibly the worst piece of foreign politics ever attempted by a US government- and that’s saying something. Not only did opinion of the war, both at home and abroad, take a large turn for the worse, but the bombing campaign itself was a failure; the Communist support for the NLF did not come from any physical infrastructure, but from an underground system that could not be targeted by a carpet bombing campaign. As such, NLF support along the Ho Chi Minh trail continued throughout Rolling Thunder, and after three years the whole business was called off as a very expensive failure. The shortcomings of the purpose-built bomber as a concept had been highlighted in painful detail for all the world to see; but two other aircraft used in Vietnam showed the way forward. The F-111 had variable geometry (‘swing’) wings, meaning their shape could be changed depending on the speed the aircraft was going; this let it perform well at a wide variety of airspeeds, both super- and sub-sonic (see my post regarding supersonic flight for the ins and outs of this), even if its own early combat record in Vietnam was a troubled one. The McDonnell F-4 Phantom, meanwhile, claimed more kills than any other fighter aircraft during Vietnam, and was (almost entirely accidentally) the first multi-role aircraft, operating both as the all-weather interceptor it was designed to be and the strike bomber its long range and large payload capacity allowed it to be.

The key advantage of multi-role aircraft is financial; in an age where the massive wars of the 20th century are slowly fading into the past (ha, ha) and defence budgets are growing ever-slimmer, it makes much more sense to own two or three aircraft that can each do five things very well than 15 that can only do one each to a superlative degree of perfection. This also makes an air force more flexible and able to respond faster; if an aircraft is ready for anything, then it alone is sufficient to cover a whole host of potential situations. Modern day aircraft such as the Eurofighter Typhoon take this a stage further; rather than being able to be set up differently to perform multiple different roles, they try to have a single setup that can perform any role (or, at least, that any ‘specialised’ setup also allows for other scenarios and necessities should the need arise). Whilst the degree of unspecialisation of the hardware does leave multirole aircraft vulnerable to more specialised variations if the concept is taken too far, the advantages of multirole capabilities to a modern air force operating within the modern political landscape are both obvious and pressing. Pursuit and refinement of this capability has been the key challenge facing aircraft designers over the last 20 to 30 years, but there have been two new technologies that have made their way into the field. The first of these is built-in aerodynamic instability (or ‘relaxed stability’), which has been made possible by the invention of ‘fly-by-wire’ controls, by which the joystick controls electronic systems that then tell the various components to move, rather than being mechanically connected to them. Relaxed stability basically means that, left to its own devices, an aircraft will oscillate from side to side or even crash by uncontrollable sideslipping rather than maintain level flight, but it makes the aircraft more responsive and manoeuvrable. To ensure that the aircraft concerned do not crash all the time, computer systems constantly monitor the pitch and yaw of the aircraft and make the tiny corrections necessary to keep it flying straight. It is an oft-quoted fact that if the 70 computer systems on a Eurofighter Typhoon that do this were to crash, the aircraft would quite literally fall out of the sky.
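As a (very) loose illustration of what those computer systems are doing, here’s a toy feedback loop: an artificially unstable ‘aircraft’ whose nose angle runs away on its own, kept straight by a controller making small corrections many times a second. It’s a textbook proportional-derivative loop on a one-equation toy model, nothing like a real flight control law.

```python
# Toy model of relaxed stability plus fly-by-wire correction.
# The 'aircraft' is deliberately unstable: any small nose-up/nose-down
# deviation grows by itself. A simple proportional-derivative controller
# (standing in for the real flight computers) nudges it back many times a
# second. This is a cartoon, not a real flight control law.

DT = 0.01            # time step, seconds (100 corrections per second)
INSTABILITY = 3.0    # how quickly a deviation feeds on itself
KP, KD = 12.0, 4.0   # controller gains, hand-tuned for this toy model

def simulate(controlled: bool, steps: int = 400) -> float:
    angle, rate = 0.5, 0.0            # start slightly nose-up (degrees)
    for _ in range(steps):
        accel = INSTABILITY * angle   # unstable airframe: error breeds error
        if controlled:
            # Fly-by-wire: small corrective input based on the error and on
            # how fast the error is changing.
            accel -= KP * angle + KD * rate
        rate += accel * DT
        angle += rate * DT
    return angle

print(f"after 4 s without correction: {simulate(False):8.1f} degrees of pitch error")
print(f"after 4 s with correction:    {simulate(True):8.3f} degrees of pitch error")
```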

The other innovation to hit the airframe market in recent years has been the concept of stealth, taking one of two forms. Firstly we consider the general design of modern fighters, carefully shaped to minimise their radar cross-section and make them less visible to enemy radar. They also tend to shroud their engine exhausts so that their hot jets are harder to spot, whether by eye or by heat-seeking sensors, from a distance. Then, we consider specialist designs such as the famous American Lockheed Nighthawk, whose strange angular shape, covered in flat black panels of radar-absorbing material, is designed to scatter and absorb radar and make it effectively ‘invisible’, especially at night. This design was, incidentally, one of the first to be inherently unstable in flight, and required a fly-by-wire control system that was revolutionary for its time.

Perhaps the best example of how far air warfare has come over the last century is to be found in the first Gulf War, during 1991. At night, Nighthawk stealth bombers would cross into Hussein-held territory to drop their bombs, invisible to Hussein’s radar and anti-aircraft systems, but unlike wars of old they didn’t just drop and hope. Instead, they were able to target bunkers and other such fortified military installations with just one bomb; a bomb that they could aim at and drop straight down a ventilation shaft. Whilst flying at 600 miles an hour.

Fire and Forget

By the end of my last post, we’d got as far as the 1950s in terms of the development of air warfare, an interesting period of transition, particularly for fighter technology. With the development of the jet engine and supersonic flight, the potential of these faster, lighter aircraft was beginning to outstrip that of the slow, lumbering bombers they ostensibly served. Lessons were quickly learned during the chaos of the Korean war, the first major conflict of the second half of the twentieth century, during which American & Allied forces fought a back-and-forth swinging conflict against the North Koreans and Chinese. Air power proved a key feature of the conflict; the new American jet fighters took apart the North Korean air force, consisting mainly of old propellor-driven aircraft, as they swept north past the 38th parallel and toward the Chinese border, but when China joined in they brought with them a fleet of Soviet Mig-15 jet fighters, and suddenly the US and her allies were on the retreat. The American-led UN campaign did embark on a bombing campaign using B-29 bombers, utterly annihilating vast swathes of North Korea and persuading the high command that carpet bombing was still a legitimate strategy, but it was the fast aerial fighter combat that really stole the show.

One of the key innovations that won the Allies the Battle of Britain during WWII proved particularly valuable in the realm of air warfare during the Korean war; radar. British radar technology during the war was designed to utilise massive-scale machinery to detect the approximate positions of incoming German raids, but post-war developments had refined it to use far smaller bits of equipment to identify objects more precisely and over a smaller range. This was then combined with the exponentially advancing electronics technology and the deadly, but so far difficult to use accurately, rocketeering technology developed during the two world wars to create a new weapon; the guided missile, drawing in part on the German wartime rocketry behind the V2. The air-to-air missile (AAM) subsequently proved both more accurate & destructive than the machine guns previously used for air combat, whilst air-to-surface missiles (ASMs) began to offer fighters the ability to take out ground targets in the same way as bombers, but with far superior speed and efficiency; with the development of the guided missile, fighters began to gain a capability in firepower to match their capability in airspeed and agility.

The earliest missiles were ‘beam riders’, using radar equipment attached to either an aircraft or (more typically) ground-based platform to aim at a target and then simply allowing a small bit of electronics, a rocket motor and some fins on the missile to follow the radar beam. These were somewhat tricky to use, especially as quite a lot of early radar sets had to be aimed manually rather than ‘locking on’ to a target, and the beam tended to fade when used over long range, so as technology improved post-Korea these beam riders were largely abandoned; but in their early years of service these weapons proved deadly, accurate alternatives to machine guns, capable of attacking from great range and from many angles. Most importantly, the technology showed great potential for improvement; as more sensitive radiation-detecting equipment was developed, IR-seeking missiles (aka heat seekers) appeared, and once they were sensitive enough to detect something cooler than the exhaust gases from a jet engine (early versions had to be fired from directly behind the target; tricky in a dogfight) they proved tricky customers to deal with. Later developments of the ‘beam riding’ system detected radiation reflected from the target and tracked it with their own inbuilt receiver, which did away with the decreasing accuracy of an expanding beam, in a system known as semi-active radar homing; another modern guidance technique, used to target radar installations or communications hubs, is simply to follow the trail of radiation they emit and explode upon hitting something. Most modern long-range missiles, however, use fully active radar homing (ARH), whereby the missile carries its own radar system capable of sending out a beam to find a target, locking onto its ever-changing position, steering itself to follow the reflected radiation and doing the final, destructive deed entirely of its own accord. The greatest advantage of this is what is known as the ‘fire and forget’ capability, whereby one can fire the missile and start doing something else, safe in the knowledge that somebody will be exploding in the near future, with no further input required from the aircraft.

As missile technology has advanced, so too have the techniques for fighting back against it; dropping reflective material behind an aircraft can confuse some basic radar systems, whilst dropping flares can distract heat seekers. As an ‘if all else fails’ procedure, heavy material can be dropped behind the aircraft for the missile to hit and blow up. However, only one aircraft has ever managed a totally failsafe method of avoiding missiles; the previously mentioned Lockheed SR-71A Blackbird, the fastest aircraft ever, had as its standard missile avoidance procedure to speed up and simply outrun the things. You may have noticed that I think this plane is insanely cool.

But now to drag us back to the correct time period. With the advancement of military technology and shrinking military budgets, it was realised that one highly capable jet fighter could do the work of many of a more basic design, and many foresaw the day when all fighter combat would concern beyond-visual-range (BVR) missile warfare. To this end, the interceptor began to evolve as a fighter concept; very fast aircraft (such as the ‘two engines and a seat’ design of the British Lightning) with a high ceiling, large missile inventories and powerful radars, they aimed to intercept (hence the name) long-range bombers travelling at high altitudes. To ensure the lower skies were not left empty, the fighter-bomber also began to develop as a design; this aimed to use the natural speed of fighter aircraft to make hit-and-run attacks on ground targets, whilst keeping a smaller arsenal of missiles to engage other fighters and any interceptors that decided to come after them. Korea had made the top brass decide that dogfights were rapidly becoming a thing of the past, and that future air combat would become a war of sneaky delivery of missiles as much as anything; but it hadn’t yet persuaded them that fighter-bombers could ever replace carpet bombing as an acceptable strategy or focus for air warfare. It would take some years for these two fallacies to be challenged, as I shall explore in the next post’s, hopefully final, chapter.