Flying Supersonic

Last time (OK, quite a while ago actually), I explained the basic principle (from the Newtonian end of things; we can explain it using pressure, but that’s more complicated) of how wings generate lift when travelling at subsonic speeds, arguably the most important principle of physics affecting our modern world. However, as the Second World War came to an end and aircraft started to get faster and faster, problems started to appear.

The first aircraft to approach the speed of sound (Mach 1, or around 700-odd miles an hour depending on air temperature) were WWII fighter aircraft; most only had top speeds of around 400-500mph or so whilst cruising, but could approach the magic number when going into a steep dive. When they did so, they found their aircraft began suffering from severe control issues and would shake violently; there are stories of Japanese Mitsubishi Zeroes that would plough into the ground at full speed, unable to pull out of a deathly transonic dive. Subsequent aerodynamic analyses of these aircraft suggest that if any of them had in fact broken the sound barrier, they would most likely have been shaken to pieces. For this reason, the concept of ‘the sound barrier’ developed.

The problem arises from the Doppler effect (which is also, incidentally, responsible for the stellar red-shift that tells us our universe is expanding), and the fact that as an aircraft moves it emits pressure waves, carried through the air by molecules bumping into one another. Since this is exactly the same method by which sound propagates in air, these pressure waves move at the speed of sound, and travel outwards from the aircraft in all directions. If the aircraft is travelling forwards, then each time it emits a pressure wave it will be a bit further forward than the centre of the pressure wave it emitted last, causing the waves in front of the aircraft to get closer together and the waves behind it to spread out. This is the Doppler effect.

Now, when the aircraft starts travelling very quickly, this effect becomes especially pronounced, wave fronts becoming compressed very close to one another. When the aircraft is at the speed of sound, the same speed at which the waves propagate, it catches up with the wave fronts themselves and all wave fronts are in the same place just in front of the aircraft. This causes them to build up on top of one another into a band of high-pressure air, which is experienced as a shockwave; the pressure drop behind this shockwave can cause water to condense out of the air and is responsible for pictures such as these.
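This pile-up is easy to see with a little arithmetic. Here is a toy sketch (in Python, with made-up illustrative numbers) of where the front of each successive pressure wave ends up:

```python
# Toy sketch of the wavefront pile-up: an aircraft emits a pressure pulse
# every dt seconds, and we ask where the leading edge of each expanding
# wave sits after T seconds. Numbers are illustrative, not real flight data.

C = 340.0  # speed of sound in m/s (roughly, at sea level)

def leading_edges(mach, n_pulses=5, dt=0.1, T=1.0):
    """Positions of the front of each pressure wave, directly ahead."""
    v = mach * C
    edges = []
    for i in range(n_pulses):
        t_emit = i * dt
        centre = v * t_emit        # where the aircraft was when it pulsed
        radius = C * (T - t_emit)  # how far that wave has spread since
        edges.append(centre + radius)
    return edges

for mach in (0.5, 0.9, 1.0):
    e = leading_edges(mach)
    print(f"Mach {mach}: spacing between wave fronts = {e[0] - e[1]:.1f} m")
# The spacing works out to (C - v) * dt: it shrinks as we speed up, and
# hits zero at Mach 1, when every wave front sits in the same place just
# ahead of the aircraft; that pile-up is the shockwave.
```

The gap between wave fronts ahead of the aircraft is (speed of sound minus airspeed) times the interval between pulses, which vanishes exactly at Mach 1.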

But the shockwave does not just occur at Mach 1; we must remember that the shape of an aerofoil is such as to cause air to travel faster over the top of the wing than it does elsewhere. This means parts of the airflow over the wing reach supersonic speeds before the rest of the aircraft does, causing shockwaves to form over the wings at a lower speed. The speed at which this first occurs is known as the critical Mach number. Since these shockwaves are regions of high pressure, Bernoulli’s principle tells us they cause air to slow down dramatically; this contributes heavily to aerodynamic drag, and is part of the reason why such shockwaves can cause major control issues. Importantly, we must note that shockwaves always cause air to slow down to subsonic speeds, since the shockwave is generated at the point of buildup of all the pressure waves and so acts as a barrier between the super- and sub-sonic portions of the airflow. However, there is another problem with this slowing of the airflow; it causes the air to have a higher pressure than the supersonic air in front of the shockwave. Since there is always a force from high pressure to low pressure, this can cause (at speeds sufficiently far above the critical Mach number) parts of the airflow close to the wing (the boundary layer, which also experiences surface friction from the wing) to change direction and start travelling forwards. This causes the boundary layer to recirculate, forming a turbulent portion of air that generates very little lift and quite a lot of drag, and the rest of the airflow to separate from the wing surface; an effect known as boundary layer separation (or Mach stall, since it causes similar problems to a regular stall), responsible for even more problems.

The practical upshot of all of this is that flying at transonic speeds (close to and around the speed of sound) is problematic and inefficient; but once we push past Mach 1 and start flying at supersonic speeds, things change somewhat. The shockwave over the wing moves to its trailing edge, as all of the air flowing over it is now travelling at supersonic speeds, and ceases to pose problems; but now we face the issues posed by a bow wave. At subsonic speeds, the pressure waves being emitted by the aircraft help to push air out of the way, and mean it is generally deflected around the wing rather than just hitting it and slowing down dramatically; but at supersonic speeds, we leave those pressure waves behind us and we don’t have this advantage. This means supersonic air hits the front of the wing and is slowed down or even stopped, creating a portion of subsonic air in front of the wing and (you guessed it) another shockwave between this and the supersonic air further ahead. This is known as a bow wave, and once again generates a ton of drag.

We can combat the formation of the bow wave by using a supersonic aerofoil; these are diamond-shaped, rather than the cambered subsonic aerofoils we are more used to, and generate lift in a different way (the ‘skipping stone’ theory is actually rather a good approximation here, except we use the force generated by the shockwaves above and below an angled wing to generate lift). The sharp leading edge of these wings prevents bow waves from forming, and such aerofoils are commonly used on missiles; but they are inefficient at subsonic speeds and make takeoff and landing nigh-on impossible.

The other way to get round the problem is somewhat neater; as this graphic shows, when we go past the speed of sound the shockwave created by the aeroplane is not flat any more, but forms an angled cone shape- the faster we go, the narrower the cone (the ‘Mach angle’ is given by the formula sin(a)=c/v, for those who are interested). Now, if we remember that shockwaves cause the air behind them to slow down to subsonic speeds, it follows that if our wings lie just behind the shockwave, the air passing over them at right angles to the shockwave will be travelling at subsonic speeds, and the wing can generate lift perfectly normally. This is why the wings on military and other high-speed aircraft (such as Concorde) are ‘swept back’ at an angle; it allows them to generate lift much more easily when travelling at high speeds. Some modern aircraft even have variable-sweep wings (or ‘swing wings’), which can be pointed out flat when flying subsonically (which is more efficient) before being tucked back into a swept position for supersonic flight.
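For the curious, that Mach angle formula is easy to play with; here is a small Python sketch (the Mach numbers chosen are purely illustrative):

```python
import math

# The Mach cone half-angle a satisfies sin(a) = c/v = 1/M, so it is only
# defined once M >= 1. The Mach numbers below are purely illustrative.

def mach_angle_deg(mach):
    return math.degrees(math.asin(1.0 / mach))

for m in (1.0, 1.4, 2.0, 3.3):
    print(f"Mach {m}: cone half-angle = {mach_angle_deg(m):.1f} degrees")
# Mach 1 gives 90 degrees (a flat wall of pressure just ahead); by Mach 2
# the cone has narrowed to 30 degrees, which is why faster aircraft need
# more sharply swept wings to stay inside it.
```

At Mach 2 the cone half-angle is already down to 30 degrees, so the wings must be swept well back to remain behind the shockwave.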

Aerodynamics is complicated.


There is an art, or rather, a knack, to flying…

The aerofoil is one of the greatest inventions mankind has come up with in the last 150 years; in the late 19th century, aristocratic Yorkshireman (as well as inventor, philanthropist, engineer and generally quite cool dude) George Cayley identified the way bird wings generated lift merely by moving through the air (rather than just by flapping), and set about trying to replicate this lift force. To this end, he built a ‘whirling arm’ to test wings and measure the upwards lift force they generated, and found that a cambered wing shape (as in modern aerofoils) similar to that of birds was more efficient at generating lift than one with flat surfaces. This was enough for him to engineer the first manned, sustained flight, sending his coachman across Brompton Dale in 1853 in a homemade glider (the coachman reportedly handed in his notice upon landing with the immortal line “I was hired to drive, not fly”), but he still didn’t really have a proper understanding of how his wing worked.

Nowadays, lift is understood better by both science and the general population; but many people who think they know how a wing works don’t quite understand the full principle. There are two incomplete/incorrect theories that people commonly believe in; the ‘skipping stone’ theory and the ‘equal transit time’ theory.

The ‘equal transit time’ theory is popular because it sounds very sciency and realistic; because a wing is a cambered shape, the tip-tail distance following the wing shape is longer over the top of the wing than it is when following the bottom surface. Therefore, air travelling over the top of the wing has to travel further than the air going underneath. Now, since the aircraft is travelling at a constant speed, all the air must surely be travelling past the aircraft at the same rate; so, regardless of what path the air takes, it must take the same time to travel the same lateral distance. Since speed=distance/time, and air going over the top of the wing has to cover a greater distance, it will be travelling faster than the air going underneath the wing. Bernoulli’s principle tells us that if air travels faster, the air pressure is lower; this means the air on top of the wing is at a lower pressure than the air underneath it, and this difference in pressure generates an upwards force. This force is lift.

The key flaw in this theory is the completely wrong assumption that the air over the top and bottom of the wing must take the same time to travel across it. If we analyse the airspeed at various points over a wing we find that air going over the top does, in fact, travel faster than air going underneath it (the reason for this comes from Euler’s fluid dynamics equations, a simplified form of the more general Navier-Stokes equations for fluid behaviour. Please don’t ask me to explain them). However, the two airflows do not coincide at the same point when we reach the trailing edge of the wing (the air over the top actually arrives earlier), so the theory doesn’t correctly calculate the amount of lift generated by the wing. This is compounded by the theory not explaining any of the lift generated from the bottom face of the wing, or why the angle the wing is set at (the angle of attack) affects the lift it generates, or how one is able to generate some lift from just a flat sheet set at an angle (or any other symmetrical wing profile), or how aircraft fly upside-down.
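We can put a rough number on that failure. The sketch below runs the equal-transit-time argument for a small light aircraft; every figure in it (airspeed, path-length ratio, wing area, mass) is a made-up but plausible assumption for illustration, not a measurement:

```python
# Back-of-envelope check on the 'equal transit time' theory, using made-up
# but plausible numbers for a small light aircraft; every figure here is an
# assumption for illustration, not a measurement.

RHO = 1.225        # air density at sea level, kg/m^3
v = 60.0           # airspeed, m/s (about 134 mph)
path_ratio = 1.02  # upper surface path ~2% longer than the lower one
area = 16.0        # wing area, m^2
mass = 1000.0      # aircraft mass, kg

# The theory says air on top moves faster in proportion to the longer path...
v_top = v * path_ratio
# ...and Bernoulli converts the speed difference into a pressure difference:
dp = 0.5 * RHO * (v_top**2 - v**2)
lift = dp * area

print(f"predicted lift: {lift:.0f} N, against a weight of {mass * 9.81:.0f} N")
# The prediction covers only a small fraction of the weight; the real speed
# difference over a wing is far larger than the path lengths alone suggest.
```

With these numbers the theory predicts well under a fifth of the lift needed to hold the aircraft up, which is the quantitative sense in which it fails.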

Then we have the (somewhat simpler) ‘skipping stone’ theory, which attempts to explain the lift generated from the bottom surface of the wing. Its basic postulate concerns the angle of attack; with an angled wing, the bottom face of the wing strikes some of the incoming air, causing air molecules to bounce off it. This is like the bottom of the wing being continually struck by lots of tiny ball bearings, sort of the same thing that happens when a skimming stone bounces off the surface of the water, and it generates a net force; lift. Not only that, but this theory claims to explain the lower pressure found on top of the wing; since air is blocked by the tilted wing, not so much gets to the area immediately above/behind it. This means there are fewer air molecules in a given space, giving rise to a lower pressure; another way of explaining the lift generated.

There isn’t much fundamentally wrong with this theory, but once again the mathematics don’t check out; it also does not accurately predict the amount of lift generated by a wing. It also fails to explain why a cambered wing set at a zero angle of attack is still able to generate lift; but actually it provides a surprisingly good model when we consider supersonic flight.

Lift can be explained as a combination of these two effects, but to do so is complex and unnecessary; we can find a far better explanation just by considering the shape the airflow makes when travelling over the wing. Air passing over an aerofoil tends to follow the shape of its surface (Euler again), meaning it deviates from its initially straight path to follow a curved trajectory. This curved motion means the direction of the airflow must be changing; and since velocity is a vector quantity, any change in the direction of the air’s movement represents a change in its overall velocity, regardless of any change in airspeed (which contributes separately). Any change in velocity corresponds to the air being accelerated, and since Force = mass x acceleration this acceleration requires a net force; by Newton’s third law, the air pushes back on the wing with an equal and opposite force, and this is what corresponds to lift. This ‘turning’ theory not only describes lift generation on both the top and bottom wing surfaces, since air is turned upon meeting both, but also why changing the angle of attack affects lift; a steeper angle means the air has to turn more when following the wing’s shape, meaning more lift is generated. Go too steep, however, and the airflow breaks away from the wing and undergoes a process called flow separation… but I’m getting ahead of myself.
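The F = ma argument above can be sketched numerically. Everything in the snippet below (the size of the affected air tube, the deflection angle, the airspeed) is an illustrative assumption rather than a real aerodynamic calculation, but it shows the trend:

```python
import math

# A back-of-envelope version of the turning argument: lift is the rate at
# which the wing gives the oncoming air downward momentum (F = ma applied
# to a stream of air). Every number here is an illustrative assumption.

RHO = 1.225     # air density, kg/m^3
v = 60.0        # airspeed, m/s
span = 10.0     # wingspan, m
turn_deg = 5.0  # how far the airflow is deflected downwards, degrees

# Rough rule of thumb: the wing influences a tube of air about one span wide.
tube_area = math.pi * (span / 2) ** 2      # m^2
mass_flow = RHO * v * tube_area            # kg of air processed per second
dw = v * math.sin(math.radians(turn_deg))  # downward speed imparted, m/s

lift = mass_flow * dw
print(f"lift of roughly {lift / 1000:.0f} kN")
# A steeper angle of attack means a bigger deflection and hence more lift,
# exactly the trend described above; push it too far and the flow separates.
```

Even these crude numbers land in the right ballpark for a light aircraft, which is part of why the turning picture is so much more useful than the two flawed theories.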

This explanation works fine so long as our aircraft is travelling at less than the speed of sound. However, as we approach Mach 1, strange things start to happen, as we shall find out next time…

Air Warfare Today

My last post summarised the ins and outs of the missile weaponry used by most modern air forces today, and the impact that this had on fighter technology with the development of the interceptor and fighter-bomber as separate classes. This technology was flashy and rose to prominence during the Korean war, but the powers-that-be still used large bomber aircraft during that conflict and were convinced that carpet bombing was the most effective strategy for a large-scale land campaign. And who knows; if WWIII ever ends up happening, maybe that sheer scale of destruction will once again be called for.

However, this tactic was not universally appreciated. As world warfare descended ever more into world politics and scheming, several countries began to adopt the fighter-bomber as their principal strike aircraft. A good example is Israel, a long-time ally of the US, who used American fighter-bombers to take out the air bases of their Soviet-backed Arab neighbours at the start of the 1967 Six-Day War, giving them air superiority in the region that proved very valuable in the years to come as the Middle East conflict escalated. These fighters were valuable to such countries, who could not afford the cost of a large-scale bombing campaign; faster, precision-guided destruction made far better fiscal sense and annoyed the neighbours less when they were parked on their doorstep (unless your government happened to be quite as gung-ho as Israel’s). Throughout the 1960s, this realisation of the value of fighter aircraft led to further developments in their design; ground-assault weapons, in the form of air-to-surface missiles and laser-guided bombs, began to be standard equipment on board fighter aircraft once their value as principal strike weapons was realised and demand for them to perform as such increased. Furthermore, as wars were fought and planes were brought down, it was also realised that dogfighting was not in fact a dead art when one’s opponents (ie the Soviet Union and her friends) also had good hardware, so manoeuvrability was once again reinstated as a design priority. Both of these advances were greatly aided by the rapid advancements in the electronics of the age, which quickly found their way into avionics; the electronic systems used by aircraft for navigation, monitoring, and (nowadays) help with flying the aircraft, among other things.

It was also at this time that aircraft designers began experimenting with the idea of VTOL: Vertical Take Off and Landing. This was an advantageous property for an aircraft to have since it limited the space needed for take off and landing, allowing it to operate in a wider range of environments where there wasn’t a convenient long stretch of bare tarmac. It was also particularly useful for aircraft carriers, which had been shown during WW2’s Battle of Midway to be incredibly useful military tools, since any space not used for runway could be used to carry more precious aircraft. Many approaches were tried, including some ‘tail-sitting’ aircraft that took off and landed resting on their tails, but the only one to achieve mainstream success was the British Harrier, with four rotatable engine nozzles that could be aimed downwards for vertical takeoff. These offered the Harrier another trick- it was the only aircraft with a reverse gear. A skilled pilot could, if being tailed by a hostile, swivel his nozzles forward so his engines were pushing him in the opposite direction to his direction of travel, causing him to rapidly slow down and his opponent to suddenly find himself with an enemy behind him eyeing up a shot. This isn’t especially relevant, I just think it’s really cool.

However, the event that was to fundamentally change late 20th century air warfare like no other was the Vietnam war; possibly the USA’s biggest ever military mistake. The war itself was chaotic on almost every level, with soldiers being accused of everything from torture to drug abuse, and by the mid 1960s it had already been going on, on and off, for over a decade. The American public was rapidly becoming disillusioned with the war in general as the hippy movement began to lift off, but in August 1964 the USS Maddox allegedly fired at a couple of torpedo boats that were following it through the Gulf of Tonkin. I say allegedly, because there is much speculation as to the identity of the vessels themselves; as then-president Lyndon B. Johnson said, “those sailors out there may have been shooting at flying fish”. In any case, the outcome was the important bit; when (now known to be false) reports came in two days later of a second attack in the area, Congress backed Johnson in the Gulf of Tonkin Resolution, which basically gave the President the power to do what he liked in South-East Asia without officially declaring war. This resulted in a heavy escalation of the war both on the ground and in the air, but possibly the most significant side-effect was ‘Operation Rolling Thunder’, which authorised a massive-scale bombing campaign to be launched against Communist North Vietnam. The Air Force Chief of Staff at the time, Curtis LeMay, had been calling for such a saturation bombing campaign for a while by then, and said “we’re going to bomb them back into the Stone Age”.

Operation Rolling Thunder ended up dropping, mainly via B-52 bombers, around a million tonnes of bombs across North Vietnam and the Ho Chi Minh trail (used to supply the militant NLF, aka Viet Cong, operating in South Vietnam) running through neighbouring Cambodia and Laos, in possibly the worst piece of foreign politics ever attempted by a US government- and that’s saying something. Not only did opinion of the war, both at home and abroad, take a large turn for the worse, but the bombing campaign itself was a failure; the Communist support for the NLF did not come from any physical infrastructure, but from an underground system that could not be targeted by a carpet bombing campaign. As such, NLF supply along the Ho Chi Minh trail continued throughout Rolling Thunder, and after three years the whole business was called off as a very expensive failure. The shortcomings of the purpose-built bomber as a concept had been highlighted in painful detail for all the world to see; but two other aircraft used in Vietnam showed the way forward. The F-111 had variable geometry wings, meaning it could change their shape depending on the speed the aircraft was going; this meant it performed well at a wide variety of airspeeds, both super- and sub-sonic (see my post regarding supersonic flight for the ins and outs of this). And whilst the F-111 had a troubled introduction to the conflict, the McDonnell F-4 Phantom shone; the Phantom claimed more kills than any other fighter aircraft during Vietnam, and was (almost entirely accidentally) the first multi-role aircraft, operating both as the all-weather interceptor it was designed to be and the strike bomber its long range and large payload capacity allowed it to be.

The key advantage of multi-role aircraft is financial; in an age where the massive wars of the 20th century are slowly fading into the past (ha, ha) and defence budgets are growing ever-slimmer, it makes much more sense to own two or three aircraft that can each do five things very well than 15 that can only do one each to a superlative degree of perfection. This also makes an air force more flexible and able to respond faster; if an aircraft is ready for anything, then it alone is sufficient to cover a whole host of potential situations. Modern day aircraft such as the Eurofighter Typhoon take this a stage further; rather than being able to be set up differently to perform multiple different roles, they try to have a single setup that can perform any role (or, at least, that any ‘specialised’ setup also allows for other scenarios and necessities should the need arise). Whilst the degree of unspecialisation of the hardware does leave multirole aircraft vulnerable to more specialised variations if the concept is taken too far, the advantages of multirole capabilities in a modern air force existing within the modern political landscape are both obvious and pressing. Pursuit and refinement of this capability has been the key challenge facing aircraft designers over the last 20 to 30 years, but there have been two new technologies that have made their way into the field. The first of these is built-in aerodynamic instability (or ‘relaxed stability’), which has been made possible by the invention of ‘fly-by-wire’ controls, by which the joystick controls electronic systems that then tell the various components to move, rather than being mechanically connected to them. Relaxed stability basically means that, left to its own devices, an aircraft will oscillate from side to side or even crash by uncontrollable sideslipping rather than maintain level flight, but it makes the aircraft more responsive and manoeuvrable.
To ensure that the aircraft concerned do not crash all the time, computer systems generally monitor the pitch and yaw of the aircraft and make the tiny corrections necessary to keep the aircraft flying straight. It is an oft-quoted fact that if the 70 computer systems on a Eurofighter Typhoon that do this were to crash, the aircraft would quite literally fall out of the sky.
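The corrections those computers make can be illustrated with a toy feedback loop. The growth rate and gain below are invented purely for illustration; real flight control laws are vastly more sophisticated:

```python
# Toy model of relaxed stability: a deviation (say, sideslip) that grows by
# 10% per tick on its own, with a flight computer nudging it back each tick.
# The growth rate and gain are invented purely for illustration.

def simulate(steps=50, growth=1.1, gain=0.0, x0=1.0):
    """Return the size of the deviation after `steps` control ticks."""
    x = x0
    for _ in range(steps):
        correction = gain * x  # the computer pushes back against the drift
        x = growth * x - correction
    return abs(x)

print(f"no computer:   deviation = {simulate(gain=0.0):.1f}")
print(f"with computer: deviation = {simulate(gain=0.3):.5f}")
# Left alone the deviation blows up; with feedback the same unstable airframe
# settles, which is why losing the control computers is catastrophic.
```

The same airframe is divergent without the feedback term and perfectly stable with it, which is the whole trade relaxed stability makes.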

The other innovation to hit the airframe market in recent years has been the concept of stealth, taking one of two forms. Firstly we consider the general design of modern fighters, carefully shaped to minimise their radar cross-section and make them less visible to enemy radar; they also tend to shroud their engine exhausts so they are harder to spot from a distance. Then, we consider specialist designs such as the famous American Lockheed F-117 Nighthawk, whose strange faceted design, covered in angled black panels, is intended to scatter and absorb radar and make it ‘invisible’, especially at night. This design was, incidentally, one of the first to be so aerodynamically unstable as to be unflyable by hand, and required a fly-by-wire control system that was revolutionary for its time.

Perhaps the best example of how far air warfare has come over the last century is to be found in the first Gulf War, during 1991. At night, Nighthawk stealth bombers would cross into Hussein-held territory to drop their bombs, invisible to Hussein’s radar and anti-aircraft systems, but unlike wars of old they didn’t just drop and hope at their targets. Instead, they were able to target bunkers and other such fortified military installations with just one bomb; a bomb that they could aim at and drop straight down a ventilation shaft. Whilst flying at 600 miles an hour.

Fire and Forget

By the end of my last post, we’d got as far as the 1950s in terms of the development of air warfare, an interesting period of transition, particularly for fighter technology. With the development of the jet engine and supersonic flight, the potential of these faster, lighter aircraft was beginning to outstrip that of the slow, lumbering bombers they ostensibly served. Lessons were quickly learned during the chaos of the Korean war, the first major conflict of the second half of the twentieth century, during which American & Allied forces fought a back-and-forth swinging conflict against the North Koreans and Chinese. Air power proved a key feature of the conflict; the new American jet fighters took apart the North Korean air force, consisting mainly of old propeller-driven aircraft, as they swept north past the 38th parallel and toward the Chinese border, but when China joined in they brought with them a fleet of Soviet MiG-15 jet fighters, and suddenly the US and her allies were on the retreat. The American-led UN campaign did embark on a bombing campaign using B-29 bombers, utterly annihilating vast swathes of North Korea and persuading the high command that carpet bombing was still a legitimate strategy, but it was the fast aerial fighter combat that really stole the show.

One of the key innovations that won the Allies the Battle of Britain during WWII proved particularly valuable in the realm of air warfare during the Korean war; radar. British radar technology during the war was designed to utilise massive-scale machinery to detect the approximate positions of incoming German raids, but post-war developments had refined it to use far smaller bits of equipment to identify objects more precisely and over a smaller range. This was then combined with rapidly advancing electronics technology and the deadly, but so far difficult to use accurately, rocketry developed during the two world wars to create a new weapon; the guided missile, based on the technology used in the German V2. The air-to-air missile (AAM) subsequently proved both more accurate & destructive than the machine guns previously used for air combat, whilst air-to-surface missiles (ASMs) began to offer fighters the ability to take out ground targets in the same way as bombers, but with far superior speed and efficiency; with the development of the guided missile, fighters began to gain a capability in firepower to match their capability in airspeed and agility.

The earliest missiles were ‘beam riders’, using radar equipment attached to either an aircraft or (more typically) a ground-based platform to aim at a target, and then simply allowing a small bit of electronics, a rocket motor and some fins on the missile to follow the radar beam. These were somewhat tricky to use, especially as quite a lot of early radar sets had to be aimed manually rather than ‘locking on’ to a target, and the beam tended to fade when used over long range, so as technology improved post-Korea these beam riders were largely abandoned; but during the Korean war itself, these weapons proved deadly, accurate alternatives to machine guns, capable of attacking from great range and many angles. Most importantly, the technology showed great potential for improvement; as more sensitive radiation-detecting equipment was developed, IR-seeking missiles (aka heat seekers) appeared. Early versions could only detect something as hot as the exhaust gases from a jet engine, requiring them to be fired from behind their target (tricky in a dogfight), but once they became sensitive enough to pick up cooler sources these proved tricky customers to deal with. Later developments of the ‘beam riding’ system detected radiation being reflected from the target and tracked it with their own inbuilt radar receiver, which did away with the decreasing accuracy of an expanding beam in a system known as semi-active radar homing; and another modern guidance technique, used to target radar installations or communications hubs, is to simply follow the trail of radiation they emit and explode upon hitting something. Most modern missiles, however, use fully active radar homing (ARH), whereby the missile carries its own radar system capable of sending out a beam to find a target, identifying and locking onto its ever-changing position, steering itself to follow the reflected radiation and doing the final, destructive deed entirely of its own accord.
The greatest advantage to this is what is known as the ‘fire and forget’ capability, whereby one can fire the missile and start doing something else whilst safe in the knowledge that somebody will be exploding in the near future, with no input required from the aircraft.

As missile technology has advanced, so too have the techniques for fighting back against it; dropping reflective material (chaff) behind an aircraft can confuse some basic radar systems, whilst dropping flares can distract heat seekers. As an ‘if all else fails’ procedure, heavy material can be dropped behind the aircraft for the missile to hit and blow up. However, only one aircraft has ever managed a totally failsafe method of avoiding missiles; the previously mentioned Lockheed SR-71A Blackbird, the fastest jet aircraft ever, had as its standard missile avoidance procedure to speed up and simply outrun the things. You may have noticed that I think this plane is insanely cool.

But now to drag us back to the correct time period. With the advancement of military technology and shrinking military budgets, it was realised that one highly capable jet fighter could do the work of many of more basic design, and many foresaw the day when all fighter combat would concern beyond-visual-range (BVR) missile warfare. To this end, the interceptor began to evolve as a fighter concept; very fast aircraft (such as the ‘two engines and a seat’ design of the British Lightning) with a high ceiling, large missile inventories and powerful radars, they aimed to intercept (hence the name) long-range bombers travelling at high altitudes. To ensure the lower skies were not left empty, the fighter-bomber also began to develop as a design; this aimed to use the natural speed of fighter aircraft to make hit-and-run attacks on ground targets, whilst keeping a smaller arsenal of missiles to engage other fighters and any interceptors that decided to come after them. Korea had made the top brass decide that dogfights were rapidly becoming a thing of the past, and that future air combat would become a war of sneaky delivery of missiles as much as anything; but it hadn’t yet persuaded them that fighter-bombers could ever replace carpet bombing as an acceptable strategy or focus for air warfare. It would take some years for these two fallacies to be challenged, as I shall explore in my next, hopefully final, post on the subject.

The Development of Air Power

By the end of the Second World War, the air was the key battleground of modern warfare; with control of the air, one could move small detachments of troops to deep behind enemy lines, gather valuable reconnaissance and, of course, bomb one’s enemies into submission/total annihilation. But the air was also the newest theatre of war, meaning that there was enormous potential for improvement in this field. With the destructive capabilities of air power, it quickly became obvious that whoever was able to best enhance their flight strength would have the upper hand in the wars of the latter half of the twentieth century, and as the Cold War began hotting up (no pun intended) engineers across the world began turning their hands to problems of air warfare.

Take, for example, the question of speed; fighter pilots had long known that the faster plane in a dogfight had a significant advantage over his opponent, since he was able to manoeuvre quickly, chase his opponents if they ran for home and escape combat more easily. It also helped him cover more ground when chasing after slower, more sluggish bombers. However, the technology of the time favoured internal combustion engines powering propeller-driven aircraft, which limited both the range and speed of aircraft at the time. Weirdly, however, the solution to this particular problem had been invented 15 years earlier, after a young RAF pilot called Frank Whittle patented his design for a jet engine. When he submitted this idea to the RAF, however, they referred him to the engineer A. A. Griffith, whose study of turbines and compressors had informed Whittle’s design. The reason Griffith hadn’t invented the jet engine himself was his fixed belief that jet engines would be too inefficient to act as practical engines on their own, being better suited to powering propellers. He turned down Whittle’s engine design, which used the forward thrust of the engine itself, rather than a propeller, for power, as impractical, and so the Air Ministry didn’t fund research into the concept. Some now think that, had the jet engine been taken seriously by the British, the Second World War might have been over by 1940; as it was, Whittle spent the next ten years trying to finance his research and development privately, whilst fitting it around his RAF commitments. In the end it was a team of Germans who got the first jet-powered aircraft off the ground, in 1939, Whittle’s patent having been allowed to lapse a few years earlier when he could not afford the renewal fee.

Still, the German jet fighter was not exactly a practical beast (its engines needed to be disassembled after every use), and by then the war was almost lost anyway. Once the Allies really got into their jet aircraft development after the war, they looked set to start reaching the kind of fantastic speeds that would surely herald the new age of air power. But there was a problem: the sound barrier. During the war, a number of planes had tried to break the magical speed limit of around 768 mph, aka the speed of sound (or Mach 1, as it is known today), but none had succeeded. Partly this was due to the sheer engine power required (propellers get very inefficient as they approach the speed of sound, and propeller tips can actually exceed it as they spin), but the main reason for failure lay in the plane breaking up; in particular, there was a recurring problem of the wings tearing themselves off as the aircraft approached the required speed. It was subsequently realised that as a plane approached the sound barrier, it began to catch up with the pressure waves travelling in front of it; when it got too close, the air being pushed ahead of the aircraft began to interact with those waves, causing shockwaves and extreme turbulence. Such a shockwave is what generates the sound of a sonic boom, and also the crack of a whip. Some propeller-driven WWII fighters were able to achieve ‘transonic’ (very-close-to-Mach-1) speeds in dives, but these shockwaves generally rendered the plane uncontrollable and they invariably crashed; the effect was known as ‘transonic buffeting’. A few pilots during the war claimed to have broken the sound barrier in dives and lived to tell the tale, but these claims are highly disputed.
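Incidentally, the “768 mph” figure is only nominal, because the speed of sound depends on air temperature: for an ideal gas it is a = √(γRT). A minimal sketch of this relationship (my own illustration; the function name and temperature values are assumptions, not from the original post):

```python
import math

def speed_of_sound(temp_celsius):
    """Speed of sound in dry air (m/s), ideal-gas model: a = sqrt(gamma * R * T)."""
    gamma = 1.4     # ratio of specific heats for air
    R = 287.05      # specific gas constant for dry air, J/(kg*K)
    return math.sqrt(gamma * R * (temp_celsius + 273.15))

print(round(speed_of_sound(20.0) * 2.23694))  # ~768 mph near the ground at 20 C
print(round(speed_of_sound(-56.5)))           # ~295 m/s at typical cruise altitude (~11 km)
```

This is why “Mach 1” is a moving target: in the cold air at altitude the local speed of sound is markedly lower than at sea level, which is one reason wartime dives flirted with the barrier at indicated speeds well below 768 mph.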
During the late 40s and early 50s, careful analysis of transonic buffeting and similar effects yielded valuable information about the aerodynamics of breaking the sound barrier. One of the most significant, and most oft-quoted, developments concerned the shape of the wings; whilst it was discovered that the frontal shape and thickness of the wings could seriously impede supersonic flight, it was also realised that in supersonic flight the shockwave generated was cone-shaped. Not only that, but behind the shockwave air flowed at subsonic speeds and a wing behaved as normal; the solution, therefore, was to ‘sweep back’ the wings into a triangle shape, so that they always lay ‘inside’ the cone-shaped shockwave. If they didn’t, any part of the wing travelling through supersonic air would be constantly battered by shockwaves, which would massively increase drag and potentially take the wings off the plane. In reality, it’s quite impractical to have the entire wing lying in the subsonic region (not least because a heavily swept-back wing tends to behave badly and generate little lift in subsonic flight), but the sweep of a wing is still a crucial design factor, depending on what speeds you want the aircraft to travel at. In the Lockheed SR-71A Blackbird, the fastest air-breathing manned aircraft ever made (it could hit Mach 3.3), the problem was partially solved by locating the wings right at the back of the aircraft, keeping them within the shockwave cone. Most modern jet fighters can hit Mach 2.
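The cone-shaped shockwave described above has a simple geometry: its half-angle μ satisfies sin μ = 1/M, so the faster the aircraft, the narrower the cone, and the further back the wings must be swept to stay inside it. A toy calculation (my own sketch, assuming the standard Mach-angle formula; the function is an illustration, not from the post):

```python
import math

def mach_angle_deg(mach):
    """Half-angle of the Mach cone in degrees: sin(mu) = 1/M. Only defined above Mach 1."""
    if mach <= 1.0:
        raise ValueError("a Mach cone only forms in supersonic flight")
    return math.degrees(math.asin(1.0 / mach))

# Mach 2 is a typical modern fighter; Mach 3.3 is the SR-71 figure quoted above
for m in (1.5, 2.0, 3.3):
    mu = mach_angle_deg(m)
    # a wing swept back by at least (90 - mu) degrees from the flight direction lies inside the cone
    print(f"Mach {m}: cone half-angle {mu:.1f} deg, minimum sweep {90 - mu:.1f} deg")
```

At Mach 2 the cone half-angle is 30°, so roughly 60° of sweep keeps a leading edge inside it; at the SR-71’s Mach 3.3 the cone narrows to under 18°, which is why such aircraft push their lifting surfaces so far aft.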

At first, aircraft designed to break the sound barrier were rocket-powered; the USA’s resident speed merchant Chuck Yeager was the first man to officially and verifiably top 768 mph, in the record-breaking rocket plane Bell X-1, although a fellow test pilot is thought by some to have beaten him to the achievement by 30 minutes flying an XP-86 Sabre. Before long, supersonic technology was making itself felt in the more conventional spheres of warfare; second-generation jet fighters were, with the help of high-powered jet engines, the first to engage in supersonic combat during the 50s, and as both aircraft and weapons technology advanced, the traditional roles of fighter and bomber started to come into question. The result of that little upheaval will be explored next time…

The Pursuit of Speed

Recent human history has, as Jeremy Clarkson constantly loves to point out, been dominated by the pursuit of speed. Everywhere we look, we see people hurrying hither and thither, sprinting down escalators, transmitting data at close to lightspeed via their phones and computers, and screaming down the motorway at over a hundred kilometres an hour (or nearly 100 mph, if you’re the kind of person who habitually uses the fast lane of British motorways). Never is this more apparent than in our pursuit of a new maximum top speed, a figure that has, over the centuries, climbed ever higher. Even in today’s world, where we prize speed of information over speed of movement, this quest goes on, as evidenced by the team behind the ‘Bloodhound’ SSC, tipped to break the world land speed record. So, I thought I might take this opportunity to consider the history of our quest for speed, and see how it has developed over time.

(I will ignore all unmanned exploits for now, just so I don’t get tangled up in arguments over whether a satellite should count, or something out of the Large Hadron Collider.)

Way back when we humans first evolved into the upright, bipedal creatures we are now, we were a fairly primitive race and our top speed was limited by how fast we could run. Usain Bolt can, with the aid of modern shoes, running tracks and a hundred thousand people screaming his name, max out at just over 12 metres per second. We will therefore presume that a fast human in prehistoric times, running on bare feet and hard ground with the motivation of being chased by a lion, might hit 11 m/s, or just under 40 kilometres per hour. Thus our top speed remained for many thousands of years until, around 6,000 years ago, humankind learned to domesticate animals, and more specifically horses, on the Eurasian Steppe. This sent our maximum speed soaring to 70 km/h or more, a speed that was for the first time sustainable over long distances, especially on the steppe, where horses were rarely asked to tow or carry much. Thus things remained for another goodly length of time; indeed, many leading doctors were of the opinion that travelling any faster would be impossible without asphyxiating. Come the industrial revolution, however, things started to change, and records began tumbling. The train was invented in the early 1800s and quickly transformed from a slow, lumbering beast into a fast, sleek machine capable of hitherto unimaginable speed. In 1848, the Iron Horse took the land speed record from its flesh-and-blood cousin, when a train in Boston finally broke the magical 60 mph (i.e. a mile a minute) barrier, sending the record up to 96.6 km/h. Records continued to tumble for the next half-century, with trains breaking the 100 mph barrier by 1904, but by then there was a new challenger in the paddock: the car. Whilst early wheel-driven speed records had barely crept over 35 mph, after the turn of the century cars really started to pick up the pace.
By 1906, they too had broken the 100 mph mark, hitting 205 km/h in a steam-powered vehicle that laid the locomotives’ claims to speed dominance firmly to rest. However, this was destined to be the car’s only outright speed record, and the last to be set on the ground; by 1924 cars had reached 234 km/h, a record that stands to this day as the fastest ever recorded on a public road, but the First World War had by this time been and gone, bringing with it huge advances in aircraft technology. In 1920, the record was officially broken in the first post-war attempt, a French pilot clocking 275 km/h, and after that there was no stopping it. Records were broken left, right and centre throughout both the Roaring Twenties and the Great Depression, right up until the outbreak of another war in 1939. As during WWI, all records ceased to be officiated for the war’s duration, but, just as the First World War allowed the plane to take over from the car as top dog in terms of pure speed, so the Second marked the passing of the propeller-driven plane and the coming of the jet and rocket engine. Jet aircraft broke man’s top speed record just five times after the war, holding the crown for a total of less than two years, before giving it up for good and letting rockets lead the way.

The passage of records for rocket-propelled craft is hard to track, but in 1947 Chuck Yeager became the first man ever to break the sound barrier in controlled, level flight (plunging screaming to one’s death in a fireball apparently doesn’t count for record purposes), thanks not only to his Bell X-1’s rocket engine but also to the realisation that breaking the sound barrier would not tear the wings off so long as they were slanted back at an angle (hence why all jet fighters adopt this design today). By 1953, Yeager was at it again, reaching Mach 2.44 (2,608 km/h) in the X-1’s cousin, the X-1A. The flight, however, nearly killed him: when he tilted the craft to lose height and prepare to land, a hitherto undiscovered phenomenon known as ‘inertia coupling’ sent it spinning wildly out of control, putting Yeager through 8 g of force before he was able to regain control. The X-1’s successor, the X-2, was even more dangerous; one craft exploded and killed its pilot in 1953, and although the programme went on to push the record first to 3,050 km/h and then to Mach 3.2 (3,370 km/h), that record-breaking flight ended in tragedy when a banking turn at over Mach 3 sent the craft into another inertia-coupling spin that resulted, after an emergency ejection that either crippled or killed him, in the death of pilot Milburn G. Apt. All high-speed research aircraft programmes were suspended for another three years, until experiments began with the Bell X-15, the latest and most experimental of these craft. It broke the record five times between 1961 and 1967, routinely flying above 6,000 km/h, before another fatal crash, this time claiming the life of pilot Major Michael J. Adams in a hypersonic spin, put paid to the programme again; the X-15’s all-time record of 7,273 km/h remains the fastest for a manned aircraft. But it still doesn’t take the overall title, because during the late 60s the US had another thing on its mind: space.

Astonishingly, manned spacecraft have broken humanity’s top speed record only once, when the Apollo 10 crew achieved the fastest speed ever attained by human beings relative to Earth. Their May 1969 flight did totally smash the record, reaching 39,896 km/h on their return to Earth, but all subsequent space flights, mainly due to having larger modules with greater air resistance, have yet to top this speed. Whether we ever will, especially given today’s focus on unmanned probes and the like, is unknown. But now, people, some brutal abuse of physics is your friend. Plot all of these records on a graph and add a trendline (OK, you might have to get rid of the horse/running ones and fiddle with some numbers), and you have a simple equation for the speed record against time. This can tell us a number of things, but one is of particular interest: that, statistically, we will have a human travelling at the speed of light by around 2177. Star Trek fans, get started on that warp drive…
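For the curious, the trendline trick can be reproduced in a few lines: fit a straight line to log(speed) against year and extrapolate to the speed of light. The records below are the ones quoted in this post; the exact year you get depends heavily on which points you keep and how much you “fiddle”, so treat the result as a joke with error bars rather than a prophecy:

```python
import math

# (year, record speed in km/h), roughly as quoted in this post
records = [
    (1848, 96.6),     # train, Boston
    (1906, 205.0),    # steam car
    (1920, 275.0),    # aircraft, post-WWI
    (1947, 1127.0),   # Bell X-1, ~Mach 1
    (1953, 2608.0),   # X-1A
    (1956, 3370.0),   # X-2
    (1967, 7273.0),   # X-15
    (1969, 39896.0),  # Apollo 10
]

# least-squares fit of log10(speed) = slope * year + intercept
xs = [year for year, _ in records]
ys = [math.log10(speed) for _, speed in records]
n = len(records)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# extrapolate to the speed of light (~1.079e9 km/h)
year_of_light_speed = (math.log10(1.079e9) - intercept) / slope
print(round(year_of_light_speed))  # this particular selection lands a couple of centuries out
```

With this exact data the fit predicts a date somewhat later than 2177; dropping or adding a record or two shifts the answer by decades, which is rather the point of the joke.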