Air Warfare Today

My last post summarised the ins and outs of the missile weaponry used by most modern air forces today, and the impact that this had on fighter technology with the development of the interceptor and fighter-bomber as separate classes. This technology was flashy and rose to prominence during the Korean war, but the powers-that-be still used large bomber aircraft during that conflict and were convinced that carpet bombing was the most effective strategy for a large-scale land campaign. And who knows; if WWIII ever ends up happening, maybe that sheer scale of destruction will once again be called for.

However, this tactic was not universally appreciated. As world warfare descended ever more into world politics and scheming, several countries began to adopt the fighter-bomber as their principal strike aircraft. A good example is Israel, long-time ally of the US, whose fighter-bombers opened the 1967 Six-Day War with a pre-emptive strike that destroyed the air bases of their Soviet-backed Arab neighbours, giving them an air superiority in the region that proved very valuable in the years to come as the wider Middle East conflict escalated. Fighters were valuable to such countries, who could not afford the cost of a large-scale bombing campaign; faster, precision-guided destruction made far better fiscal sense, and annoyed the neighbours less when they were parked on their doorstep (unless your government happened to be quite as gung-ho as Israel’s). Throughout the 1960s, this realisation of the value of fighter aircraft led to further developments in their design; ground-assault weapons, in the form of air-to-surface missiles and laser-guided bombs, became standard equipment on board fighter aircraft once their value as principal strike weapons was recognised and demand for them to perform as such increased. Furthermore, as wars were fought and planes were brought down, it was also realised that dogfighting was not in fact a dead art when one’s opponents (i.e. the Soviet Union and her friends) also had good hardware, so manoeuvrability was once again reinstated as a design priority. Both of these advances were greatly aided by the rapid progress in the electronics of the age, which quickly found its way into avionics: the electronic systems used by aircraft for navigation, monitoring and (nowadays) helping to fly the aircraft, among other things.

It was also at this time that aircraft designers began experimenting with the idea of VTOL: Vertical Take Off and Landing. This was an advantageous property for an aircraft to have, since it limited the space needed for take off and landing and allowed it to operate in a far wider range of environments where there wasn’t a convenient long stretch of bare tarmac. It was also particularly useful for aircraft carriers, which had been shown during WW2’s Battle of Midway to be incredibly useful military tools, since any space not used for runway could be used to carry more precious aircraft. Many approaches were tried, including some ‘tail-sitter’ aircraft designed to take off and land standing upright on their tails, but the only one to achieve mainstream success was the British Harrier, whose four rotatable engine nozzles could be aimed downwards for vertical takeoff. These nozzles offered the Harrier another trick- it was, in effect, the only aircraft with a reverse gear. A skilled pilot who found himself being tailed by a hostile could rotate his nozzles forward in flight, so his engine thrust opposed his direction of travel; he would slow down so rapidly that his opponent would overshoot and suddenly find an enemy behind him, eyeing up a shot. This isn’t especially relevant, I just think it’s really cool.

However, the event that was to fundamentally change late 20th century air warfare like no other was the Vietnam war; possibly the USA’s biggest ever military mistake. The war itself was chaotic on almost every level, with soldiers being accused of everything from torture to drug abuse, and by the mid-1960s it had already been going on, on and off, for over a decade. The American public was rapidly becoming disillusioned with the war in general as the hippy movement began to lift off, but in August 1964 the USS Maddox exchanged fire with North Vietnamese torpedo boats that were shadowing it through the Gulf of Tonkin. Two days later, reports (now known to be false) came in of a second attack in the area; the evidence for it was so thin that, as then-president Lyndon B. Johnson later put it, “those sailors out there may have been shooting at flying fish”. In any case, the outcome was the important bit: Congress backed Johnson in the Gulf of Tonkin Resolution, which basically gave the President the power to do what he liked in South-East Asia without a formal declaration of war. This resulted in a heavy escalation of the war both on the ground and in the air, but possibly the most significant side-effect was ‘Operation Rolling Thunder’, a massive-scale bombing campaign launched against Communist North Vietnam. The Air Force Chief of Staff at the time, Curtis LeMay, had been calling for such a saturation bombing campaign for a while by then, famously promising “we’re going to bomb them back into the Stone Age”.

Operation Rolling Thunder and the campaigns that followed it ended up dropping, mainly via B-52 bombers, around a million tonnes of bombs across North Vietnam and the Ho Chi Minh trail (used to supply the militant NLF, aka Viet Cong, operating in South Vietnam) as it snaked through neighbouring Cambodia and Laos, in possibly the worst piece of foreign politics ever attempted by a US government- and that’s saying something. Not only did opinion of the war, both at home and abroad, take a large turn for the worse, but the bombing campaign itself was a failure; Communist support for the NLF did not depend on any physical infrastructure, but on an underground supply system that could not be targeted by a carpet bombing campaign. As such, NLF supplies flowed along the Ho Chi Minh trail throughout Rolling Thunder, and after three years the whole business was called off as a very expensive failure. The shortcomings of the purpose-built bomber as a concept had been highlighted in painful detail for all the world to see; but two other aircraft used in Vietnam showed the way forward. The F-111 had variable geometry wings, meaning they could change their shape depending on the speed the aircraft was going; this meant it performed well at a wide variety of airspeeds, both super- and sub-sonic (see my post regarding supersonic flight for the ins and outs of this). Whilst the F-111’s troubled combat debut meant it never quite lived up to that promise, the McDonnell F-4 Phantom certainly did; the Phantom claimed more kills than any other fighter aircraft during Vietnam, and was (almost entirely accidentally) the first multi-role aircraft, operating both as the all-weather interceptor it was designed to be and the strike bomber its long range and large payload capacity allowed it to be.

The key advantage of multi-role aircraft is financial; in an age where the massive wars of the 20th century are slowly fading into the past (ha, ha) and defence budgets are growing ever slimmer, it makes much more sense to own two or three aircraft that can each do five things very well than 15 that can each do only one to a superlative degree of perfection. This also makes an air force more flexible and able to respond faster; if an aircraft is ready for anything, then it alone is sufficient to cover a whole host of potential situations. Modern-day aircraft such as the Eurofighter Typhoon take this a stage further; rather than being set up differently to perform multiple different roles, they aim for a single setup that can perform any role (or, at least, ensure that any ‘specialised’ setup also allows for other scenarios and necessities should the need arise). Whilst this degree of unspecialisation does leave multirole aircraft vulnerable to more specialised rivals if the concept is taken too far, the advantages of multirole capability for a modern air force existing within the modern political landscape are both obvious and pressing. Pursuit and refinement of this capability has been the key challenge facing aircraft designers over the last 20 to 30 years, but there have also been two new technologies that have made their way into the field. The first of these is built-in aerodynamic instability (or ‘relaxed stability’), made possible by the invention of ‘fly-by-wire’ controls, in which the joystick commands electronic systems that then tell the various components to move, rather than being mechanically connected to them. Relaxed stability basically means that, left to its own devices, the aircraft will oscillate from side to side or even crash through uncontrollable sideslipping rather than maintain level flight, but it also makes the aircraft more responsive and manoeuvrable.
To ensure that the aircraft concerned do not crash all the time, computer systems generally monitor the pitch and yaw of the aircraft and make the tiny corrections necessary to keep the aircraft flying straight. It is an oft-quoted fact that if the 70 computer systems on a Eurofighter Typhoon that do this were to crash, the aircraft would quite literally fall out of the sky.
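To make the idea concrete, here’s a minimal (and entirely hypothetical) Python sketch of the principle: an airframe whose pitch disturbance grows on its own, kept level by a simple proportional feedback loop of the kind a fly-by-wire computer runs many times a second. Every constant here is invented for illustration; this is not real flight dynamics.

```python
# Toy model of relaxed stability: left alone, a small pitch disturbance
# grows exponentially; with proportional feedback (the 'fly-by-wire'
# correction), it decays back towards level flight.

def simulate_pitch(gain, steps=200, dt=0.01):
    pitch = 0.05  # small initial disturbance, in radians
    for _ in range(steps):
        instability = 2.0 * pitch   # unstable airframe: the error feeds itself
        correction = -gain * pitch  # control surfaces deflect to oppose it
        pitch += (instability + correction) * dt
    return pitch

uncontrolled = simulate_pitch(gain=0.0)   # disturbance grows unchecked
controlled = simulate_pitch(gain=10.0)    # feedback damps it out
```

With the feedback loop switched off, the disturbance grows roughly fifty-fold over the two simulated seconds; with it on, it shrinks to practically nothing, which is why losing those computers means losing the aircraft.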

The other innovation to hit the airframe market in recent years has been the concept of stealth, which takes one of two forms. First, there is the general design of modern fighters, carefully shaped to minimise their radar cross-section and make them less visible to enemy radar; they also tend to shroud their engine exhausts, making their hot exhaust gases harder for infrared sensors to pick out at a distance. Then there are specialist designs such as the famous American Lockheed Nighthawk, whose strange faceted shape, covered in angled black panels of radar-absorbent material, is designed to scatter and absorb radar and make the aircraft near ‘invisible’, especially at night. This design was, incidentally, one of the first to be inherently unstable in flight, and required a fly-by-wire control system that was revolutionary for its time.

Perhaps the best example of how far air warfare has come over the last century is to be found in the first Gulf War, in 1991. At night, Nighthawk stealth bombers would cross into Iraqi-held territory to drop their bombs, invisible to Saddam Hussein’s radar and anti-aircraft systems; but unlike the bombers of old, they didn’t just drop and hope. Instead, they were able to take out bunkers and other fortified military installations with just one bomb- a laser-guided bomb that could be aimed straight down a ventilation shaft. Whilst flying at 600 miles an hour.

Fire and Forget

By the end of my last post, we’d got as far as the 1950s in terms of the development of air warfare, an interesting period of transition, particularly for fighter technology. With the development of the jet engine and supersonic flight, the potential of these faster, lighter aircraft was beginning to outstrip that of the slow, lumbering bombers they ostensibly served. Lessons were quickly learned during the chaos of the Korean war, the first major conflict of the second half of the twentieth century, during which American & Allied forces fought a back-and-forth swinging conflict against the North Koreans and Chinese. Air power proved a key feature of the conflict; the new American jet fighters took apart the North Korean air force, consisting mainly of old propeller-driven aircraft, as UN forces swept north past the 38th parallel and toward the Chinese border, but when China joined in they brought with them a fleet of Soviet MiG-15 jet fighters, and suddenly the US and her allies were on the retreat. The American-led UN campaign did embark on a bombing campaign using B-29 bombers, utterly annihilating vast swathes of North Korea and persuading the high command that carpet bombing was still a legitimate strategy, but it was the fast aerial fighter combat that really stole the show.

One of the key innovations that won the Allies the Battle of Britain during WWII proved, during the Korean war, to be particularly valuable in the realm of air warfare: radar. British radar during the war relied on massive-scale machinery to detect the approximate positions of incoming German raids, but post-war developments had refined it to use far smaller equipment to identify objects more precisely, over a smaller range. This was then combined with rapidly advancing electronics and the deadly, but so far difficult to aim, rocketry developed during the two world wars (most famously for the German V2) to create a new weapon: the guided missile. The air-to-air missile (AAM) subsequently proved both more accurate & destructive than the machine guns previously used for air combat, whilst air-to-surface missiles (ASMs) began to offer fighters the ability to take out ground targets in the same way as bombers, but with far superior speed and efficiency; with the development of the guided missile, fighters began to gain a capability in firepower to match their capability in airspeed and agility.

The earliest missiles were ‘beam riders’, using radar equipment attached to an aircraft or (more typically) a ground-based platform to aim at a target, and then simply allowing a small bit of electronics, a rocket motor and some fins on the missile to follow the radar beam. These were somewhat tricky to use, especially as quite a lot of early radar sets had to be aimed manually rather than ‘locking on’ to a target, and the beam tended to spread and fade over long range, so as technology improved post-Korea these beam riders were largely abandoned; but during the Korean war itself, such weapons proved deadly, accurate alternatives to machine guns, capable of attacking from great range and from many angles. Most importantly, the technology showed great potential for improvement. As more sensitive radiation-detecting equipment was developed, IR-seeking missiles (aka heat seekers) appeared, and once they were sensitive enough to detect something cooler than the exhaust gases of a jet engine (early models had to be fired from behind the target; tricky in a dogfight) these proved very difficult customers to deal with. Later developments of the ‘beam riding’ system had the missile track radiation reflected off the target using its own inbuilt receiver, doing away with the decreasing accuracy of an expanding beam, in a system known as semi-active radar homing; another modern guidance technique, used against radar installations or communications hubs, is simply to follow the trail of radiation they emit and explode upon hitting something. Most modern missiles, however, use fully active radar homing (ARH), whereby the missile carries its own radar system capable of sending out a beam to find a target, locking onto its ever-changing position, steering itself to follow the reflected radiation and doing the final, destructive deed entirely of its own accord.
The greatest advantage to this is what is known as the ‘fire and forget’ capability, whereby one can fire the missile and start doing something else whilst safe in the knowledge that somebody will be exploding in the near future, with no input required from the aircraft.
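The closed loop at the heart of any homing missile — measure where the target is now, steer towards it, repeat — can be sketched in a few lines. Here is a toy 2D ‘pure pursuit’ model in Python; real seekers use cleverer steering laws such as proportional navigation, and every figure below is invented for illustration.

```python
import math

# Toy 2D homing loop: each tick, the missile re-measures the target's
# position and steers straight at it ('pure pursuit').

def pursue(missile_pos, target_pos, target_vel,
           missile_speed=300.0, dt=0.02, max_time=60.0):
    mx, my = missile_pos
    tx, ty = target_pos
    tvx, tvy = target_vel
    t = 0.0
    while t < max_time:
        dx, dy = tx - mx, ty - my
        dist = math.hypot(dx, dy)
        if dist < 15.0:                          # proximity fuse triggers
            return t
        mx += missile_speed * (dx / dist) * dt   # head for where the
        my += missile_speed * (dy / dist) * dt   # target is *right now*
        tx += tvx * dt                           # target flies on regardless
        ty += tvy * dt
        t += dt
    return None                                  # ran out of time/'fuel'

# A 300 m/s missile chasing a 200 m/s target starting over 5 km away:
time_to_hit = pursue((0, 0), (5000, 2000), (-200, 0))
```

Notice that nothing outside the loop is consulted once it starts: the launching aircraft contributes no further input, which is precisely the ‘fire and forget’ property described above.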

As missile technology has advanced, so too have the techniques for fighting back against it; dropping clouds of reflective material (‘chaff’) behind an aircraft can confuse basic radar systems, whilst dropping flares can distract heat seekers. As an ‘if all else fails’ procedure, heavy material can be dropped behind the aircraft for the missile to hit and blow up. However, only one aircraft has ever managed a totally failsafe method of avoiding missiles; the previously mentioned Lockheed SR-71A Blackbird, the fastest jet aircraft ever built, whose standard missile avoidance procedure was simply to accelerate and outrun the things. You may have noticed that I think this plane is insanely cool.

But now to drag us back to the correct time period. With the advancement of military technology and shrinking military budgets, it was realised that one highly capable jet fighter could do the work of many of more basic design, and many foresaw the day when all fighter combat would be a matter of beyond-visual-range (BVR) missile warfare. To this end, the interceptor began to evolve as a fighter concept; very fast aircraft (such as the ‘two engines and a seat’ design of the British Lightning) with a high ceiling, large missile inventories and powerful radars, aiming to intercept (hence the name) long-range bombers travelling at high altitudes. To ensure the lower skies were not left empty, the fighter-bomber also began to develop as a design; this aimed to use the natural speed of fighter aircraft to make hit-and-run attacks on ground targets, whilst keeping a smaller arsenal of missiles to engage other fighters and any interceptors that decided to come after them. Korea had made the top brass decide that dogfights were rapidly becoming a thing of the past, and that future air combat would be a war of sneaky missile delivery as much as anything; but it hadn’t yet persuaded them that fighter-bombers could ever replace carpet bombing as an acceptable strategy or focus for air warfare. It would take some years for these two fallacies to be challenged, as I shall explore in my next, and hopefully final, post on the subject.

War in Three Dimensions

Warfare has changed a lot in the last century. Horses have become redundant, guns have become reliable, machine guns have become light enough to carry, and bombs have become powerful enough to totally annihilate a small country if the guy with the button so chooses. But perhaps more significant than the way the hardware has changed is the way that warfare itself has changed; tactics and military structure have altered beyond all recognition compared to the pre-war era, and we must now fight wars whilst surrounded by a political landscape, at least in the west, that does not approve of open conflict. However, next year marks the 100th anniversary of a military innovation that not only represented a massive hardware upgrade at the time, but that has changed almost beyond recognition in the century since and has fundamentally altered the way we fight wars: the use of aeroplanes in warfare.

The skies have always been a platform to be exploited by the cunning military strategist; balloons were frequently used for messaging long before they were able to carry human observers aloft for reconnaissance, and for many years the only way of reliably sending a complicated message over any significant distance was via homing pigeon. It was, therefore, only natural that the Wright brothers had barely touched down after their first flight in ‘Flyer I’ before the first suggestions of a military application for such technology were being made. However, early attempts at powered flight could not sustain it for very long, and even subsequent improvements failed to produce anything capable of carrying a machine gun. By the First World War, aircraft had become advanced enough to make controlled, sustained, two-person flight at an appreciable height a reality, and both the Army and Navy were quick to incorporate air divisions into their structures (in the British Armed Forces, these were the Royal Flying Corps and the Royal Naval Air Service respectively). However, these air forces were initially only used for reconnaissance purposes and ‘spotting’ for the artillery to help them get their eye in; the atmosphere was quite peaceful so far above the battlefield, and pilots and observers of opposing aircraft would frequently wave to one another during the early years of the war. As time passed and the conflict grew ever bloodier, these exchanges became less friendly; before long observers would carry supplies of bricks into the air with them and attempt to throw them at enemy aircraft, and the Germans even went so far as to develop steel darts that could reportedly split a man in two; whilst almost impossible to aim in a dogfight, these darts were incredibly dangerous for those on the ground.
By 1916 aircraft had grown advanced enough to carry bombs, enabling a (slightly) more precise method of destroying enemy targets than artillery, and before long both sides could equip these bombers with turret-mounted machine guns for the observers to fire on other aircraft; given that the aircraft of the day were basically wire-and-wood cages covered in fabric, these guns could cause vast amounts of damage, and the men within the planes had practically zero protection (and no parachutes either, since the British top brass believed these might encourage cowardice). To further protect their bombers, both sides began to develop fighter aircraft as well; smaller, usually single-man planes with fixed machine guns operated by the pilot (and which used a clever mechanical synchronisation gear to fire through the propeller arc; earlier attempts at doing this without blowing the propeller to pieces had simply consisted of putting armour plating on the backs of the propeller blades, which not infrequently caused bullets to bounce back and hit the pilot). It wasn’t long before these fighters were given more varied orders, ranging from trench strafing to offensive patrols (where they would actively go and look for other aircraft to attack). Perhaps the most dangerous of these objectives was balloon strafing; observation balloons were valuable pieces of reconnaissance equipment, and bringing one down generally required a pilot to get past the large escort of fighters that accompanied it. Towards the end of the war, the forces began to realise just how central to their tactics air warfare had become, and in 1918 the RFC and RNAS were combined to form the Royal Air Force, the first independent air force in the world.
The RAF marked its inception three weeks later when German air ace Manfred von Richthofen (aka the Red Baron), who had amassed 80 confirmed victories despite frequently flying against superior numbers or hardware, was shot down. Von Richthofen was flying close to the ground at the time, in pursuit of an aircraft, and analysis of the shot that killed him suggests he was hit by a ground-based AA gunner rather than the Canadian fighter pilot credited with downing him; exactly who fired the fatal shot remains a mystery.

By the time the Second World War rolled around, things had changed somewhat; in place of wire-and-fabric biplanes, sleeker metal monoplanes were in use, with more powerful and efficient engines making air combat a faster affair. Air raids could be conducted over far greater distances since more fuel could be carried, and this proved well suited to the style of warfare the war generated; rather than the largely unmoving battle lines of the First World War, the early years of WW2 consisted of countrywide occupation in Europe, whilst the battlegrounds of North Africa and Soviet Russia were dominated by tank warfare and moved far too fluidly for frontline air bases to be safe. Air power was never the decisive factor in those land campaigns; but over Britain and the continent, air warfare reigned supreme. As the German forces dominated mainland Europe, they launched wave after wave of long-distance bombing campaigns at Britain in an effort to gain air superiority and cripple the Allies’ ability to fight back when they attempted to cross the Channel and invade. However, the British had, unbeknownst to the Germans, perfected their radar technology, and were thus able to use their relatively meagre force of fighters to greatest effect against the German bombing assault. This, combined with some very good planes and flying on the part of the British and an inability to choose the right targets on the part of the Germans, allowed the Battle of Britain to swing in favour of the Allies and turned the tide of the war in Europe. In the later years of the war, the Allies turned the tables on a German military crippled by the Russian campaign after the loss at Stalingrad, and began their own orchestrated bombing campaign.
With the increase in anti-aircraft technology since the First World War, bombers were forced to fly higher than ever before, making it far harder to hit their targets; thus both sides developed the tactic of ‘carpet bombing’, whereby they would simply load up as big a plane as they could with as many bombs as it could carry and drop them all over an area in the hope of at least one of the bombs hitting the intended target. This imprecise tactic was only moderately successful when it came to the destruction of key military targets, and was responsible for the vast scale of the damage both sides’ bombing campaigns inflicted on cities. In the war in the Pacific, where space on aircraft carriers was at a premium and Lancaster Bombers would have been impractical, the combatants stuck with dive bombers instead, but such attacks were very risky and there was still no guarantee of a successful hit. By the end of the war, air power was rising to prominence as possibly the most crucial theatre of combat, but we were reaching the limits of what our hardware was capable of; our propeller-driven, straight-winged fighter aircraft seemed incapable of breaking the sound barrier, and our bombing attacks couldn’t safely hit any target less than a mile wide. Something was clearly going to have to change; and next time, I’ll investigate what did.
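The arithmetic behind carpet bombing is worth a quick sketch: if each unguided bomb has some small independent chance of hitting, the chance that at least one of n bombs hits is 1 - (1 - p)^n, and a tiny per-bomb probability forces n into the hundreds or thousands. A minimal Python illustration, with entirely made-up figures:

```python
# Chance that at least one of n independently-aimed bombs hits, given a
# per-bomb hit probability p. The 1-in-500 figure is invented for illustration.

def at_least_one_hit(p, n):
    return 1 - (1 - p) ** n

small_raid = at_least_one_hit(0.002, 10)    # ten bombs: still only ~2%
carpet = at_least_one_hit(0.002, 1000)      # a thousand bombs: ~86%
```

Hence loading up as big a plane as possible with as many bombs as it could carry: sheer quantity was the only way to buy probability.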

The Alternative Oven

During the Second World War, the RAF pioneered the use of radar to detect incoming Luftwaffe raids. One of the key pieces of equipment used in the construction of these radars was the cavity magnetron, which uses a magnetic field to propel high-speed electrons and generate the kind of high-powered radio waves needed for such a technology to be successful over long distances. The British government shared this technology with its American allies during the war, granting permission for Raytheon, a private American enterprise, to mass-produce magnetrons. Whilst experimenting with such a radar set in 1945, a Raytheon engineer called Percy Spencer reached for the chocolate bar in his pocket, and discovered it had melted. He later realised that the electromagnetic radiation generated by the radar set had been the cause of this heating effect, and thought that such technology could be put to a different, non-military use- and so the microwave oven was born.

Since then, the microwave has become the epitome of western capitalism’s golden age; the near-ubiquitous kitchen gadget, usually in the traditional white plastic casing, designed to make certain specific aspects of a process already technically performed by another appliance (the oven) that bit faster and more convenient. As such, it has garnered its fair share of hate over the years, shunned by serious foodies as a taste-ruining harbinger of doom to one’s gastric juices that wouldn’t be seen dead in any serious kitchen. The simplicity of the microwaving process (especially given that there is frequently no need for a pot or container) has also led to the rise of microwavable meals, designed to take the concept of ultra-simple cooking to its extreme by creating an entire meal from a few minutes in the microwave. However, as anyone who’s ever attempted a bit of home cooking will know, such a process does not naturally occur quite so easily, and thus these ready meals generally require large quantities of what is technically known as ‘crap’ in order to function as meals. This low-quality food has become distinctly associated with the microwave itself, further enhancing its image as a tool for the lazy and for the kind of societal dregs that the media like to portray in scare statistics.

In fairness, this is hardly the device’s fault, and it is a pretty awesome one. Microwave ovens work thanks to the polarity of water molecules; each consists of a positively charged end (where the hydrogen part of H2O is) and a negatively charged end (where the electron-rich oxygen bit is). Electromagnetic waves, such as the microwaves after which the oven takes its name, carry an electric field that (being as they are, y’know, waves) oscillates (aka ‘wobbles’) back and forth. This field wobbling back and forth causes the water molecules to oscillate too (technically it works with other polarised molecules as well, but there are very few other liquids consisting of polarised molecules that one encounters in cookery; this is why microwaves can heat up stuff without water in it, but don’t do it very well). This oscillation means that the molecules gain kinetic energy from the microwave radiation; it is often said that the frequency of the radiation is chosen to closely match the resonant frequency of the water molecules, making this energy transfer very efficient*. A microwave oven works out as a bit over 60% efficient overall (most of the losses occurring in the aforementioned magnetron used to generate the microwaves), which is exceptional compared to a conventional oven, most of whose energy goes into heating the oven cavity and the air inside it rather than the food. The efficiency of an oven really depends on the meal and how it’s being used, but for small meals or for reheating cold (although not frozen, since ice molecules aren’t free to vibrate as much as those in liquid water) food, the microwave is definitely the better choice. It helps even more that microwaves are really bad at penetrating the oven’s metal walls (and the metal mesh in the glass door), meaning they tend to bounce around until they hit the food, and very little of the energy is lost to the surroundings once it’s been emitted.
However, if nothing is placed in the microwave then these waves are not ‘used up’ heating food, and much of their energy ends up reflected back into the magnetron, causing it to overheat and doing the device some serious damage.

*This does in fact appear to be something of a myth; the standard operating frequency (2.45 GHz) actually sits below liquid water’s peak absorption, and deliberately so: if the food absorbed the radiation too readily, all the energy would be dumped into the outermost layer and the waves would never penetrate to the middle.
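Those efficiency figures translate directly into heating times via the specific heat of water. A quick back-of-the-envelope check in Python; the 60% efficiency is the rough figure quoted above, while the 800 W rating and mug of water are assumed examples:

```python
# Time to heat water: energy needed = mass * specific heat * temperature rise,
# delivered at (input power x efficiency).

SPECIFIC_HEAT_WATER = 4186.0  # joules per kg per degree C

def heating_time_seconds(mass_kg, delta_temp_c, input_power_w, efficiency):
    energy_needed = mass_kg * SPECIFIC_HEAT_WATER * delta_temp_c
    return energy_needed / (input_power_w * efficiency)

# A 300 g mug of water from 10 C to 70 C in an 800 W oven at 60% efficiency:
t = heating_time_seconds(0.3, 60, 800, 0.6)  # just over 2.5 minutes
```

Which matches everyday experience, and shows why for a single small portion the microwave beats waiting for a full-sized oven to come up to temperature.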

This use of microwave radiation to heat food has some rather interesting side-effects; up first is the oft-cited myth that microwaves cook food ‘from the inside out’. This isn’t actually true: although the inside of a piece of food may be slightly better insulated than the outside, the microwaves should transfer energy to all of the food at a roughly equal rate, and if anything the outside gets more heating, since it is hit by the microwaves first. The observed effect is down to the chemical makeup of a lot of the food put in a microwave, which tends to hold the majority of its water content beneath the surface; this leaves the surface relatively cool and crusty, with little water to heat it, and the inside scaldingly hot. The use of high-power microwaves also means that just about everyone in the country has in their home a death ray capable of quite literally boiling someone’s brain if the rays were directed towards them (hence why dismantling a microwave is semi-illegal, as I understand it), but it also means that everyone has ample opportunity to do some seriously cool stuff with it, so long as they don’t intend to use the microwave again afterwards and have access to a fire extinguisher. Note that whilst this is dangerous, rather stupid and liable to land you in some quite weird situations, nothing is a more sure-fire indicator of a scientific mind than the instinct to go ‘what happens when…’ whilst eyeing the powerful EM radiation emitter sitting in your kitchen. For the record, I did not say that this was a good idea…