Fire and Forget

By the end of my last post, we’d got as far as the 1950s in terms of the development of air warfare, an interesting period of transition, particularly for fighter technology. With the development of the jet engine and supersonic flight, the potential of these faster, lighter aircraft was beginning to outstrip that of the slow, lumbering bombers they ostensibly served. Lessons were quickly learned during the chaos of the Korean War, the first major conflict of the second half of the twentieth century, during which American and Allied forces fought a back-and-forth swinging conflict against the North Koreans and Chinese. Air power proved a key feature of the war; the new American jet fighters took apart the North Korean air force, which consisted mainly of old propeller-driven aircraft, as UN forces swept north past the 38th parallel and toward the Chinese border. But when China joined in, they brought with them a fleet of Soviet MiG-15 jet fighters, and suddenly the US and her allies were on the retreat. The American-led UN campaign did embark on a bombing effort using B-29 bombers, utterly annihilating vast swathes of North Korea and persuading the high command that carpet bombing was still a legitimate strategy, but it was the fast aerial fighter combat that really stole the show.

One of the key innovations that won the Allies the Battle of Britain during WWII proved, during the Korean War, particularly valuable in the realm of air warfare: radar. British radar technology during the war relied on massive-scale machinery to detect the approximate positions of incoming German raids, but post-war developments had refined it to use far smaller equipment to identify objects far more precisely, albeit over a shorter range. This was then combined with rapidly advancing electronics and the deadly, but so far hard to aim, rocketry developed during the two world wars (most notably on the German V2) to create a new weapon: the guided missile. The air-to-air missile (AAM) subsequently proved both more accurate and more destructive than the machine guns previously used for air combat, whilst air-to-surface missiles (ASMs) began to offer fighters the ability to take out ground targets in the same way as bombers, but with far superior speed and efficiency. With the development of the guided missile, fighters began to gain firepower to match their airspeed and agility.

The earliest missiles were ‘beam riders’, which used radar equipment attached to an aircraft or (more typically) a ground-based platform to aim at a target, leaving a small bit of electronics, a rocket motor and some fins on the missile to follow the radar beam. These were somewhat tricky to use, especially as many early radar sets had to be aimed manually rather than ‘locking on’ to a target, and the beam tended to fade over long range, so as technology improved post-Korea beam riders were largely abandoned; during the Korean War itself, though, these weapons proved deadly, accurate alternatives to machine guns, capable of attacking from great range and many angles. Most importantly, the technology showed great potential for improvement. As more sensitive radiation-detecting equipment was developed, IR-seeking missiles (aka heat seekers) appeared, and once they were sensitive enough to detect something cooler than the exhaust gases of a jet engine (which had required all such missiles to be fired from behind; tricky in a dogfight) they proved tricky customers to deal with. A later development of the ‘beam riding’ idea, known as semi-active radar homing, had the launching platform illuminate the target with its radar whilst the missile’s own inbuilt receiver tracked the radiation reflected off it, doing away with the decreasing accuracy of an expanding beam; another modern guidance technique, used against radar installations or communications hubs, is simply to follow the trail of radiation they emit and explode upon hitting something. Most modern missiles, however, use fully active radar homing (ARH), whereby the missile carries its own radar system capable of sending out a beam to find a target, locking onto its ever-changing position, steering itself to follow the reflected radiation and doing the final, destructive deed entirely of its own accord.
The greatest advantage to this is what is known as the ‘fire and forget’ capability, whereby one can fire the missile and start doing something else whilst safe in the knowledge that somebody will be exploding in the near future, with no input required from the aircraft.

As missile technology has advanced, so too have the techniques for fighting back against it; dropping reflective material (chaff) behind an aircraft can confuse some basic radar systems, whilst dropping flares can distract heat seekers. As an ‘if all else fails’ procedure, heavy material can be dropped behind the aircraft for the missile to hit and blow up. However, only one aircraft has ever managed a totally failsafe method of avoiding missiles: the previously mentioned Lockheed SR-71A Blackbird, the fastest aircraft ever, whose standard missile avoidance procedure was simply to speed up and outrun the things. You may have noticed that I think this plane is insanely cool.

But now to drag us back to the correct time period. With the advancement of military technology and shrinking military budgets, it was realised that one highly capable jet fighter could do the work of many of more basic design, and many foresaw the day when all fighter combat would consist of beyond-visual-range (BVR) missile warfare. To this end, the interceptor began to evolve as a fighter concept: very fast aircraft (such as the ‘two engines and a seat’ design of the British Lightning) with a high ceiling, large missile inventories and powerful radars, which aimed to intercept (hence the name) long-range bombers travelling at high altitudes. To ensure the lower skies were not left empty, the fighter-bomber also began to develop as a design; this aimed to use the natural speed of fighter aircraft to make hit-and-run attacks on ground targets, whilst keeping a smaller arsenal of missiles to engage other fighters and any interceptors that decided to come after them. Korea had made the top brass decide that dogfights were rapidly becoming a thing of the past, and that future air combat would be a war of sneaky delivery of missiles as much as anything; but it hadn’t yet persuaded them that fighter-bombers could ever replace carpet bombing as an acceptable strategy or focus for air warfare. It would take some years for these two fallacies to be challenged, as I shall explore in my next, hopefully final, post on the subject.


Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit whose components are not separate, discrete parts; in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC everything is fabricated in the same place and assembled as one. The main advantage of this is that since the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. ICs generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required of their manufacture surely makes it one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which contains one or more transistors. Computers are electronic, meaning the only thing they understand is electricity; for the sake of simplicity and reliability, this takes the form of whether the current in a given memory circuit is ‘on’ or ‘off’. If the switch is on, the circuit represents a 1; if it is off, a 0. These memory circuits are generally grouped together, so each group holds an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary, and is sometimes thought of as the simplest form of counting. (On a hard disk, patches of magnetised material represent the binary information rather than memory circuits.)

Each little memory circuit, with its simple on/off value, represents one bit of information. 8 bits grouped together forms a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data that a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
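To make this concrete, here’s a minimal Python sketch (the particular bit pattern is purely illustrative) of how one byte’s worth of on/off values can stand for quite different things depending on how the computer is told to interpret it:

```python
# Eight on/off "memory circuits" grouped together: one byte.
bits = [0, 1, 0, 0, 0, 0, 0, 1]

# Read the same pattern as a plain number...
value = int("".join(str(b) for b in bits), 2)
print(value)       # 65

# ...or as a character (using the ASCII convention):
print(chr(value))  # 'A'
```

The same eight ones and zeroes could equally well encode a pixel’s brightness or part of a machine instruction; only the surrounding context gives the pattern its meaning.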

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. A gate takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into an output value. There are three basic types: AND gates (if both inputs equal 1, output equals 1; otherwise, output equals 0), OR gates (if either input equals 1, output equals 1; if both inputs equal 0, output equals 0), and NOT gates (if input equals 1, output equals 0; if input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and) or XOR gates (exclusive OR: if exactly one input equals 1, output equals 1; if both inputs equal 1 or both equal 0, output equals 0). A computer’s CPU (central processing unit) contains millions of these, connected up in such a way as to link the various parts of the computer together appropriately, translate the instructions in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
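The truth tables above can be sketched in a few lines of Python (treating 1 and 0 as the ‘on’/‘off’ inputs), including how the basic gates combine to build the others:

```python
def NOT(a):
    return 1 - a   # 1 -> 0, 0 -> 1

def AND(a, b):
    return a & b   # 1 only when both inputs are 1

def OR(a, b):
    return a | b   # 1 when either input is 1

# Combinations of the basic gates give the rest:
def NAND(a, b):
    return NOT(AND(a, b))

def XOR(a, b):
    # 1 only when the inputs differ: "either is 1" AND "not both are 1"
    return AND(OR(a, b), NAND(a, b))

# Print the full truth table for each two-input gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NAND(a, b), XOR(a, b))
```

In real hardware each of these is a handful of transistors rather than a function call, but the input-to-output behaviour is exactly the same.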

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and to say that areas A, B and C of the screen are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
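As a sketch of what the ALU’s ‘counting’ actually involves at gate level (this is the textbook half-adder/full-adder construction, not a description of any particular chip): adding two bits needs only an XOR gate for the sum and an AND gate for the carry, and chaining such adders together adds whole binary numbers.

```python
def half_adder(a, b):
    return a ^ b, a & b   # sum bit (XOR), carry bit (AND)

def full_adder(a, b, carry_in):
    # Two half adders plus an OR gate handle the incoming carry.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x, y):
    """Ripple-carry addition of two equal-length bit lists (least significant bit first)."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 (bits 1,1,0) + 5 (bits 1,0,1), least significant bit first:
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1], i.e. 8 in binary
```

So ‘asking the ALU what the old shade plus the highlighting is’ ultimately bottoms out in cascades of gates exactly like these, just etched in silicon and millions of times over.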

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…