Flying Supersonic

Last time (OK, quite a while ago actually), I explained the basic principle of how wings generate lift when travelling at subsonic speeds (from the Newtonian end of things; we can explain it using pressure, but that’s more complicated), arguably the most important principle of physics affecting our modern world. However, as the Second World War came to an end and aircraft got faster and faster, problems began to appear.

The first aircraft to approach the speed of sound (Mach 1, or around 700-odd miles an hour depending on air temperature) were WWII fighter aircraft; most only had top speeds of around 400-500mph or so in level flight, but could approach the magic number when going into a steep dive. When they did so, they found their aircraft began suffering from severe control issues and would shake violently; there are stories of Japanese Mitsubishi Zeroes that would plough into the ground at full speed, unable to pull out of a deathly transonic dive. Subsequent aerodynamic analyses of these aircraft suggest that if any of them had in fact broken the sound barrier, they would most likely have been shaken to pieces. For this reason, the concept of ‘the sound barrier’ developed.

The problem arises from the Doppler effect (which is also, incidentally, responsible for the stellar red-shift that tells us our universe is expanding), and the fact that as an aircraft moves it emits pressure waves, carried through the air by molecules bumping into one another. Since this is exactly the same method by which sound propagates in air, these pressure waves move at the speed of sound, and travel outwards from the aircraft in all directions. If the aircraft is travelling forwards, then each time it emits a pressure wave it will be a bit further forward than the centre of the pressure wave it emitted last, causing the waves in front of the aircraft to bunch closer together and the waves behind it to spread out. This is the Doppler effect.

Now, when the aircraft travels very quickly, this effect becomes especially pronounced, with the wave fronts compressed very close to one another. When the aircraft reaches the speed of sound, the same speed at which the waves propagate, it catches up with the wave fronts themselves, and they all pile up in the same place just in front of the aircraft. There they build up on top of one another into a band of high-pressure air, which is experienced as a shockwave; the pressure drop behind this shockwave can cause water to condense out of the air, and is responsible for pictures such as these.
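
For the curious, this bunching is easy to see with a bit of arithmetic. Below is a minimal sketch (my own illustration, not something from the original post) tracking where the leading edge of each emitted wave front sits along the flight path after one second; at Mach 1, every edge lands in exactly the same place:

```python
# A toy illustration: an aircraft flying at Mach number M emits a pressure
# pulse every 0.1 s. We compute where the leading edge of each expanding
# wave front sits after 1 s, and watch the fronts bunch as M approaches 1.

C = 340.0  # speed of sound, m/s (roughly, at sea level)

def leading_edges(mach, total_time=1.0, n_pulses=10):
    """Position (m) of the front edge of each pulse along the flight path."""
    v = mach * C
    edges = []
    for k in range(n_pulses):
        t = k * total_time / n_pulses    # moment this pulse was emitted
        centre = v * t                   # aircraft position at that moment
        radius = C * (total_time - t)    # how far the pulse has spread since
        edges.append(centre + radius)    # front edge of the pulse
    return edges

for mach in (0.5, 0.9, 1.0):
    print(f"M = {mach}: {[round(x) for x in leading_edges(mach)]}")
# At M = 0.5 the fronts are well spaced, at M = 0.9 they crowd together,
# and at M = 1.0 every front edge sits at exactly 340 m: the shockwave.
```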

But the shockwave does not just occur at Mach 1; we must remember that an aerofoil is shaped to make air travel faster over the top of the wing than the surrounding airflow. This means the airflow over parts of the wing reaches supersonic speed before the rest of the aircraft does, causing shockwaves to form over the wings at a lower speed; the speed at which this first occurs is known as the critical Mach number. Since these shockwaves are regions of high pressure, Bernoulli’s principle tells us they cause air to slow down dramatically; this contributes heavily to aerodynamic drag, and is part of the reason why such shockwaves can cause major control issues. Importantly, we must note that shockwaves always slow the air down to subsonic speeds, since the shockwave is generated at the point where all the pressure waves build up, and so acts as a barrier between the super- and sub-sonic portions of the airflow.

However, there is another problem with this slowing of the airflow: it leaves the air behind the shockwave at a higher pressure than the supersonic air in front of it. Since there is always a force from high pressure to low pressure, this can cause (at speeds sufficiently far above the critical Mach number) parts of the airflow close to the wing (the boundary layer, which also experiences surface friction from the wing) to change direction and start travelling forwards. This makes the boundary layer recirculate, forming a turbulent portion of air that generates very little lift and quite a lot of drag, and causes the rest of the airflow to separate from the wing surface; an effect known as boundary layer separation (or Mach stall, since it causes similar problems to a regular stall), responsible for even more problems.
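
As an aside, the standard normal-shock relations from textbook aerodynamics (not derived in the post itself) make both of those claims precise: air always exits a shock subsonic, and always at a higher pressure. A quick sketch, assuming a perfect gas with γ = 1.4 for air:

```python
# Standard normal-shock relations for a perfect gas (textbook formulas):
# given the Mach number M1 of supersonic air entering a shock, find the
# subsonic Mach number M2 behind it and the pressure jump across it.
GAMMA = 1.4  # ratio of specific heats for air

def behind_shock(m1):
    """Return (M2, p2/p1) for a normal shock met at Mach m1 > 1."""
    m2_sq = (1 + 0.5 * (GAMMA - 1) * m1**2) / (GAMMA * m1**2 - 0.5 * (GAMMA - 1))
    p_ratio = 1 + 2 * GAMMA / (GAMMA + 1) * (m1**2 - 1)
    return m2_sq ** 0.5, p_ratio

for m1 in (1.1, 1.5, 2.0):
    m2, pr = behind_shock(m1)
    print(f"M1 = {m1}: M2 = {m2:.2f} (subsonic), p2/p1 = {pr:.2f}")
# However fast the incoming flow, M2 always comes out below 1, and the
# pressure behind the shock is always higher -- exactly the adverse
# pressure gradient that can peel the boundary layer off the wing.
```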

The practical upshot of all of this is that flying at transonic speeds (close to and around the speed of sound) is problematic and inefficient; but once we push past Mach 1 and start flying at supersonic speeds, things change somewhat. The shockwave over the wing moves to its trailing edge, as all of the air flowing over the wing is now travelling at supersonic speeds, and ceases to pose problems; but now we face the issues posed by a bow wave. At subsonic speeds, the pressure waves being emitted by the aircraft help to push air out of the way, meaning it is generally deflected around the wing rather than just hitting it and slowing down dramatically; at supersonic speeds, however, we leave those pressure waves behind us and lose this advantage. Supersonic air therefore hits the front of the wing and is slowed down or even stopped, creating a portion of subsonic air in front of the wing and (you guessed it) another shockwave between this and the supersonic air ahead. This is known as a bow wave, and once again generates a ton of drag.

We can combat the formation of the bow wave by using a supersonic aerofoil; these are diamond-shaped, rather than the cambered subsonic aerofoils we are more used to, and generate lift in a different way (the ‘skipping stone’ theory is actually rather a good approximation here, except that we use the force generated by the shockwaves above and below an angled wing to generate lift). The sharp leading edge of these wings prevents bow waves from forming, and such aerofoils are commonly used on missiles; but they are inefficient at subsonic speeds and make takeoff and landing nigh-on impossible.
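
For a feel for the numbers, linearised supersonic thin-aerofoil theory (a textbook result, not something from the post) says a thin plate at a small angle of attack generates lift purely from the pressure difference set up by the shock and expansion waves:

```python
# Linearised supersonic thin-aerofoil theory (Ackeret's textbook result):
# a flat plate at small angle of attack alpha gives a lift coefficient of
# c_l = 4 * alpha / sqrt(M^2 - 1), with no camber involved at all.
import math

def supersonic_lift_coeff(alpha_deg, mach):
    """Lift coefficient of a thin flat plate at small incidence, M > 1."""
    alpha = math.radians(alpha_deg)
    return 4 * alpha / math.sqrt(mach**2 - 1)

print(f"c_l = {supersonic_lift_coeff(5, 2.0):.3f}")  # 5 degrees at Mach 2
```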

The other way to get round the problem is somewhat neater; as this graphic shows, when we go past the speed of sound the shockwave created by the aeroplane is not flat any more, but forms an angled cone shape- the faster we go, the steeper the cone angle (the ‘Mach angle’ is given by the formula sin(a)=v/c, for those who are interested). Now, if we remember that shockwaves cause the air behind them to slow down to subsonic speeds, it follows that if our wings lie just behind the shockwave, the air passing over them at right angles to the shockwave will be travelling at subsonic speeds, and the wing can generate lift perfectly normally. This is why the wings on military and other high-speed aircraft (such as Concorde) are ‘swept back’ at an angle; it allows them to generate lift much more easily when travelling at high speeds. Some modern aircraft even have variable-sweep wings (or ‘swing wings’), which can be pointed out flat when flying subsonically (which is more efficient) before being tucked back into a swept position for supersonic flight.
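
To put numbers on that, here is a tiny sketch (my own, with ‘sweep’ expressed in one illustrative way, measured from the straight-out position) of how the cone tightens with speed:

```python
# A quick check of the Mach-angle formula sin(a) = c/v = 1/M: the faster
# the aircraft, the narrower the shock cone, and the further back a wing
# must be swept to stay inside it.
import math

def mach_angle_deg(mach):
    """Half-angle of the shock cone, in degrees, for a given Mach number."""
    return math.degrees(math.asin(1.0 / mach))

for mach in (1.1, 1.5, 2.0, 3.0):
    angle = mach_angle_deg(mach)
    print(f"M = {mach}: cone half-angle = {angle:.1f} degrees "
          f"(sweep of at least {90 - angle:.1f} degrees from straight)")
# Mach 2 gives a 30-degree cone: a wing swept back beyond 60 degrees
# sits inside it, in the slower air behind the shock.
```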

Aerodynamics is complicated.

The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on the fact that I had done a lot of related research when studying the installation of Linux. I feel that I should now come clean and point out that between the time of that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times already, which has served to teach me even more about exactly how it works, and how it differs from its more mainstream competitors. So, since I don’t have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and upon which Torvalds based his design (note- whilst Torvalds’ first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp ‘ih’ sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use, modify and distribute the software associated with it, so long as they stayed in keeping with the license’s terms (ie any redistributed version, modified or not, must remain just as free- nobody can take the software proprietary for personal profit). The software produced as part of the GNU project was numerous (including a still widely-used compiler) and did eventually come to fruition as an operating system, but it never caught on and the project was, as regards achieving its final aim, a failure (although the GNU General Public License remains the most-used software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, using another Unix clone, MINIX, to write his code in and basing it on MINIX’s structure. Initially, he hadn’t been intending to write a complete operating system at all, but rather a type of display interface called a terminal emulator- a program that mimics the behaviour of an old-fashioned hardware terminal, letting you type text commands into a machine (I don’t really get it either- it’s hard to find information a newbie like me can make good sense of). Strictly speaking, his terminal emulator was a program existing independently of any operating system and acting almost like one in its own right, running directly on the computer’s hardware. As such, the two are somewhat related, and it wasn’t long before Torvalds ‘realised’ he had written the kernel of an operating system; since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on a kind of early internet forum called Usenet, saying that he was developing an operating system that was “starting to get ready”, and asking for feedback concerning where MINIX was good and where it was lacking, “as my OS resembles it somewhat”. He also, interestingly, said that his OS “probably never will support anything other than AT-harddisks”. How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 being released last year) and, despite some teething troubles with those who have considered it old-fashioned, those who thought MINIX code had been stolen (rather than merely borrowed from), and Microsoft (who have since turned tail and are now one of the largest contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman’s open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the efficiency of the system and minimise its vulnerability to viruses- the latter being one of the key features that marks Linux (and OS X) out over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command quite a high percentage of your CPU capacity just to keep itself running, not counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep everything running faster. The kernel’s open source roots mean it is easy to modify if you have the technical know-how, and the community of followers surrounding it means that the solution to any problem you have with a standard distribution is usually only a few clicks away. Disadvantages include a certain lack of user-friendliness for the uninitiated or not-so-computer-literate user, since a lot of programs require an instruction typed into the command line; far fewer programs, especially commercial, professional ones, than Windows; an inability to process media as well as OS X (which is the main reason Apple computers appear to exist); and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many ‘computer people’ consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or ‘distros’ as they’re known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager, and desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components), or the package manager (that covers program installation, removal and editing), tend to be fairly ubiquitous (GNOME or KDE are common GUIs, and Synaptic the most typical package manager), but different people like their operating system to run in slightly different ways. Therefore, variations on these other components are bundled together with the kernel to form a distro, a complete package that will run as an operating system in exactly the same fashion as you would encounter with Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu’s older brother), Red Hat, Mandriva and Crunchbang- some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction- the sort of thing you might learn as a budding amateur who wants to figure out how to mess around with these things and is interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself, but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.
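
As a tiny illustration of the ‘patterns of off and on’ point (my example, not from the earlier posts): the same eight switches can be read as a number or as a letter, depending entirely on what the program decides they mean.

```python
# Eight switches, read three different ways.
pattern = 0b01001010          # off-on-off-off-on-off-on-off
print(pattern)                # read as a number: 74
print(chr(pattern))           # read as a character: 'J'
print(format(pattern, '08b')) # the raw switch pattern: 01001010
```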

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press on our keyboard have symbols on them, that these symbols mean something to us, and that the machine is so built as to produce the same symbols on the monitor when we press them; but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts words to instructions. Built into the workings of each language is a list of ‘words’, each with a corresponding, but entirely different, string of binary data associated with it that represents the appropriate set of ‘ons and offs’ that will get the computer to perform the correct task. This conversion works in one of two ways: with an ‘interpreter’, the program is stored just as words and is converted to ‘machine code’ piece by piece as it is accessed from memory and run; but the more common form is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
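
To make the distinction concrete, here is a toy sketch (entirely my own invention- the two-word ‘language’ here is made up) of an interpreter and a compiler for the same little program:

```python
# A made-up language with two words: "ADD n" adds n to a running total,
# "PRINT" prints it. The interpreter parses each written line every time
# it runs; the compiler parses once, up front, into ready-to-run steps.

source = ["ADD 2", "ADD 3", "PRINT"]

def interpret(lines):
    """Parse and execute each line of source text as we reach it."""
    total = 0
    for line in lines:
        if line.startswith("ADD"):
            total += int(line.split()[1])
        elif line == "PRINT":
            print(total)

def compile_to_ops(lines):
    """Parse the whole source once into a list of executable operations."""
    ops = []
    for line in lines:
        if line.startswith("ADD"):
            n = int(line.split()[1])
            ops.append(lambda total, n=n: total + n)  # no parsing at run time
        elif line == "PRINT":
            def print_op(total):
                print(total)
                return total
            ops.append(print_op)
    return ops  # the written source can now be thrown away

interpret(source)                  # the interpreter route: prints 5
total = 0
for op in compile_to_ops(source):  # the 'compiled' route: also prints 5,
    total = op(total)              # but no text is parsed while running
```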

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades, and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up, determining exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetised ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU’s performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise or are better at dealing with different things, work with varying degrees of efficiency, and are more or less vulnerable to virus attack.

However, perhaps the most vital things that modern OSs do on our home computers are the things that, at first glance, seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not, theoretically, be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it can be got at quickly) to the hard disk (where it can’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to shuffle large amounts of data back and forth is why they run so slowly); it must also cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
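
To illustrate the scheduling trick, here is a crude toy (my own sketch, nothing like real kernel code) in which three fake ‘processes’ share one CPU via round-robin switching:

```python
# Round-robin scheduling in miniature: several 'processes', one CPU, and
# rapid switching that gives the illusion all of them run at once.
from collections import deque

def counter(name, upto):
    """A fake 'process': counts to upto, pausing after each step."""
    for i in range(1, upto + 1):
        yield f"{name}: {i}"

# Three processes waiting in the run queue.
run_queue = deque([counter("boot", 3), counter("minesweeper", 3),
                   counter("music", 3)])

while run_queue:
    process = run_queue.popleft()
    try:
        print(next(process))       # run it for one 'time slice'
        run_queue.append(process)  # not finished: back of the queue
    except StopIteration:
        pass                       # finished: drop it from the queue
# The output interleaves boot/minesweeper/music steps -- to a (very slow)
# observer it looks as though all three ran simultaneously.
```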

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.