The Development of Air Power

By the end of the Second World War, the air was the key battleground of modern warfare; with control of the air, one could move small detachments of troops deep behind enemy lines, gather valuable reconnaissance and, of course, bomb one’s enemies into submission/total annihilation. But the air was also the newest theatre of war, meaning that there was enormous potential for improvement in this field. Given the destructive capabilities of air power, it quickly became obvious that whoever was best able to enhance their strength in the air would have the upper hand in the wars of the latter half of the twentieth century, and as the Cold War began hotting up (no pun intended), engineers across the world began turning their hands to the problems of air warfare.

Take, for example, the question of speed; fighter pilots had long known that the faster plane in a dogfight gave its pilot a significant advantage, since he was able to manoeuvre quickly, chase his opponents if they ran for home and escape combat more easily. It also helped him cover more ground when chasing after slower, more sluggish bombers. However, the technology of the time favoured internal combustion engines powering propeller-driven aircraft, which limited both the range and the speed that could be achieved. Weirdly, however, the solution to this particular problem had been invented some fifteen years earlier, when a young RAF pilot called Frank Whittle patented his design for a jet engine in 1930. When he submitted the idea to the RAF, however, they referred him to the engineer A. A. Griffith, whose study of turbines and compressors had led to Whittle’s design. The reason Griffith hadn’t invented the jet engine himself was his fixed belief that jets would be too inefficient to act as practical engines on their own, and that turbines were better suited to powering propellers. He turned down Whittle’s design, which relied on the thrust of the jet exhaust itself, rather than a propeller, for propulsion, as impractical, and so the Air Ministry didn’t fund research into the concept. Some now think that, had the jet engine been taken seriously by the British, the Second World War might have been over by 1940, but as it was Whittle spent the next decade trying to finance his research and development privately, whilst fitting it around his RAF commitments. The first jet-powered aircraft to get off the ground, in 1939, was instead the German Heinkel He 178, Whittle’s patent having been allowed to lapse in the meantime; his own engine didn’t fly until 1941, by which time the desperation of war had led to governments latching onto every idea there was.

Still, the German jet fighter was not exactly a practical beast (its engines needed a complete overhaul after only a few hours’ use), and by then the war was almost lost anyway. Once the Allies got properly into their jet aircraft development after the war, they looked set to start reaching the kind of fantastic speeds that would surely herald the new age of air power. But there was a problem; the sound barrier. During the war, a number of planes had tried to break the magical speed limit of around 768 mph at sea level, aka the speed of sound (or Mach 1, as it is known today), but none had succeeded; partly this was due to the sheer engine power required (propellers get very inefficient when approaching the speed of sound, and propeller tips can actually exceed the speed of sound as they spin), but the main reason for failure lay in the plane breaking up. In particular, there was a recurring problem of the wings tearing themselves off as they approached the required speed. It was subsequently realised that as a plane approached the sound barrier, it began to catch up with the wave of sound travelling in front of it; when it got too close, the air being pushed ahead of the aircraft began to interact with this sound wave, causing shockwaves and extreme turbulence. This shockwave is what generates the sound of a sonic boom, and also the crack of a whip. Some propeller-driven WW2 fighters were able to achieve ‘transonic’ (very-close-to-Mach-1) speeds in dives, but these shockwaves generally rendered the plane uncontrollable and they frequently crashed; this effect was known as ‘transonic buffeting’. A few pilots during the war claimed to have successfully broken the sound barrier in dives and lived to tell the tale, but these claims are highly disputed.

During the late 40s and early 50s, careful analysis of transonic buffeting and similar effects yielded valuable information about the aerodynamics of breaking the sound barrier. One of the most significant, and most oft-quoted, developments concerned the shape of the wings; whilst it was discovered that the frontal shape and thickness of the wings could be seriously prohibitive to supersonic flight, it was also realised that in supersonic flight the shockwave generated was cone-shaped. Not only that, but behind the shockwave air flowed at subsonic speeds and a wing behaved as normal; the solution, therefore, was to ‘sweep back’ the wings into a triangle shape, so that they always lay ‘inside’ the cone-shaped shockwave. If they didn’t, the part of the wing travelling through supersonic air would be constantly battered by shockwaves, which would massively increase drag and potentially take the wings off the plane. In reality, it’s quite impractical to have the entire wing lying in the subsonic region (not least because a very swept-back wing tends to behave badly and not generate much lift in subsonic flight), but the sweep of a wing is still a crucial factor in designing an aircraft, depending on what speeds you want it to travel at. In the Lockheed SR-71A Blackbird, the fastest air-breathing manned aircraft ever made (it could hit Mach 3.3), the problem was partially solved by locating the wings right at the back of the aircraft to keep them clear of the shockwave cone. Most modern jet fighters can hit Mach 2.
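The geometry behind this is surprisingly simple: the shock cone’s half-angle (the ‘Mach angle’) is arcsin(1/M), so the faster you go, the narrower the cone, and the more sharply the wings must be swept to stay inside it. Purely as an illustrative back-of-the-envelope sketch (the relation itself is standard aerodynamics, but real wing design involves far more than this), in Python:

```python
import math

def mach_angle_deg(mach):
    """Half-angle of the shock cone (the 'Mach angle'), mu = arcsin(1/M)."""
    return math.degrees(math.asin(1.0 / mach))

def min_leading_edge_sweep_deg(mach):
    """Sweep (measured from the spanwise direction) needed to keep the leading
    edge 'inside' the cone, i.e. the flow component normal to it subsonic:
    M * cos(sweep) < 1  =>  sweep > arccos(1/M)."""
    return math.degrees(math.acos(1.0 / mach))

for m in (1.2, 2.0, 3.3):   # 3.3 being the SR-71 figure quoted above
    print(f"Mach {m}: cone half-angle ~{mach_angle_deg(m):.0f} deg, "
          f"sweep needed > ~{min_leading_edge_sweep_deg(m):.0f} deg")
```

At Mach 2 the cone half-angle has already shrunk to 30 degrees, implying (on this crude measure) a leading-edge sweep of 60 degrees or more, which goes some way to explaining why aircraft designed for those speeds carry such sharply swept or delta wings.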

At first, aircraft designed to break the sound barrier were rocket powered; the USA’s resident speed merchant Chuck Yeager was the first man to officially and verifiably top Mach 1, in the record-breaking rocket plane Bell X-1, although another American test pilot is claimed by some to have beaten him to the achievement by around 30 minutes in a diving XP-86 Sabre. But, before long, supersonic technology was beginning to make itself felt in the more conventional spheres of warfare; second-generation jet fighters were, with the help of high-powered jet engines, the first to engage in supersonic combat during the 50s, and as both aircraft and weapons technology advanced, the traditional roles of fighter and bomber started to come into question. And the result of that little upheaval will be explored next time…

Art vs. Science

All intellectual human activity can be divided into one of three categories: the arts, the humanities and the sciences (although these terms are not exactly fully inclusive). Art here covers everything from the painted medium to music, everything that we humans do that is intended to be creative and make our world as a whole a more beautiful place to live in. The precise definition of ‘art’ is a major bone of contention among creative types and it’s not exactly clear where the boundary lies in some cases, but here we can categorise everything intended to be artistic as an art form. Science here covers every one of the STEM disciplines: science (physics, biology, chemistry and all the rest in its vast multitude of forms and subgenres), technology, engineering (strictly speaking those two come under the same branch, but technology is too satisfying a word to leave out of any self-respecting acronym) and mathematics. Certain portions of these fields could be argued to be entirely self-fulfilling too, and others are considered by some to be beautiful, but since the two rarely overlap the title of art is never truly appropriate. The humanities are an altogether trickier bunch to consider; on one hand they are, collectively, a set of sciences, since they purport to study how the world we live in behaves and functions. However, this particular set of sciences is deemed separate because it deals less with fundamental principles of nature than with human systems, and with human interactions with the world around them; hence the title ‘humanities’. Fields as diverse as economics and geography are all blanketed under this title, and are in some ways the most interesting of sciences as they are the most subjective and accessible; the principles of the humanities can be, and usually are, encountered on a daily basis, so anyone with a keen mind and an eye for noticing the right things can usually form an opinion on them. And a good thing too, otherwise I would be frequently short of blogging ideas.

Each field has its own proponents, supporters and detractors, and all are quite prepared to defend their chosen field to the hilt. The scientists point to the huge advancements in our understanding of the universe and world around us that have been made in the last century, and link these to the immense breakthroughs in healthcare, infrastructure, technology, manufacturing and general innovation and awesomeness that have so increased our quality of life (and life expectancy) in recent years. And it’s not hard to see why; such advances have permanently changed the face of our earth (both for better and worse), and there is a truly vast body of evidence supporting the idea that these innovations have provided the greatest force for making our world a better place in recent times. The artists provide the counterpoint to this by saying that living longer, healthier lives with more stuff in them is all well and good, but without art and creativity there is no advantage to this better life, for there is no way for us to enjoy it. They can point to the developments in film, television, music and design, all the ideas of scientists and engineers tuned to perfection by artists of each field, and even the development of more classical artistic mediums such as poetry or dance, as key features of the 20th century that enabled us to enjoy our lives more than ever before. The humanities have advanced too during recent history, but their effects are far more subtle; innovative strategies in economics, new historical discoveries and perspectives and new analyses of the way we interact with our world have all come, and many have made news, but their effects tend only to be felt in the spheres of influence they directly concern- nobody remembers how a new use of critical path analysis made J. Bloggs Ltd. use materials 29% more efficiently (yes, I know CPA is technically mathematics; deal with it). As such, proponents of the humanities tend to be less vocal than those in other fields, although this may have something to do with the fact that the people who go into humanities have a tendency to be more… normal than the kind of introverted nerd/suicidally artistic/stereotypical-in-some-other-way characters who would go into the other two fields.

This bickering between the arts & sciences as to the worthiness/beauty/parentage of the other field has led to something of a divide between them; some commentators have spoken of the ‘two cultures’ of arts and sciences, leaving us with a sect of scientists who find it impossible to appreciate the value of art and beauty, thinking it almost irrelevant compared to what their field aims to achieve (to their loss, in my opinion). I’m not sure that this picture is entirely true; what may be more so, however, is the other end of the stick, those artistic figures who dominate our media who simply cannot understand science beyond GCSE level, if that. It is true that quite a lot of modern science is very, very complex in the details, but Albert Einstein is often credited with saying that if a scientific principle cannot be explained to a ten-year-old then it is almost certainly wrong, and I tend to agree with him. Even the theory behind the existence of the Higgs Boson, right at the cutting edge of modern physics, can be explained by an analogy of a room full of fans and celebrities. Oh, look it up, I don’t want to wander off topic here.

The truth is, of course, that no field can sustain a world without the others; a world devoid of STEM would die out in a matter of months, a world devoid of humanities would be hideously inefficient and appear monumentally stupid, and a world devoid of art would be the most incomprehensibly dull place imaginable. Not only that, but all three working in harmony will invariably produce the best results, as master engineer, inventor, craftsman and creator of some of the most famous paintings of all time, Leonardo da Vinci, so ably demonstrated. As such, any argument between fields as to which is ‘the best’ or ‘the most worthy’ will simply never be won, and will only ever be a futile exercise. The world is an amazing place, but the real source of that awesomeness is the diversity it contains, both in terms of nature and in terms of people. The arts and sciences are not at war, nor should they ever be; for in tandem they can achieve so much more.

Time is an illusion, lunchtime doubly so…

In the dim and distant past, time was, to humankind, a thing and not much more. There was light-time, then there was dark-time, then there was another lot of light-time; during the day we could hunt, fight, eat and try to stay alive, and during the night we could sleep and have sex. However, we also realised that there were some parts of the year with short days and colder nights, and others that were warmer, brighter and better for hunting. Being the bright sort, we humans realised that the amount of time the world spent in winter, spring, summer and autumn (fall is the WRONG WORD) was about the same each time around, and that rather than just waiting for it to warm up every time we could count how long one cycle (or year) took, so as to work out when it was going to get warm next. This enabled us to plan our hunting and farming patterns, and it became recognised that some knowledge of how the year worked was advantageous to a tribe. Eventually, this got so important that people started building monuments to the annual seasonal progression, hence such weird and staggeringly impressive prehistoric engineering achievements as Stonehenge.

However, this basic understanding of the year and the seasons was only one step on the journey, and as we moved from a hunter-gatherer lifestyle to a more civilised existence, we realised the benefits that a complete calendar could offer us, and thus began our still-continuing quest to quantify time. Nowadays our understanding of time extends to clocks accurate to the nanosecond, and to an understanding of relativity, but for a long time our greatest venture into the realm of bringing organised time into our lives was the creation of the concept of the week.

Having seven days in the week is, to begin with, a strange idea; seven is an awkward prime number, and it seems odd that we didn’t pick a number that is easier to divide and multiply by, like six, eight or even ten, as the basis for our temporal system. Six would seem to make the most sense; most of our months have around 30 days, or five six-day weeks, and 365 days a year is only one less than a multiple of six, which could surely have lent itself to some sort of religious symbolism (and leap years would be an exact multiple- even better). And it would mean a shorter week, and more time spent on the weekend, which would be really great. But no, we’re stuck with seven, and it’s all the bloody moon’s fault.

Y’see, the sun’s daily cycle is useful for measuring short-term time (night and day), and the earth’s orbit around it provides the crucial yearly change of season. However, the moon’s cycle is roughly 28 days long (about fourteen to wax, fourteen to wane, regular as clockwork), providing a nice intermediary time unit with which to divide up the year into a more manageable number of pieces than 365. Thus, we began dividing the year up into ‘moons’ and using them as a convenient reference that we could check every night. However, even a moon cycle is a bit long for day-to-day scheduling, and it proved advantageous for our distant ancestors to split it up even further. Unfortunately, 28 is an awkward number to divide into pieces, and its only factors (besides itself) are 1, 2, 4, 7 and 14. An increment of 1 or 2 days is simply too small to be useful, and a 4-day ‘week’ isn’t much better. A 14-day week would hardly be an improvement on 28 for scheduling purposes, so seven is the only number of a practical size for the length of the week. The fact that months are now mostly 30 or 31 days rather than 28, to accommodate the awkward fact that there are about 12.36 moon cycles in a year, hasn’t changed matters, so we’re stuck with an awkward 7-day cycle.
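As a quick sanity check on the arithmetic above, here is a purely illustrative Python sketch (it assumes the true lunar cycle of roughly 29.5 days, of which 28 is the convenient round-number approximation):

```python
# Divisors of 28: the only candidate 'week' lengths if a moon cycle is taken as 28 days
print([d for d in range(1, 29) if 28 % d == 0])    # [1, 2, 4, 7, 14, 28]

# How many lunar cycles actually fit in a year, using the true synodic month
days_in_year, synodic_month = 365, 29.53
print(round(days_in_year / synodic_month, 2))      # ~12.36
```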

However, this wasn’t the end of the issue for the historic time-definers (for want of a better word); there’s not much advantage in defining a seven day week if you can’t then define which day of said week you want the crops to be planted on. Therefore, different days of the week needed names for identification purposes, and since astronomy had already provided our daily, weekly and yearly time structures it made sense to look skyward once again when searching for suitable names. At this time, centuries before the invention of the telescope, we only knew of seven planets, those celestial bodies that could be seen with the naked eye; the sun, the moon (yeah, their definition of ‘planet’ was a bit iffy), Mercury, Venus, Mars, Jupiter and Saturn. It might seem to make sense, with seven planets and seven days of the week, to just name the days after the planets in a random order, but humankind never does things so simply, and the process of picking which day got named after which planet was a complicated one.

In around 1000 BC the Egyptians had decided to divide the daylight into twelve hours (because they knew how to pick a nice, easy-to-divide number), and the Babylonians then took this a stage further by dividing the entire day, including night-time, into 24 hours. The Babylonians were also great astronomers, and had thus discovered the seven visible planets- however, because they were a bit weird, they decided that each planet had its place in a hierarchy, and that this hierarchy was dictated by which planet took the longest to complete its cycle and return to the same point in the sky. This order was, for the record, Saturn (29 years), Jupiter (12 years), Mars (687 days), Sun (365 days), Venus (225 days), Mercury (88 days) and Moon (28 days). So, did they name the days after the planets in this order? Of course not, that would be far too simple; instead, they decided to start naming the hours of the day after the planets (I did say they were a bit weird) in that order, going back to Saturn when they got to the Moon.

However, 24 hours does not divide nicely by seven planets, so the planet after which the first hour of the day was named changed each day (24 leaves a remainder of 3 when divided by 7, so the cycle shifts on by three planets daily). So, the first hour of the first day of the week was named after Saturn, the first hour of the second day after the Sun, and so on. Since the list repeated itself each week, the Babylonians decided to name each day after the planet its first hour was named after, so we got Saturnday, Sunday, Moonday, Marsday, Mercuryday, Jupiterday and Venusday.
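Just to show that the scheme really does spit out that order, here is a tiny, purely illustrative Python sketch of the Babylonian hour-naming game:

```python
# The Chaldean order: the seven 'planets' ranked by how long they take
# to return to the same point in the sky, longest first.
chaldean_order = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Name every hour of seven 24-hour days in turn, cycling through the planets,
# then read off which planet owns the FIRST hour of each day.
hour = 0
for day in range(1, 8):
    print(f"Day {day}: first hour belongs to {chaldean_order[hour % 7]}")
    hour += 24   # jump ahead to the first hour of the next day

# Day 1: Saturn, Day 2: Sun, Day 3: Moon, Day 4: Mars,
# Day 5: Mercury, Day 6: Jupiter, Day 7: Venus
```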

Now, you may have noticed that these are not the days of the week we English speakers are exactly used to, and for that we can blame the Vikings. The planetary method for naming the days of the week was brought to Britain by the Romans, and when they left the Britons held on to the names. However, Britain then spent the next 7 centuries getting repeatedly invaded and conquered by various foreigners, and for most of that time it was the Germanic Vikings and Saxons who fought over the country. Both groups worshipped the same gods, those of Norse mythology (so Thor, Odin and so on), and one of the practices they introduced was to replace the names of four days of the week with those of four of their gods; Tyr’sday, Woden’sday (Woden was the Saxon word for Odin), Thor’sday and Frig’sday replaced Marsday, Mercuryday, Jupiterday and Venusday in England, and soon the fluctuating nature of language renamed the days of the week Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday and Friday.

However, the old planetary names remained in the Romance languages (the French translations of the days Tuesday to Friday are mardi, mercredi, jeudi and vendredi), with one small exception. When the Roman Empire went Christian in the fourth century, the ten commandments dictated they remember the Sabbath day; but, to avoid copying the Jews (whose Sabbath was on Saturday), they chose to make Sunday the Sabbath day. It is for this reason that Monday, the first day of the working week after one’s day of rest, became the start of the week, taking over from the Babylonians’ choice of Saturday, but close to Rome they went one stage further and renamed Sunday ‘dies Dominica’, or the Day of the Lord. The practice didn’t catch on in Britain, a long way from Rome, but the modern-day Spanish, French and Italian words for Sunday are domingo, dimanche and domenica respectively, all of which are locally corrupted forms of ‘dies Dominica’.

This is one of those posts that doesn’t have a natural conclusion, or even much of a point to it. But hey; I didn’t start writing this because I wanted to make a point, but more to share the kind of stuff I find slightly interesting. Sorry if you didn’t find it so.

NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, before just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than for the sake of its own existence.

This is not to say that mathematics is not a useful field, far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy) and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, convention dictates that a negative number squared makes a positive one, and hence there is no real number whose square is negative and no such thing as the (real) square root of a negative number, such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers, but maths decided that it didn’t want to not be able to square root a negative number, so it had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting alongside the ordinary number line (on an axis of its own, at right angles to it), and which can in no way exist in real life.
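For what it’s worth, i is baked into most programming languages’ complex-number support these days; a purely illustrative Python example:

```python
import cmath

i = complex(0, 1)        # the quantity i, which Python writes as 1j
print(i * i)             # (-1+0j): squaring i really does give -1

# math.sqrt(-1) would raise an error, since no real number fits the bill...
# ...but cmath is happy to answer in the extended, complex world:
print(cmath.sqrt(-1))    # 1j
```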

Admittedly, i has turned out to be useful. When considering electromagnetic forces, for instance, physicists often assign the electric and magnetic components real and imaginary parts in order to keep track of them, but its main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s arsenal, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world- Fermat’s Last Theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same went for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of this fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, around a decade to complete. His proof involved showing that any solution to Fermat’s equation could be used to build an incredibly weird curve with no obvious connection to the real world, and that every curve of that type must have a counterpart object of an equally abstract kind; the curve built from a ‘Fermat solution’, however, would be too weird to have such a counterpart, and so no solution can logically exist.
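Just to make concrete what the theorem actually rules out, here is a hopelessly naive brute-force check (purely illustrative, and obviously nothing like a proof) that no small whole numbers satisfy the cubed version:

```python
# Search for whole-number solutions to a**3 + b**3 == c**3 up to a small limit.
# Fermat's Last Theorem says this list is empty for every power above 2.
n, limit = 3, 60
solutions = [(a, b, c)
             for a in range(1, limit)
             for b in range(a, limit)
             for c in range(b, limit)
             if a**n + b**n == c**n]
print(solutions)   # [] -- and Wiles proved there never will be any
```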

To a mathematician, this was the holy grail; not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now that it’s been proved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise made no contribution to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things; among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where major prizes are won at a younger age than in any other STEM discipline, apparently full of those likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit that is not assembled from multiple, separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC everything is built in the same place and assembled as one. The main advantage of this is that since the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required in their manufacture surely makes them one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This is made up of millions upon millions of microscopically small circuits known as memory circuits, each of which consists of one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, then the circuit is read as a 1, or a 0 if it is switched off. These memory circuits are generally grouped together, and so each group will consist of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. (On a hard disk, patches of magnetised material, rather than memory circuits, represent the binary information.)

Each little memory circuit, with its simple on/off value, represents one bit of information. 8 bits grouped together forms a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data that a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
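To make that concrete, here is a tiny, purely illustrative Python snippet showing how one 8-bit pattern can stand for quite different things depending on how the computer is told to interpret it:

```python
# One byte = 8 bits, so it can hold 2**8 = 256 different patterns of 1s and 0s.
print(2 ** 8)                 # 256

# The same pattern of bits means different things in different contexts:
value = 0b01000001            # the bit pattern 01000001
print(value)                  # read as a number, it is 65
print(chr(value))             # read as text (ASCII), it is the letter 'A'
print(format(value, '08b'))   # and here it is again, written out as 8 bits
```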

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. This takes one or two inputs, either ‘on’ or ‘off’ binary ones, and translates them into another value. There are three basic types: AND gates (if both inputs equal 1, output equals 1- otherwise, output equals 0), OR gates (if either input equals 1, output equals 1- if both inputs equal 0, output equals 0), and NOT gates (if input equals 1, output equals 0; if input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and) or XOR (exclusive OR: if exactly one input equals 1, output equals 1, but if both inputs are the same, whether 1 or 0, output equals 0) gates. A computer’s CPU (central processing unit) will contain many millions of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions held in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
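As a minimal, purely illustrative sketch (in Python rather than silicon), here are those gates written as functions on bits, plus a hint of how chaining them together starts to give you the arithmetic performed by the ALU mentioned below:

```python
# The basic gates as functions on bits (0 or 1).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Wiring gates together starts to give you arithmetic: a 'half adder'
# adds two bits, producing a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)     # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
# 1 + 1 gives carry 1, sum 0 -- i.e. binary 10, which is 2
```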

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and to flag that areas of the screen A, B and C are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…

SCIENCE!

One book that I always feel like I should understand better than I do (it’s the mechanics concerning light cones that stretch my ability to visualise) is Professor Stephen Hawking’s ‘A Brief History of Time’. The content is roughly what a physics or astronomy student would nowadays learn in first-year cosmology, but when it was first released it was close to the cutting edge of modern physics. It is a testament to the great charm of Hawking’s writing, as well as his ability to sell it, that the book has since sold millions of copies, and that Hawking himself is the most famous scientist of our age.

The reason I bring it up now is because of one passage from it that sprang to mind the other day (I haven’t read it in over a year, but my brain works like that). In this extract, Hawking claims that some 500 years ago it would have been possible for a (presumably rich, intelligent, well-educated and well-travelled) man to learn everything there was to know about science and technology in his age. This is, when one thinks about it, a rather bold claim, considering the vast scope of what ‘science’ covers- even five centuries ago this would have included medicine, biology, astronomy, alchemy (chemistry not having really been invented), metallurgy and materials, every conceivable branch of engineering from agricultural to mining, and the early forerunners of physics, to name but some. To take all of that in would have been quite some task, but I don’t think an entirely impossible one, and Hawking’s point stands: back then, there wasn’t all that much ‘science’ around.

And now look at it. Someone with an especially good memory could perhaps memorise the contents of a year’s worth of New Scientist, or perhaps even a few years of back issues if they were some kind of super-savant with far too much free time on their hands… and they still would have barely scratched the surface. In the last few centuries, and particularly the last hundred or so years, humanity’s collective march of science has been inexorable- we have discovered neurology, psychology, electricity, cosmology, atoms and further subatomic particles, all of modern chemistry, several million new species, the ability to classify species at all, more medicinal and engineering innovations than you could shake a stick at, plastics, composites and carbon nanotubes, palaeontology, relativity, genomes, and even the speed of spontaneous combustion of a burrito (why? well why the f&%$ not?). Yeah, we’ve come a long way.

The basis for all this change occurred during the scientific revolution of the 16th and 17th centuries. The precise cause of this change is somewhat unclear- there was no great upheaval, but more of a general feeling that ‘hey, science is great, let’s do something with it!’. Some would argue that the idea that there was any change in the pace of science itself is untrue, and that the groundwork for this period of advancing scientific knowledge was largely done by Muslim astronomers and mathematicians several centuries earlier. Others might say that the political and social changes that came with the Renaissance not only sent society reeling slightly, rendering it more pliable to new ideas and boundary-pushing, but also changed the way that the rich and noble functioned. Instead of barons, dukes and the rest of the nobility simply resting on their laurels and raking in the cash as the feudal system had previously allowed them to, an increasing number of them began to contribute to the arts and sciences, becoming agents of change and, in some cases, agents in the advancement of science.

It took a long time for science to gain any real momentum. For many a decade there was no such thing as a professional scientist or even engineer; people generally studied in their spare time. Universities were typically run by monks and populated by the sons of the rich or the younger sons of nobles- they were places where you both lived and learned, expensively, but were not the centres of research that they are nowadays. They were also steeped in the ideas of Aristotle and the other ancient authorities that had been rediscovered centuries earlier, and contained a huge degree of resistance to anything that contradicted them, so trying to get one’s new ideas taken seriously was a severe task. As such, just as many scientists were merely people who were interested in a subject and rich and intelligent enough to dabble in it as were people committed to learning. Then there was the notorious religious problem- whilst the Church had no problem with most scientific endeavours, the rise of astronomy began one long and ceaseless feud between the Church and the physicists over the fallibility of the Bible, and some, such as Galileo, were actively persecuted by the Church for their new claims; Giordano Bruno was even burned at the stake, albeit mainly for his theological views. But by far the biggest stumbling block was the sheer number of potential students of science- most common people were peasants, who would generally work the land at their lord’s will, and had zero chance of elevating their life prospects higher than that. So- there was hardly anyone to do it, it was really, really hard to make any progress in, and you might get killed for trying. And yet, somehow, science just kept on rolling onwards. A new theory here, an interesting experiment there, the odd interesting conversation between intellectuals, and new stuff kept turning up. No huge amount, but it was enough to keep things ticking over.

But, as the industrial revolution swept Europe, things started to change. As revolutions came and went, the power of the people started to rise, slowly squeezing out the influence and control of aristocrats by sheer weight of numbers. Power moved from the monarchy to the masses, from the Lords to the Commons- those with real control were the entrepreneurs and factory owners, not old men sitting in country houses with steadily shrinking lands that they owned. Society began to become more fluid, and anyone (well, more people than previously, anyway), could become the next big fish by inventing something new. Technology began to become of ever-increasing importance, and as such so did its discovery. Research by experiment was ever-more accessible, and science began to gather speed. During the 20th century things really began to motor- two world wars prompted the search for new technologies to enter an even more frenzied pace, the universal schooling of children was breeding a new generation of thinkers, and the idea of a university as a place of learning and research became more cemented in popular culture. Anyone could think of something new, and in that respect everyone was a scientist.

And this, to me, is the key to the world we live in today- a world in which thousands of scientific papers are published every day, many for branches of science relevant largely for their own sake. But this isn’t the true success story of science. The real success lies in the products and concepts we see every day- the iPhone, the pharmaceuticals, the infrastructure. None of these developments discovered a new effect or a new material, or enabled us to better understand the way our thyroid gland works, and in that respect they are not science- but they required someone to think a little bit, to perhaps try a different way of doing something, to face a challenge. They pushed us forward one tiny, inexorable step, put a little bit more knowledge into the human race, and that, really, is the secret. There are 7 billion of us on this planet right now. Imagine if every single one contributed just one step forward.