Churchill in Wartime

After the disasters of his earlier political career that I described in my last post, by 1939 Winston Churchill had once again managed to rise to prominence within the Conservative party, and had gathered considerable support behind his cause of opposition to the government’s appeasement policy. When Britain was finally dragged into war in September of that year, he found himself once again on the front foot of Westminster politics.

Churchill, as virtually the only senior politician mentally prepared for war, was immediately made First Lord of the Admiralty again, and it was only Neville Chamberlain’s suddenly horrendous reputation, attracting blame like a magnet, that prevented Churchill from taking the blame for a series of naval disasters. After Germany had invaded Norway and Denmark without a hitch*, despite some fantastically idiotic speeches from Chamberlain concerning Germany’s lack of military strength, Chamberlain was forced out of power and the process of trying to hash together a coalition government began.

[*For some reason, Britain took this as a cue to invade Iceland. Why is something of a mystery.]

Chamberlain’s main ally in pursuing appeasement had been Lord Halifax, and Chamberlain wanted him to head up the wartime government. The only other major candidate for the job was Churchill, who had built up a sizeable base of support within parliament, and everyone knew that a Halifax government would only be able to function with Churchill’s support. Churchill, Halifax, Chamberlain and David Margesson, the Conservative chief whip, met on 10 May 1940, and Chamberlain asked Churchill the pivotal question: would he be willing to serve in a government under Halifax? This put Churchill in a dilemma: saying yes would put the government in the hands of an ineffective, pro-appeasement leader, whilst saying no would split the government down the middle and wreak mayhem at a time when strength and unity were of critical importance. Unsure of what to do, he said nothing. Time ticked by. For two full minutes the silence endured unbroken, the other men present equally unsure what they were supposed to do. Finally, Lord Halifax spoke up, whether for purely political reasons or simply out of sheer embarrassment, to make possibly the most important statement in the last century of British history: he suggested that it would be difficult for a member of the House of Lords*, such as himself, to govern effectively, as opposed to a member of the House of Commons, effectively ruling himself out of the job. At Chamberlain’s recommendation following that meeting, King George VI asked Churchill to be prime minister, and he duly accepted. That pregnant silence would prove to be among the most important two minutes in history.

[*The practice of an elected Prime Minister always coming from the House of Commons is a modern phenomenon not enshrined in law; since she ostensibly chooses who becomes PM, the Queen could in theory just tell a random member of the House of Lords that he was now head of government. That she doesn’t is partly good manners, but mostly because to do so would probably end the British monarchy in under 5 years]

At the time, there were many who thought that, what with Gallipoli and his long history of political failures, the coming of Churchill to power represented the final nail in Britain’s coffin. Unpopular among MPs and Lords alike, the 65-year-old Churchill looked to have all the cards stacked against him. However, Churchill’s drive, energy, superlative public speaking and vehement opposition to appeasing the Germans changed the face of the war almost single-handedly, hardening the opinion of public and parliament alike against the idea of an armistice. In wartime, Churchill was in his element: a natural warmonger whose aggressive tactics were so often disastrous in peacetime, he now found that his pugnacious determination, confidence and conviction to continue the fight no matter what united the country behind him. It was he who not only organised but inspired the ‘miracle of Dunkirk’, in which thousands of small civilian vessels mobilised to take part in Operation Dynamo, evacuating trapped British and French soldiers from the port of Dunkirk in the face of heavy German fire and aerial attack; he whose many inspiring wartime speeches have gone down in history; and he who inspired Londoners and RAF pilots alike to survive the Battle of Britain and the Blitz, ensuring the country remained safe from the threat of German invasion. OK, so the ‘heroic’ events of Dunkirk overlooked the fact that it had been a humiliating retreat and the army had left all their weapons behind, but that wasn’t the point; the British were inspired and weren’t going to stop fighting.

One of the most morally ambiguous yet telling events concerning the spirit of defiance Churchill inspired within Britain came in July 1940. The French Navy was holed up at Mers-el-Kébir in Algeria, with the British attempting to negotiate a joining of the two fleets. The negotiations went badly, the French refused to join the British fleet, and in response the British opened fire on their allies in order to prevent the ships falling into enemy hands; around 1,300 lives were lost. In just about any other situation, this would have been an utterly insane act that could only have caused the Allied war effort to collapse amid bitter argument and infighting, but then, with France all but completely overrun by German forces, it was nothing more or less than a simple statement of British intent. Britain was prepared to do whatever it took to fight off the Germans, and the sheer ruthlessness of this act is said to have convinced the USA that Britain had the stomach to continue fighting no matter what. Is this a moment to be proud of? No; it was a shameless slaughter and a fiasco in more ways than one. Did it make its point? Absolutely.

Some expected Churchill to win a landslide in the first post-war elections, but ’twas not to be; even the massive wave of public goodwill towards him was not enough to overcome the public desire for social change, and in 1945 Clement Attlee became Prime Minister at the head of the first ever majority Labour government. To be honest, it’s probably a good thing; Attlee’s government gave us the NHS and finally started to dismantle the badly-run, expensive remnants of the British Empire, whilst Churchill’s second term as PM (1951-55) was largely undistinguished save for some more post-Imperial restlessness. Not that it matters; useless though he may have been in peacetime, in war Churchill was every bit the national hero he is nowadays made out to be. Churchill’s great legacy is not just one of our not having grown up speaking German: in many ways he redefined what it meant to be British. He inspired a return to the ‘stiff upper lip’ British stereotype that we are nowadays all so proud of, a living tribute to the idea of standing up and keeping going in the face of adversity. In many ways, what Winston Churchill stood for can be best summarised by simply reciting possibly the most famous of all his many great speeches:

Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be. We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender


How Quantum Physics Explains Action Films

One of the key ideas used by cosmologists (yes, physics again, sorry) to explain away questions asked by annoying philosophical types is known as the anthropic principle. This has two forms (strong and weak), but the idea remains the same for both: that a situation is as it is because, if it were not, we would not be around to ask the question. For example, one might ask (as Stephen Hawking did in ‘A Brief History of Time’) why the universe is around 10 billion years old, a decidedly awkward question if ever there was one. The anthropic principle provides the simplest answer, stating that since organic life is such a complicated business and the early universe was such a chaotic, unfriendly place, it is only after this vast amount of time that life forms capable of asking the question have been able to develop.

This answer of ‘because we’re here’ is a wonderfully useful one, albeit one that should be used with caution to avoid dodging valid questions, and it can be applied to problems that do not concern themselves directly with physics. One example concerns the origin of the human race, as we are all thought to stem from just a few hundred individuals who lived in East Africa’s Rift Valley several million years ago. At that time our knowledge of weapons, fighting and general survival was relatively scant, and coming face to face with any large predator would have been a fairly assured death sentence; the prehistoric equivalent of a smart pride of lions, or even some particularly adverse weather one year, could have wiped out a significant proportion of the human race as it stood at that time in just a few months. Despite the advantages of adaptability and brainpower that we have shown since, the odds of natural selection were still stacked against us; so why did we rise to become the dominant multicellular life form on this planet?

This question can be answered by listing all the natural advantages we possess as a species and how they enabled us to continue ‘evolving’ far beyond the mere natural order of things; but such an answer still can’t quite account for the large dose of luck that comes into the bargain. The anthropic principle can, however, account for this; the human race was able to overcome the odds because if we hadn’t, then we wouldn’t be around to ask the question. Isn’t logic wonderful?

In fact, once we start to think about our lives and questions of our existence in terms of the anthropic principle, we realise that our existence as individuals is dependent on an awful lot of historical events having happened the way they did. For example, if the Nazis had triumphed during WWII, then perhaps one or more of my grandparents could have been killed, separated from their spouse, or in some way prevented from raising the family that would include my parents. Even tinier events could have affected the chance of me turning out as me; perhaps a stray photon bouncing off an atom in the atmosphere in a slightly different way could have struck a DNA molecule, deforming the sperm that would otherwise have given me half my genes so that it never even made it to the egg that offered up the other half. This is chaos theory in action, but it illustrates a point: for the universe to have ended up the way it has depends on history having played out exactly as it did.

The classic example of this in quantum physics is the famous ‘Schrodinger’s Cat’ thought experiment, in which a hypothetical cat is put into a box with a special quantum device that has a 50/50 chance of either doing nothing or releasing a toxic gas that kills the cat. On the many-worlds reading of this experiment, when the cat is put into the box two universes emerge: one in which the cat is dead, and one in which it is alive. Until we open the box, we cannot know which of these universes we are in, so the cat must be thought of as simultaneously alive and dead.

However, another thought experiment, known as the ‘quantum suicide’ experiment, takes the cat’s point of view. Imagine that the cat is an experimenter working alone; imagine you are that experimenter, and that you stayed in the box for five iterations of the 50/50 life/death random event. In 31 out of 32 possible futures you would have been gassed, because at least once the device would have selected the ‘death’ option; but in just one of those 32 alternative futures, you would still be alive. Moreover, if you then got out of the box and published your results, the existence of those results would be solely dependent on your being that lucky one out of 32.
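The arithmetic here is just repeated halving, and it is easy to check with a few lines of code. The following Python sketch (purely illustrative, and obviously not part of any real experiment) computes the exact survival probability and then simulates a large population of boxed experimenters to show that roughly 1 in 32 make it through all five rounds:

```python
import random

ITERATIONS = 5  # five runs of the 50/50 life-or-death device

# Exact probability of surviving every iteration: (1/2)^5 = 1/32
p_survive = 0.5 ** ITERATIONS
print(p_survive)  # 0.03125, i.e. exactly 1 in 32

# Simulate many independent experimenters; only survivors 'publish'.
random.seed(0)  # fixed seed so the sketch is repeatable
trials = 100_000
survivors = sum(
    all(random.random() < 0.5 for _ in range(ITERATIONS))
    for _ in range(trials)
)
print(survivors / trials)  # close to 1/32, about 0.031
```

The point of the simulation is the conditioning: if you only ever hear from the survivors, their reports make survival look inevitable, even though it was a 1-in-32 fluke.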

Or, to put it another way, consider a generic action hero, in the classic scene where he runs through the battlefield gunning down enemies whilst other, lesser soldiers fall about him from bullets and explosions. The enemy fire countless shots at him, but try as they might they can never kill him. They try, but he survives and the film reaches its triumphant conclusion.

Now, assuming that these enemies are not deliberately trying to miss him and can at least vaguely use their weapons, if our action hero tried to pull that ‘running through a hail of bullets’ stunt then 999 times out of a thousand he’d be killed. However, if he were killed then the film would not be able to reach its conclusion, since he would be unable to save the heroine/defeat the baddie/deliver a clichéd one-liner, and as such the story would be incomplete. And with such a crappy story, there’s no way a film would get made about it; therefore, the action hero must always be one of the lucky ones.

This idea of always triumphing over the odds, of surviving no matter what because, if you didn’t, you wouldn’t be around to tell the tale or even be conscious of the tale, is known as quantum immortality. And whilst it doesn’t mean you’re going to be safe jumping off buildings any time soon, it does at least give you a way to bore the pants off the next person who claims that action movies are WAAYYYY too unrealistic.

We Will Remember Them

Four days ago (this post was intended for Monday, when it would have been yesterday, but I was out then- sorry) was Remembrance Sunday; I’m sure you were all aware of that. On that day we acknowledged the dead, recognised the sacrifice they made in service of their country, and reflected upon the tragic horrors that war inflicted upon them and our nations. We gave our thanks that “for your tomorrow, we gave our today”.

However, as the greatest wars ever to rack our planet slip beyond the edge of living memory, a few dissenting voices have been raised about the place of the 11th of November as a day of national mourning and remembrance. They are not loud complaints, as anything that may be seen as an attempt to sully the memories of those who ‘laid so costly a sacrifice upon the altar of freedom’ (to quote Saving Private Ryan) is unsurprisingly lambasted and vilified by the majority, but it would be wrong not to recognise that there are some who question the very idea of Remembrance Sunday in its modern incarnation.

‘Remembrance Sunday,’ so goes the argument, ‘is very much centred around the memories of those who died: recognising their act of sacrifice and championing the idea that “they died for us”.’ This may partly explain why the Church has such strong links with the ceremony; quite apart from religion being approximately 68% about death, the whole concept of sacrificing oneself for the good of others is a direct parallel to the story of Jesus Christ. ‘However,’ continues the argument, ‘the wars that we of the old Allied Powers chiefly celebrate and remember are ones that we won; had we lost them, those same deaths in defence of the realm would look like wasted sacrifice rather than noble ones- so this style of remembrance is not exactly even-handed. Furthermore, by putting our symbolic day of remembrance on the anniversary of the end of the First World War, we invariably make that conflict (and WWII) our main focus of interest. But it is widely acknowledged that WWI was a horrific, stupid war, in which millions died for next to no material gain and which is generally regarded as a terrible waste of life. We weren’t fighting for freedom against some oppressive power, but because all the European top brass were squaring up to one another in a giant political pissing contest, making the death of 20 million people the result of little more than a game of satisfying egos. This was not a war for which “they died for us” is an appropriate sentiment.’

Such an argument is a remarkably good one, and does call into question the very act of remembrance itself. It is harder to make the same case against more recent wars- the Second World War was a necessary conflict if ever there was one, and it cannot be said that those soldiers currently fighting in Afghanistan are not trying to make a deeply unstable and rather undemocratic part of the world a better place to live in (I said trying). However, none of this changes the plain and simple truth that war is a horrible, unpleasant activity that we ought to be trying to get rid of wherever humanly possible, and remembering soldiers from years gone by as if dying in a muddy trench was absolutely the most good and right thing they could do does not seem the best way of going about it- it reminds me of what Wilfred Owen called “that old Lie:/Dulce et Decorum est/Pro patria mori”.

However, that is not to say that we should not remember the deaths and sacrifices of those soldiers- far from it. To forget them would not only be hideously insensitive to their memories and families (my own family was fortunate enough not to suffer any war casualties in the 20th century), but it would also suggest to soldiers currently fighting that their fight is meaningless- something they are definitely not going to take well, which would be rather inadvisable since they have all the guns and explosives. War might be a terrible thing, but that is not to say that it doesn’t take guts and bravery to face the guns and fight for what you believe in (or, alternatively, what your country makes you believe in). As deaths go, it is at least honourable, if not exactly Dulce et Decorum.

And then, of course, there is the whole point of remembrance, and indeed of history itself, to remember. The old adage about studying history or else finding yourself repeating it still holds true, and only by learning lessons from the past do we stand any chance of avoiding our previous mistakes. Without the great social levelling and anti-imperialist effects of the First World War, women might never have got the vote, jingoistic ideas about empires and the glory of dying in battle might still abound, America might (for good or ill) not have made enough money out of the war to become the economic superpower it is today, and wars might, for many years more, have continued to waste lives through the persistent use of outdated tactics on a modern battlefield with modern weaponry- to name but the first examples that come into my head. To ignore the act of remembrance is therefore not just disrespectful, but downright foolish.

Perhaps, then, the message to take away is not to ignore the sacrifice that those soldiers have made over the years, but rather to remember what they died to teach us. We can argue for all of eternity as to whether the wars that led to their deaths were ever justified, but we can all agree that the concept of war itself is a wrong one, and that the death and pain it causes are the best reasons to pursue peace wherever we can. This, perhaps, should be the true message of Remembrance Sunday: that over the years, millions upon millions of soldiers have dyed the earth red with their blood, so that we might one day learn the lessons that enable us to enjoy a world in which they no longer have to.

A Continued History

This post looks set to at least begin by following on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to follow the history along from there until as close to today as I can manage, hopefully getting in a few of the basics of the workings of these strange and wonderful machines.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, developed his own programmable mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and added little new to the field. Mechanical calculators had become viable commercial enterprises, getting steadily cheaper, and ever more sophisticated technological exercises appeared with the invention of the analogue computer. These were, basically, less programmable versions of the difference engine- mechanical devices whose various cogs and wheels were so connected up that they would perform one specific mathematical function on a set of data. James Thomson built the first in 1876; it could solve differential equations by integration (a conceptually simple but undoubtedly tedious mathematical task), and later developments were widely used to collect military data and to solve problems concerning numbers too large for human numerical methods. For a long time, analogue computers were considered the future of computing, but since they solved and modelled problems using physical phenomena rather than data, each was restricted in capability to its original setup.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Punched holes had been around for a while as a way of encoding programs, as in the holed-paper reels of a pianola or the punched cards used to automate the workings of a loom, but this was the first example of such devices being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). The cards were cheap, simple, could be both produced and read easily by a machine, and were even easy to dispose of. The pattern of holes on a card was ‘read’ by a mechanical device with a set of levers that would pass through a hole wherever one was present, turning the appropriate cogs to tell the machine to count up one. Hollerith’s machines went on to process the data of the 1890 US census, and the company he founded would eventually become the core of IBM. This system carried on being used on IBM systems right up until the 1980s, and could be argued to be the first programming language.
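The counting mechanism is simple enough to mimic in a few lines of Python. The sketch below is a toy model of the lever-and-cog process described above, not the real Hollerith card format: each card is a row of cells, a ‘1’ marks a punched hole, and each hole increments the counter for its column.

```python
# Toy model of mechanical card counting: '1' marks a punched hole.
cards = [
    "10010",
    "11000",
    "00011",
]

counters = [0] * len(cards[0])  # one counting cog per column
for card in cards:
    for column, cell in enumerate(card):
        if cell == "1":            # the lever drops through the hole...
            counters[column] += 1  # ...turning the cog: count up one

print(counters)  # [2, 1, 0, 2, 1]
```

Each column's total is exactly what a census tabulator wanted: how many cards had a hole punched in that position.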

However, to see the story of the modern computer truly progress we must fast forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data using on/off binary electrical signals, making it the first digital calculating machine. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand; to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and that also operated in binary. Its parts couldn’t be engineered well enough for it ever to work properly, but Zuse kept at it, eventually building three more models and devising what is arguably the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any one general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs by reading and manipulating a set of symbols on a strip of tape. The symbol currently under the machine’s head, together with its internal state, determines whether it moves up or down the strip and what it changes the symbol to; Turing proved that such a machine could replicate the behaviour of any computer algorithm, and since computers are just devices that run algorithms, it can replicate any modern computer too. Thus, if a Turing machine (as they are now known) can theoretically solve a problem, then so can a general algorithm, and vice versa. These machines not only laid the foundations of computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as the first machines theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as a von Neumann architecture, after the man who pointed out and developed this idea in response to Turing’s work.
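To make the read-write-move cycle concrete, here is a minimal Turing machine simulator in Python. This is a hedged sketch: the rule format and the example machine are my own inventions chosen for brevity, not anything from Turing’s paper. The machine reads the symbol under its head, looks up a rule for its current state and that symbol, writes a new symbol, moves left or right (or stays), and changes state, halting when no rule applies.

```python
def run_turing_machine(rules, tape, state="start", blank=" ", max_steps=1000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (write_symbol, move, next_state),
    where move is 'L', 'R' or 'N' (stay). The machine halts when no
    rule applies to the current (state, symbol) pair."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    # Read the tape back off in order, trimming surrounding blanks.
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that inverts every bit on the tape, halting at the
# first blank cell it reaches.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine(invert, "1011"))  # -> 0100
```

The bit-inverting machine is deliberately trivial; the interesting point is that the same three-line rule table format can, with enough states and symbols, express any computable algorithm at all.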

Turing machines contributed one further, vital concept to modern computing: that of Turing-completeness. A system is Turing-complete if it can replicate the behaviour of a universal Turing machine- a single Turing machine capable of mimicking any other theoretically possible Turing machine, and thus any possible algorithm or computable sequence. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring the Turing-completeness of a computer system or programming language is a key part of its design, guaranteeing its versatility and that it is capable, in principle, of performing any computable task required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, by 1942 there was a war on, and Turing was employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task, since they knew the structure and internal layout of the Enigma machine. They were then faced with a new and more daunting prospect: the Lorenz cipher, encoded by an even more complex machine for which they had no blueprints. The Bletchley team, with the mathematician Bill Tutte doing much of the crucial work, eventually worked out its logical functioning, and from this a method for deciphering it was formulated; but it required an iterative process that took hours of mind-numbing calculation to get a result out. A faster way of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world: the first electronic, digital and partially programmable computer ever to exist. Its mathematical operation was not highly sophisticated. Photoelectric sensors read the pattern of holes on a paper tape containing the encoded messages, and the machine’s vacuum-tube circuitry (state-of-the-art electronics at the time) compared these to another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not find a precise match, since it didn’t know the original message content) it flagged up that setup as a potential one for the message’s encryption, which could then be tested, saving many hundreds of man-hours. But despite its inherent simplicity, its legacy lies in proving a point to the world: that electronic, programmable computers were both possible and viable bits of hardware. It paved the way for modern-day computing to develop.

The Land of the Red

Nowadays, the country to talk about if you want to be seen as politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls), with an economy and defence budget second in size only to the USA’s, it also features a gigantic manufacturing and raw-materials extraction industry, the world’s largest standing army and one of only five remaining communist governments. In many ways, this is China’s second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world’s largest economy for most of the intervening time. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time the 2,000-year-old Imperial China was replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second biggest loss suffered by any country after the Soviet Union, as a Japanese army fresh from its own rapid modernisation went on a rampage of rape, murder and destruction through underdeveloped northern China, where some warlords still fought with swords.
The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People’s Republic, then led by the former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao’s reign, the Chinese population near-doubled in an effort to increase the available workforce, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao’s “Great Leap Forward”, in which he tried to move his country away from its previously agricultural economy and into a more manufacturing-centric system. However, whilst the Chinese government insists to this day that the three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and killed only 15 million people, most external commentators agree that the sudden change in the availability of food thanks to the Great Leap certainly contributed to a death toll actually estimated to be in the region of 20-40 million. Oh, and the whole business was an economic failure, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food production with useless, low-quality pig iron.

This event in many ways typifies the Chinese way: that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it, partially reflecting the disregard for the value of life historically also common in Japan. China is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won; a country whose main justification for the Great Leap Forward was to try to bring about a state of social structure and culture upon which the government could effectively impose socialism, as it tried to do during its “Cultural Revolution” in the mid-sixties. All that served to do was get a lot of people killed: it resulted in a decade of absolute chaos and practically destroyed China’s education system, and, despite reaffirming Mao’s godlike status (partially thanks to an intensification of his personality cult), some of his actions rather shamed the governmental high-ups. The party was forced to take the line that, whilst his guiding thought was of course still the foundation of the People’s Republic and entirely correct in every regard, his actions were somehow separate from it, and they got rather brushed under the carpet. It helped that, by this point, Mao was dead and unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, all the destruction and all the political upheaval (even now the government is liable to arrest anyone who suggests that the Cultural Revolution was a good idea), these events shaped China into the powerhouse it is today. It may have slaughtered millions of people and resolutely not worked for 20 years, but Mao’s focus on a manufacturing economy has now started to bear fruit, giving the Chinese economy the kind of stable footing many countries would dearly love in these days of economic instability. It may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but Chinese communism has allowed the government to control its labour force and industry effectively, letting it escape the worst ravages of the last few economic downturns and preventing internal instability. And the extent to which the party has forced itself upon the people of China for decades, holding them to the party line with an iron fist, has allowed its controls to be gently relaxed in the modern era whilst keeping the government’s position secure, to an extent even satisfying the criticisms of western commentators. Now, China is rich enough and positioned solidly enough to placate its people, to keep up its education system and to build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas do not make their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

The Pursuit of Speed

Recent human history has, as Jeremy Clarkson constantly loves to point out, been dominated by the pursuit of speed. Everywhere we look, we see people hurrying hither and thither, sprinting down escalators, transmitting data at close to the speed of light via their phones and computers, and screaming down the motorway at over a hundred kilometres an hour (or nearly 100mph if you’re the kind of person who habitually uses the fast lane of British motorways). Never is this more apparent than when you consider our pursuit of a new maximum: top speed, a figure that has, over the centuries, climbed ever higher. Even in today’s world, where we prize speed of information over speed of movement, this quest goes on, as evidenced by the team behind the ‘Bloodhound’ SSC, tipped to break the world land speed record. So, I thought I might take this opportunity to consider the history of our quest for speed, and see how it has developed over time.

(I will ignore all unmanned exploits for now, just so I don’t get tangled up in arguments over whether a satellite should count versus something out of the Large Hadron Collider)

Way back when we humans first evolved into the upright, bipedal creatures we are now, we were a fairly primitive race and our top speed was limited by how fast we could run. Usain Bolt can, with the aid of modern shoes, running tracks and a hundred thousand people screaming his name, max out at around 13 metres per second. We will therefore presume that a fast human in prehistoric times, running on bare feet, hard ground, and the motivation of being chased by a lion, might hit 11m/s, or 39.6 kilometres per hour. Thus our top speed remained for many thousands of years, until, around 6000 years ago, humankind discovered how to domesticate animals, and more specifically horses, in the Eurasian Steppe. This sent our maximum speed soaring to 70km/h or more, a speed that was for the first time sustainable over long distances, especially on the steppe where horses were rarely asked to tow or carry much. Thus things remained for another goodly length of time- in fact, many leading doctors were of the opinion that travelling any faster would be impossible without asphyxiating. However, come the industrial revolution, things started to change, and records began tumbling again. The train was invented in the 1800s and quickly transformed from a slow, lumbering beast into a fast, sleek machine capable of hitherto unimaginable speed. In 1848, the Iron Horse took the land speed record away from its flesh-and-blood cousin, when a train in Boston finally broke the magical 60mph (ie a mile a minute) barrier, sending the record shooting up to 96.6km/h. Records continued to tumble for the next half-century, breaking the 100mph barrier by 1904, but by then there was a new challenger in the paddock- the car. Whilst early wheel-driven speed records had barely crept above 35mph, after the turn of the century they really started to pick up the pace.
By 1906, they too had broken the 100mph mark, hitting 205km/h in a steam-powered vehicle that laid the locomotives’ claims to speed dominance firmly to rest. However, this was destined to be the car’s only ever outright speed record, and the last one to be set on the ground- by 1924 cars had got up to 234km/h, a record that stands to this day as the fastest ever recorded on a public road, but the First World War had by this time been and gone, bringing with it a huge advancement in aircraft technology. In 1920, the record was officially broken in the first post-war attempt, a French pilot clocking 275km/h, and after that there was no stopping it. Records were being broken left, right and centre throughout both the Roaring Twenties and the Great Depression, right up until the outbreak of another war in 1939. As during WWI, all records ceased to be officiated for the war’s duration, but, just as the First World War had allowed the plane to take over from the car as the top dog in terms of pure speed, so the Second marked the passing of the propeller-driven plane and the coming of the jet and rocket engines. Jet aircraft broke man’s top speed record just five times after the war, holding the crown for a total of less than two years, before they gave it up for good and let rockets lead the way.

The passage of records for rocket-propelled craft is hard to track, but in 1947 Chuck Yeager became the first man ever to break the sound barrier in controlled, level flight (plunging screaming to one’s death in a fireball apparently doesn’t count for record purposes), thanks not only to his Bell X-1’s rocket engine but also to the realisation that breaking the sound barrier would not tear the wings off so long as they were swept back at an angle (hence why all jet fighters adopt this design today). By 1953, Yeager was at it again, reaching Mach 2.44 (2608km/h) in the X-1’s cousin, the X-1A. The process, however, nearly killed him: when he tilted the craft to try and lose height in preparation for landing, a hitherto undiscovered phenomenon known as ‘inertia coupling’ sent the craft spinning wildly out of control, putting Yeager through 8Gs of force before he was able to regain control. The X-1’s successor, the X-2, was even more dangerous- despite pushing the record up first to 3050km/h, one craft exploded and killed its pilot in 1953, and a later world record-breaking flight, reaching Mach 3.2 (3370km/h), ended in tragedy when a banking turn at over Mach 3 sent it into another inertia-coupling spin; pilot Milburn G. Apt made an emergency ejection but did not survive. All high-speed research aircraft programs were suspended for another three years, until experiments began with the Bell X-15, the latest and most experimental of these craft. It broke the record five times between 1961 and 1967, routinely flying above 6000km/h, before another fatal crash, this time killing pilot Major Michael J. Adams in a hypersonic spin, put paid to the program again; the X-15’s all-time record of 7273km/h remains the fastest for a manned aircraft. But it still doesn’t take the overall title, because during the late 60s the US had another thing on its mind- space.

Astonishingly, manned spacecraft have broken humanity’s top speed record only once, when the Apollo 10 crew achieved the fastest speed ever attained by human beings relative to Earth. Their May 1969 flight did totally smash the old record, reaching 39 896km/h on their return to Earth, but all subsequent space flights, mainly due to having larger modules with greater air resistance, have yet to top this speed. Whether we ever will, especially given today’s focus on unmanned probes and the like, is unknown. But now for some brutal abuse of physics. Plot all of these records on a graph and add a trendline (OK, you might have to get rid of the horse/running ones and fiddle with some numbers), and you have a simple equation for the speed record against time. This can tell us a number of things, but one is of particular interest: that, statistically speaking, we will have a man travelling at the speed of light in 2177. Star Trek fans, get started on that warp drive…
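For the curious, the trendline trick above can be sketched in a few lines of Python. This is only a rough reconstruction: the record values are my readings of the figures quoted in this post, the fit is an exponential one (a straight line through log-speed against year), and the answer you get is very sensitive to which records you include and how much you “fiddle”- my selection lands a few decades later than 2177, but in the same ballpark.

```python
import math

# Approximate manned speed records as quoted in this post: (year, km/h).
# These values and the choice of which records to include are my own
# assumptions; the author admits to dropping the running/horse figures.
records = [
    (1848, 96.6),     # Boston locomotive, first 60mph
    (1904, 161.0),    # trains break 100mph
    (1906, 205.0),    # steam-powered car record
    (1920, 275.0),    # first post-WWI aircraft record
    (1947, 1127.0),   # Yeager, Mach 1.06 in the Bell X-1 (approx.)
    (1953, 2608.0),   # Yeager again, X-1A
    (1956, 3370.0),   # X-2, Mach 3.2
    (1967, 7273.0),   # X-15 all-time aircraft record
    (1969, 39896.0),  # Apollo 10 re-entry
]

# Exponential trendline: least-squares fit of log10(speed) against year.
n = len(records)
mean_year = sum(year for year, _ in records) / n
mean_log_v = sum(math.log10(v) for _, v in records) / n
slope = (
    sum((year - mean_year) * (math.log10(v) - mean_log_v) for year, v in records)
    / sum((year - mean_year) ** 2 for year, _ in records)
)

# Extrapolate: in what year does the trendline hit the speed of light?
c_kmh = 299_792_458 * 3.6  # speed of light, about 1.079 billion km/h
year_lightspeed = mean_year + (math.log10(c_kmh) - mean_log_v) / slope
print(round(year_lightspeed))
```

With this particular data set the extrapolation comes out in the 23rd century rather than 2177, which rather underlines the author’s point that the exact answer depends on how you fiddle with the numbers.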