How Quantum Physics Explains Action Films

One of the key ideas used by cosmologists (yes, physics again, sorry) to explain away questions asked by annoying philosophical types is known as the anthropic principle. This has two forms (strong and weak), but the idea remains the same for both; that the reason a situation is the way it is comes down to the fact that, if it weren't, we wouldn't be around to ask the question. For example, one might ask (as Stephen Hawking did in 'A Brief History of Time') why the universe is around 10 billion years old, a decidedly awkward question if ever there was one. The anthropic principle provides the simplest answer, stating that since organic life is such a complicated business and the early universe was such a chaotic, unfriendly place, it is only after this vast amount of time that life forms capable of asking the question have been able to develop.

This answer of 'because we're here' is a wonderfully useful one, albeit one that should be used with caution to avoid dodging valid questions, and it can be applied to problems that do not concern themselves directly with physics. One example concerns the origin of the human race: we are all thought to stem from a startlingly small population of early humans who lived in and around East Africa's Rift Valley. At that time our knowledge of weapons, fighting and general survival was relatively scant, and coming face to face with any large predator would have been a fairly assured death sentence; the prehistoric equivalent of a smart pride of lions, or even some particularly adverse weather one year, could have wiped out a significant proportion of the human race as it stood at that time in just a few months. Despite the advantages of adaptability and brainpower that we have shown since, the odds of natural selection were still stacked against us; why did we rise to become the dominant multicellular life form on this planet?

This question can be answered by listing all the natural advantages we possess as a species and how they enabled us to continue ‘evolving’ far beyond the mere natural order of things; but such an answer still can’t quite account for the large dose of luck that comes into the bargain. The anthropic principle can, however, account for this; the human race was able to overcome the odds because if we hadn’t, then we wouldn’t be around to ask the question. Isn’t logic wonderful?

In fact, once we start to think about our lives and questions of our existence in terms of the anthropic principle, we realise that our existence as individuals depends on an awful lot of historical events having happened the way they did. For example, if the Nazis had triumphed during WWII, then perhaps one or more of my grandparents could have been killed, separated from their spouse, or in some way prevented from raising the family that would include my parents. Even tinier events could have changed the chances of me turning out as me; perhaps a stray photon bouncing off an atom in the atmosphere in a slightly different way could have struck a DNA molecule, deforming the sperm that would otherwise have given me half my genes and meaning it never even made it to the egg that offered up the other half. This is chaos theory in action, but it illustrates a point; for the universe to have ended up the way it has depends on history having played out exactly as it did.

The classic example of this in quantum physics is the famous 'Schrödinger's Cat' experiment, in which a theoretical cat is put into a box with a special quantum device that has a 50/50 chance of either doing nothing or releasing a toxic gas that kills the cat. In the many-worlds way of reading this experiment, when the cat is put into the box two universes emerge; one in which the cat is dead, and one in which it is alive. Until we open the box, we cannot know which of these universes we are in, so the cat must be thought of as simultaneously alive and dead (Schrödinger himself devised the setup to show how absurd that sounds, but the thought experiment stuck).

However, another thought experiment, known as the 'quantum suicide' experiment, takes the cat's point of view; imagine that the cat is an experimenter, and that he is working alone. Imagine you are that experimenter, and that you have stayed in the box for five iterations of the 50/50 life/death random event. In 31 out of 32 possible futures you would have been gassed, because at some point the device would have selected the 'death' option; but in just one of those 32 alternative futures you would still be alive. Moreover, if you then got out of the box and published your results, the existence of those results depends entirely on your being that lucky one out of 32.
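
If you want to convince yourself of that 1-in-32 figure, a few lines of code will do it; the sketch below is a purely classical simulation (nothing quantum about it, and the trial count is just an arbitrary choice) that runs a large number of imaginary experimenters through five 50/50 events and counts how many walk out of the box.

```python
# Simulate many experimenters, each facing five 50/50 'live or die' events,
# and count how many survive all five. The trial count is arbitrary.
import random

TRIALS = 100_000
ITERATIONS = 5

survivors = sum(
    all(random.random() < 0.5 for _ in range(ITERATIONS))
    for _ in range(TRIALS)
)

print(f"Survival rate: {survivors / TRIALS:.4f}")
print(f"Theoretical:   {0.5 ** ITERATIONS:.4f}  # (1/2)^5 = 1/32")
```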

Or, to put it another way, consider a generic action hero in the classic scene where he runs through the battlefield gunning down enemies whilst other, lesser soldiers fall about him from bullets and explosions. The enemy fire countless shots at him but, try as they might, they can never kill him; he survives, and the film reaches its triumphant conclusion.

Now, assuming that these enemies are not deliberately trying to miss him and can at least vaguely use their weapons, if our action hero tried to pull that 'running through a hail of bullets' stunt then 999 times out of a thousand he'd be killed. However, if he were killed then the film would not be able to reach its conclusion, since he would be unable to save the heroine/defeat the baddie/deliver a clichéd one-liner, and as such the story would be incomplete. And, with such a crappy story, there's no way a film would get made about it; therefore, the action hero must always be one of the lucky ones.

This idea of always triumphing over the odds, of surviving no matter what because, if you didn't, you wouldn't be around to tell the tale or even be conscious of the tale, is known as quantum immortality. And whilst it doesn't mean you're going to be safe jumping off buildings any time soon, it does at least give you a way to bore the pants off the next person who claims that action movies are WAAYYYY too unrealistic.

Components of components of components…

By the end of my last post, science had reached the level of GCSE physics/chemistry; the world is made of atoms, atoms consist of electrons orbiting a nucleus, and a nucleus consists of a mixture of positively charged protons and neutrally charged neutrons. Some thought that this was the deepest level things could go; that everything was made simply of these three things and that they were the fundamental particles of the universe. However, others pointed out the enormous difference in mass and size between an electron and a proton, suggesting that the proton and neutron were not as fundamental as the electron, and that we could look even deeper.

In any case, by this point our model of the inside of a nucleus was incomplete anyway; in 1932 James Chadwick had discovered (and named) the neutron, first theorised by Ernest Rutherford to act as a 'glue' preventing the protons of a nucleus from repelling one another and causing the whole thing to break into pieces. However, nobody actually had any idea exactly how this worked, so in 1934 a concept known as the nuclear force was suggested. This theory, proposed by Hideki Yukawa, held that nucleons (then still considered fundamental particles) emitted particles he called mesons; smaller than nucleons, they acted as carriers of the nuclear force. The physics behind this is almost unintelligible to anyone who isn't a career academic (as I am not), but this is because there is no equivalent to the nuclear force that we encounter in day-to-day life. We find it very easy to understand electromagnetism because we have all seen magnets attracting and repelling one another and see the effects of electricity every day, but the nuclear force was something stranger; a side effect of the constant exchange of mesons between nucleons*. The meson was finally found (proving Yukawa's theory) in 1947, and Yukawa won the 1949 Nobel Prize for it. Mesons are now understood to be composite particles rather than truly fundamental force carriers; the fundamental carrier of the strong force is a particle called the gluon, whose name hints at its purpose, coming from the word 'glue'.
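
For the curious, there is a neat back-of-the-envelope way to see why a force carried by a massive particle has such a short range; this is a modernised sketch of the reasoning rather than Yukawa's own working, and the numbers are today's values for the pion. A particle of mass $m$ borrowed from the vacuum can only travel roughly its reduced Compton wavelength before it has to be 'paid back':

$$ r \;\approx\; \frac{\hbar}{mc} \;=\; \frac{\hbar c}{mc^2} \;\approx\; \frac{197\ \text{MeV fm}}{140\ \text{MeV}} \;\approx\; 1.4\ \text{fm}, $$

which is about the width of a nucleus- exactly why the nuclear force fades to nothing outside it. Yukawa ran the logic in reverse, using the known range of the force to predict the meson's mass.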

*This, I am told, becomes a lot easier to understand once electromagnetism has been studied from the point of view of two particles exchanging photons, but I’m getting out of my depth here; time to move on.

At this point, the physics world decided to take stock; the list of all the different subatomic particles that had been discovered became known as 'the particle zoo', but our understanding of them was still patchy. We knew nothing of what the various nucleons and mesons consisted of, how they were joined together, or what allowed the strong nuclear force to even exist; where did mesons come from? How could these particles, carrying a sizeable fraction of a proton's mass, be emitted from one without tearing the thing to pieces?

Nobody really had the answers to these, but when investigating them people began to discover other new particles, of a similar size and mass to the nucleons. Most of these particles were unstable and extremely short-lived, decaying into the undetectable in trillionths of trillionths of a second, but whilst they did exist they could be detected using incredibly sophisticated machinery, and their existence, whilst not ostensibly meaning anything, was a tantalising clue for physicists. This family of nucleon-like particles was later called baryons, and in 1961 American physicist Murray Gell-Mann organised the various baryons and mesons that had been discovered into groups of eight, a system that became known as the eightfold way. The mesons formed one octet, and the baryons with a 'spin' (a quantum property of subatomic particles that I won't even try to explain) of 1/2 formed another. The baryons with a spin of 3/2 (or one and a half) fitted into a larger pattern of ten, except that only nine of them had been discovered. By extrapolating the symmetry of his scheme, Gell-Mann was able to theorise about the existence of a tenth 'spin 3/2' baryon, which he called the omega baryon. This particle, with properties matching almost exactly those he predicted, was discovered in 1964 by a group experimenting with a particle accelerator (a wonderful device that takes two very small things and throws them at one another in the hope that they will collide and smash to pieces; particle physics is a surprisingly crude business, and few other methods have ever been devised for 'looking inside' these weird and wonderful particles), and Gell-Mann took the Nobel Prize five years later.

But, before any of this, the principle of the eightfold way had been extrapolated a stage further. Gell-Mann collaborated with George Zweig on a theory concerning entirely theoretical particles known as quarks; they imagined three 'flavours' of quark (which they called, somewhat arbitrarily, the up, down and strange quarks), each with its own properties of spin, electrical charge and such. They theorised that each of the properties of the different hadrons (as mesons and baryons are collectively known) could be explained by the fact that each was made up of a different combination of these quarks, and that the overall properties of each particle were due, basically, to the properties of their constituent quarks added together. At the time, this was considered somewhat airy-fairy; Zweig and Gell-Mann had absolutely no physical evidence, and their theory was essentially little more than a mathematical construct to explain the properties of the different particles people had discovered. Within a year, supporters of the theory Sheldon Lee Glashow and James Bjorken suggested that a fourth quark, which they called the 'charm' quark, should be added to the theory, in order to better explain radioactivity (ask me about the weak nuclear force, go on, I dare you). It was also later realised that the charm quark could help explain the puzzling behaviour of the kaon, a particle discovered in cosmic rays some 15 years earlier that nobody properly understood. Support for the quark theory grew; and then, in 1968, a team studying deep inelastic scattering (another wonderfully blunt technique that involves firing an electron at a nucleus and studying how it bounces off in minute detail) revealed a proton to consist of three point-like objects, rather than being the solid, fundamental blob of matter it had previously been thought to be. Three point-like objects matched exactly Zweig and Gell-Mann's prediction for the existence of quarks; they had finally moved from mathematical theory to physical reality.

(The quarks discovered were of the up and down flavours; the charm quark wouldn’t be discovered until 1974, by which time two more quarks, the top and bottom, had been predicted to account for an incredibly obscure theory concerning the relationship between antimatter and normal matter. No, I’m not going to explain how that works. For the record, the bottom quark was discovered in 1977 and the top quark in 1995)

Nowadays, the six quarks form an integral part of the Standard Model; physics' best attempt to explain how everything in the world works, at least on the level of fundamental interactions. Many consider them, along with the six leptons and four bosons*, to be the fundamental particles that everything is made of; these particles exist, are fundamental, and that's an end to it. But the Standard Model is far from complete; it isn't readily compatible with general relativity, doesn't explain gravity, and can't account for many observed effects in cosmology blamed on 'dark matter' or 'dark energy'- plus it gives rise to a few paradoxical situations that we aren't sure how to explain. Some say it just isn't finished yet, and that we just need to think of another theory or two and discover another boson. Others say that we need to look deeper once again and find out what quarks themselves contain…

*A boson, in this context, is anything, like a gluon, that 'carries' a fundamental force; the recently discovered Higgs boson is a slightly awkward addition to the list, since it isn't a force carrier as such but rather the visible sign of the field that gives the W and Z bosons their mass

The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was 'what is our world made of?'. People reasoned that everything had to be made of something- all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some 'stuff' or other. Some reasoned that everything was made up of the most common things present in our earth; the classical 'elements' of earth, air, fire and water. But throughout the latter stages of the last millennium, the burgeoning science of chemistry began to debunk this idea. People sought a new theory to answer what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions; why chemicals that reacted together did so in fixed ratios, for example. For a solution to this problem, they returned to an idea almost as old as science itself; that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these 'joinings' was what was responsible for chemical reactions and their behaviour. The ancient Greeks had called these blobs atoms; the early modern scientists who revived the idea called them corpuscles, but it is the older name that has stuck.

By the turn of the twentieth century, thanks to two hundred years of chemistry using atoms to conveniently explain its observations, it was common knowledge among the scientific community that the atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, with a small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance it out; Thomson postulated that the negative electrons 'floated' within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford sought to test the plum pudding model; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that some were deflected at wild angles but that most passed straight through. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when an alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but most of the time it would miss this positive charge completely; most of the atom was empty space. So, he thought, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.

This made sense in theory, but the maths didn't check out; classical physics predicted that orbiting electrons would radiate away their energy and spiral into the nucleus in a tiny fraction of a second, meaning every atom (and with it the whole of creation) should smash itself to pieces. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to hold together; these energy levels (or 'shells') were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr's explanation drew heavily on Max Planck's recent quantum theory, which modelled light as coming in discrete packets of energy, and this suggested that electrons were also quantum particles; this ran contrary to people's previous understanding of them, since they had been presumed to be solid 'blobs' of matter. This was but one step along the principle that defines quantum theory; nothing is actually real, everything is quantum, so don't even try to imagine how it all works.
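
For anyone who wants the numbers, Bohr's energy levels for hydrogen can be written in the now-standard form (a textbook result, quoted here rather than derived):

$$ E_n \;=\; -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots $$

so an electron dropping from level $m$ to level $n$ releases a photon of energy $E_m - E_n = hf$; it is precisely because only these discrete jumps are allowed that each element glows with its own characteristic colours.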

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance's mass, given how little the electrons weighed?

Subsequent experiments have revealed the atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly a millimetre in diameter, but if an atom were the size of St. Paul's Cathedral then its nucleus would be the size of a full stop. Imagining the sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus' structure; it took Ernest Rutherford (the guy who had disproved the plum pudding model) to take the first step along this road when, in 1918, he confirmed that the nucleus of a hydrogen atom comprised just one component (or 'nucleon', as we collectively call them today). Since this component had a positive charge, to balance out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that all the other positive charges in larger atomic nuclei were caused by more protons stuck together in the nucleus. However, having multiple positive charges all in one place would normally cause them to repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of nuclear glue. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same size, collectively constitute around 99.95% of any given atom's mass, and are found in all atomic nuclei bar that of ordinary hydrogen, which is a lone proton. However, even these weren't quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; and I'll pick up that story next time.
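
To get a feel for those proportions without resorting to cathedrals, here is a quick calculation using typical textbook values; the exact radii are illustrative assumptions (they vary from atom to atom), but the orders of magnitude are right.

```python
# Back-of-the-envelope comparison of atomic and nuclear sizes.
# The radii below are rough, typical values, not measurements of any
# particular atom.
atom_radius_m = 1e-10      # about an angstrom, a typical atomic radius
nucleus_radius_m = 1e-15   # about a femtometre, a typical nuclear radius

size_ratio = atom_radius_m / nucleus_radius_m
volume_fraction = (nucleus_radius_m / atom_radius_m) ** 3

print(f"The atom is ~{size_ratio:,.0f} times wider than its nucleus")
print(f"The nucleus fills only ~{volume_fraction:.0e} of the atom's volume")
```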

Art vs. Science

All intellectual human activity can be divided into one of three categories; the arts, humanities, and sciences (although these terms are not exactly fully inclusive). Art here covers everything from the painted medium to music, everything that we humans do that is intended to be creative and make our world as a whole a more beautiful place to live in. The precise definition of 'art' is a major bone of contention among creative types and it's not exactly clear where the boundary lies in some cases, but here we can categorise everything intended to be artistic as an art form. Science here covers every one of the STEM disciplines; science (physics, biology, chemistry and all the rest in its vast multitude of forms and subgenres), technology, engineering (strictly speaking those two come under the same branch, but technology is too satisfying a word to leave out of any self-respecting acronym) and mathematics. Certain portions of these fields too could be argued to be entirely self-fulfilling, and others are considered by some to be beautiful, but since the two rarely overlap the title of art is never truly appropriate. The humanities are an altogether trickier bunch to consider; on one hand they are, collectively, a set of sciences, since they purport to study how the world we live in behaves and functions. However, this particular set of sciences is deemed separate because it deals less with the fundamental principles of nature than with human systems, and human interactions with the world around them; hence the title 'humanities'. Fields as diverse as economics and geography are all blanketed under this title, and are in some ways the most interesting of sciences as they are the most subjective and accessible; the principles of the humanities can be and usually are encountered on a daily basis, so anyone with a keen mind and an eye for noticing the right things can usually form an opinion on them. And a good thing too, otherwise I would be frequently short of blogging ideas.

Each field has its own proponents, supporters and detractors, and all are quite prepared to defend their chosen field to the hilt. The scientists point to the huge advancements in our understanding of the universe and world around us that have been made in the last century, and link these to the immense breakthroughs in healthcare, infrastructure, technology, manufacturing and general innovation and awesomeness that have so increased our quality of life (and life expectancy) in recent years. And it’s not hard to see why; such advances have permanently changed the face of our earth (both for better and worse), and there is a truly vast body of evidence supporting the idea that these innovations have provided the greatest force for making our world a better place in recent times. The artists provide the counterpoint to this by saying that living longer, healthier lives with more stuff in it is all well and good, but without art and creativity there is no advantage to this better life, for there is no way for us to enjoy it. They can point to the developments in film, television, music and design, all the ideas of scientists and engineers tuned to perfection by artists of each field, and even the development in more classical artistic mediums such as poetry or dance, as key features of the 20th century that enabled us to enjoy our lives more than ever before. The humanities have advanced too during recent history, but their effects are far more subtle; innovative strategies in economics, new historical discoveries and perspectives and new analyses of the way we interact with our world have all come, and many have made news, but their effects tend to only be felt in the spheres of influence they directly concern- nobody remembers how a new use of critical path analysis made J. Bloggs Ltd. use materials 29% more efficiently (yes, I know CPA is technically mathematics; deal with it). As such, proponents of humanities tend to be less vocal than those in other fields, although this may have something to do with the fact that the people who go into humanities have a tendency to be more… normal than the kind of introverted nerd/suicidally artistic/stereotypical-in-some-other-way characters who would go into the other two fields.

This bickering between arts & sciences as to the worthiness/beauty/parentage of the other field has led to something of a divide between them; some commentators have spoken of the 'two cultures' of arts and sciences, leaving us with a sect of scientists who find it impossible to appreciate the value of art and beauty, thinking it almost irrelevant compared to what their field aims to achieve (to their loss, in my opinion). I'm not sure that this picture is entirely true; what may be more so, however, is the other end of the stick, those artistic figures who dominate our media who simply cannot understand science beyond GCSE level, if that. It is true that quite a lot of modern science is very, very complex in the details, but Einstein is often credited with saying that if a scientific principle cannot be explained to a ten-year-old then it is almost certainly wrong, and I tend to agree with the sentiment. Even the theory behind the existence of the Higgs boson, right at the cutting edge of modern physics, can be explained by an analogy of a room full of fans and celebrities. Oh, look it up, I don't want to wander off topic here.

The truth is, of course, that no field can sustain a world without the other; a world devoid of STEM would die out in a matter of months, a world devoid of humanities would be hideously inefficient and appear monumentally stupid, and a world devoid of art would be the most incomprehensibly dull place imaginable. Not only that, but all three working in harmony will invariably produce the best results, as master engineer, inventor, craftsman and creator of some of the most famous paintings of all time Leonardo da Vinci so ably demonstrated. As such, any argument between fields as to which is ‘the best’ or ‘the most worthy’ will simply never be won, and will just end up a futile task. The world is an amazing place, but the real source of that awesomeness is the diversity it contains, both in terms of nature and in terms of people. The arts and sciences are not at war, nor should they ever be; for in tandem they can achieve so much more.

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day where we might understand all of it. Yes, it would be a long way off, yes there would be stumbling blocks, but maybe, just maybe, so long as we don’t discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in the year 1814 Pierre-Simon Laplace, a brilliant mathematician and philosopher responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold new article that took this concept to extremes. Laplace lived in the age of 'the clockwork universe', a theory that held Newton's laws of motion to be sacrosanct truths and claimed that these laws of physics caused the universe to just keep on ticking over, just like the mechanical innards of a clock- and just like a clock, the universe was predictable. Just as one hour after five o'clock will always be six, presuming a perfect clock, so every event in the world could be predicted from what came before it. Laplace's argument took such theory to its logical conclusion; if some vast intellect were able to know the precise position of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (although these issues are still considered somewhat up for debate to this day). However, among the scientific community Laplace's ideas conjured up a flurry of debate; some entirely believed in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any 'vast intellect' to fully comprehend so much as a heap of sand made Laplace's argument practically pointless. Other, far later, observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton's laws of motion (which collapse when one does not take into account relativity at very high velocities); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms perhaps, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, to these scientists their work now had one overarching goal; to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, although the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (which is, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of uncertainty or 'randomness' inherent in a system; or, for want of a clearer explanation, consider a sandy beach. All of the grains of sand in the beach can be arranged in a vast number of different ways and still form the shape of a disorganised heap, but if we make a giant, detailed sandcastle instead there are far fewer arrangements of the grains of sand that will result in the same structure. Therefore, if we just consider the two situations separately, it is far, far more likely that we will end up with a disorganised 'beach' structure than with a castle forming of its own accord (which is why sandcastles don't spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy situations, on an atomic scale, means that the universe tends to increase the overall level of entropy in it; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must put in energy, which increases the entropy of the surroundings and thus results in a net entropy increase. This is the second law of thermodynamics; entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
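
The sandcastle argument can be made embarrassingly concrete with a toy model; the code below is a deliberately crude illustration (the numbers are made up, and real beaches contain rather more than ten grains) that counts the arrangements available to a 'heap' versus a single specific 'castle', then converts them to entropy using Boltzmann's formula S = k ln W, quoted in units of k.

```python
# Toy model of entropy as a count of arrangements ('microstates').
import math

GRAINS = 10   # grains of sand in our toy beach
SITES = 20    # positions each grain could occupy

# The 'castle' is one very specific arrangement; the 'heap' is any
# arrangement of the grains among the available positions.
castle_microstates = 1
heap_microstates = math.comb(SITES, GRAINS)

# Boltzmann: S = k * ln(W); we quote S in units of k.
S_castle = math.log(castle_microstates)
S_heap = math.log(heap_microstates)

print(f"Ways to make the heap:   {heap_microstates:,}")
print(f"Ways to make the castle: {castle_microstates}")
print(f"Entropy (units of k): castle = {S_castle:.1f}, heap = {S_heap:.1f}")
```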

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point; a starting point of order from which things get steadily more chaotic, for order cannot keep increasing indefinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under 'our' laws of physics that we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of the existence of God to start everything off; it would be some years before Georges Lemaître and Edwin Hubble gave us the expanding universe and the Big Bang theory, and even now we understand next to nothing about the moment of our creation.

However, this argument wasn't exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a German physicist, often described as one of the inventors of quantum mechanics (work that won him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level; that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at any one time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something's position (apologies in advance to all physics students I end up annoying). If we want to know where something is, then the tried and tested method is to look at the thing; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle's energy that it changes its velocity and momentum, and we cannot predict how. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it's simply not possible.
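
The modern textbook statement of the idea is short enough to quote (this is the standard form, not Heisenberg's original 1927 notation):

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}. $$

Plugging in rough numbers shows why it matters for atoms and not for snooker balls: confine an electron to an atom-sized region ($\Delta x \approx 10^{-10}$ m) and its momentum is uncertain by at least $\sim 5 \times 10^{-25}$ kg m/s, which for something as light as an electron means a velocity uncertainty of several hundred kilometres per second; do the same sum for anything you can hold in your hand and the uncertainty is immeasurably tiny.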

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn't know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, devoting much of his later work to arguing that quantum mechanics must be incomplete and to backing up his famous statement that 'God does not play dice with the universe'. But eventually the scientific world came to accept the truth; that determinism was dead. The universe would never seem so sure and predictable again.

3500 calories per pound

This looks set to be the concluding post in this particular little series on the subject of obesity and overweightness. So, to summarise where we've been so far- post 1: there are a lot of slightly chubby people in the western world, producing statistics that point to a massive obesity problem, and even this mediocre degree of fatness can be seriously damaging to your health. Post 2: why we have spent recent history getting slightly chubby. And for today, post 3: how you can try to do your bit, especially following the Christmas excesses and the soon-broken promises of New Year, to lose some of that excess poundage.

It was Albert Einstein who first demonstrated that mass is nothing more than stored energy, and although the theory behind that precise idea doesn't really carry over to biology, the principle still stands; fat is your body's way of storing energy. It's also a vital body tissue, and not a 100% bad and evil thing to ingest, but if you want to lose it then the aim should simply be to ensure that your energy output, in the form of exercise, exceeds your energy input, in the form of food. The body's response to this is to use up some of its fat stores to replace the lost energy (although this process can take up to a week to run its full course; the body is a complicated thing), meaning that the amount of fat in/on your body will gradually decrease over time. Therefore, slimming down is a process best approached from two directions; restricting what's going in, and increasing what's going out (doing both at the same time is far more effective than an either/or approach). I'll deal with what's going in first.

The most important point to make about improving one’s diet, and when considering weight loss generally, is that there are no cheats. There are no wonder pills that will shed 20lb of body fat in a week, and no super-foods or nutritional supplements that will slim you down in a matter of months. Losing weight is always going to be a messy business that will take several months at a minimum (the title of this post refers to the calorie content of body fat, meaning that to lose one pound you must expend 3500 more calories than you ingest over a given period of time), and unfortunately prevention is better than cure; but moping won’t help anyone, so let’s just gather our resolve and move on.
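
Since the title of this post is basically an equation, it seems only fair to run the numbers; the sketch below applies the 3500-calories-per-pound rule of thumb directly, and should be treated as a ballpark guide rather than physiology (real weight loss is never this linear).

```python
# Rough calculator for the '3500 calories per pound' rule of thumb.
CALORIES_PER_POUND = 3500

def weeks_to_lose(pounds, daily_deficit_kcal):
    """Estimate weeks needed to lose `pounds` of fat at a steady
    daily calorie deficit, using the simple linear rule."""
    return (pounds * CALORIES_PER_POUND) / (daily_deficit_kcal * 7)

# A sustained 500 kcal/day deficit works out at about a pound a week:
print(f"{weeks_to_lose(10, 500):.0f} weeks to lose 10 lb at 500 kcal/day")
print(f"{weeks_to_lose(10, 250):.0f} weeks to lose 10 lb at 250 kcal/day")
```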

There is currently a huge debate going on over whether the nation's diet problem is one of amount or of content; whether people are eating too much, or just the wrong stuff. In most cases it's probably going to be a mixture of the two, but I tend to favour the latter answer; and in any case, there's not much I can say about the former beyond 'eat less stuff'. I am not a good enough cook to offer any great advice on what foods you should or shouldn't be avoiding, particularly since the consensus appears to change every fortnight, so instead I will concentrate on the one solid piece of advice that I can champion; cook your own stuff.

This is a piece of advice that many people find hard to cope with- as I said in my last post, our body doesn't want to waste time cooking when it could be eating. When faced with the unknown product of one's efforts in an hour's time, versus the surety of a ready meal or fast food within five minutes, the latter option and all the crap that goes into it starts to seem a lot more attractive. The trick is, therefore, to learn how to cook quickly- the best meals should either take less than 10-15 minutes of actual effort to prepare and make, or be able to be made in large amounts and last for a week or more. Or, even better, both. Skilled chefs achieve this by having their skills honed to a fine art and working at a furious rate, but then again they're getting paid for it; for the layman, a better solution is to know the right dishes. I'm not going to include a full recipe list, but there are thousands online, and there is a skill to reading recipes; it can be easy to get lost between a long list of numbers and a complicated ordering system, but by reading between the lines one can often identify which recipes really mean 'chop it all up and chuck it in some water for half an hour'.

That’s a very brief touch on the issue, but now I want to move on and look at energy going out; exercise. I personally would recommend sport, particularly team sport, as the most reliably fun way to get fit and enjoy oneself on a weekend- rugby has always done me right. If you’re looking in the right place, age shouldn’t be an issue (I’ve seen a 50 year old play alongside a 19 year old student at a club rugby match near me), and neither should skill so long as you are willing to give it a decent go; but, sport’s not for everyone and can present injury issues so I’ll also look elsewhere.

The traditional form of fat-burning exercise is jogging, but that's an idea to be taken with a large pinch of salt and caution. Regular joggers will lose weight, it's true, but jogging places an awful lot of stress on one's joints (swimming, cycling and rowing are all good forms of 'low-impact exercise' that avoid this issue), and suffers the crowning flaw of being boring as hell. To me, anyway- it takes up a good chunk of time, during which one's mind is so filled with the thump of footfalls and aching limbs that one is forced to endure the experience rather than enjoy it. I'll put up with that for strength exercises, but not for weight loss when two far better techniques present themselves; intensity sessions and walking.

'Intensity sessions' is just a posh name for doing very, very tiring exercise for a short period of time; they're great for burning fat & building fitness, but I'll warn you now that they are not pleasant. As the name suggests, these involve very high-intensity exercise (as a general rule, you should not be able to talk throughout high-intensity work) performed either continuously or next to continuously for relatively short periods of time- an 8-minute session a few times a week should be plenty. This exercise can take many forms; shuttle runs (sprinting back and forth as fast as possible between two marked points or lines), suicides (doing shuttle runs between one 'base' line and a number of different lines at different distances from the base, such that one's runs change in length after each set) and tabata sets (picking an easily repeatable exercise, such as squats, performing them as fast as possible for 20 seconds, followed by 10 seconds of rest, then another 20 seconds of exercise, and so on for 4-8 minutes) are just three examples. Effective though these are, it's difficult to find an area of empty space to perform them in without getting awkward looks and the odd spot of abuse from passers-by or neighbours, so they may not be ideal for many people (tabata sets or other exercises such as press-ups are an exception, and can generally be done in a bedroom; Mark Lauren's excellent 'You Are Your Own Gym' is a great place to start for anyone interested in pursuing this route to lose weight & build muscle). This leaves us with one more option; walking.
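
For anyone who fancies trying a tabata set and doesn't want to keep glancing at a stopwatch, a timer is about ten lines of code; the work/rest durations below simply follow the 20/10 pattern described above, and the default of eight rounds (four minutes) is my own choice rather than anything official.

```python
# Minimal tabata-style interval timer: 20s work, 10s rest, repeated.
import time

def tabata(rounds=8, work_s=20, rest_s=10):
    for i in range(1, rounds + 1):
        print(f"Round {i}/{rounds}: WORK for {work_s}s")
        time.sleep(work_s)
        print(f"Round {i}/{rounds}: rest for {rest_s}s")
        time.sleep(rest_s)
    print("Done- well earned.")

if __name__ == "__main__":
    tabata()  # 8 rounds of 20s/10s = 4 minutes total
```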

To my mind, if everyone ate properly and walked 10,000 steps per day, the scare stats behind the media's obesity fixation would disappear within a matter of months. 10,000 steps may seem a lot, and for many holding office jobs it may seem impossible, but walking is a wonderful form of exercise since it allows you to lose yourself in thought or music, whichever takes your fancy. Even if you don't have time for a separate walk, with a pedometer in hand (they are built into many modern iPods, and free pedometer apps are available for both iPhone and Android) and a target in mind (10k is the standard), after a couple of weeks it's not unusual to find yourself subtly changing the tiny aspects of your day (stairs instead of lift, that sort of thing) to try and hit your target; and the results will follow. As car ownership, an office economy and a lack of free time have all grown in the last few decades, we as a nation do not walk as much as we used to. It's high time that changed.
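
Out of curiosity, it's worth seeing what 10,000 steps is actually worth in calories; the per-step figure below is a commonly quoted ballpark and very much an assumption on my part (the real value depends on your weight, pace and the hills involved), so treat the output as a rough order of magnitude only.

```python
# Very rough estimate of the calories burned by a daily step target.
KCAL_PER_STEP = 0.04   # ballpark: ~80 kcal per mile at ~2000 steps/mile

def kcal_from_steps(steps):
    return steps * KCAL_PER_STEP

daily = kcal_from_steps(10_000)
weekly = daily * 7
print(f"10,000 steps ~ {daily:.0f} kcal per day, ~{weekly:.0f} kcal per week")
print(f"On the 3500-per-pound rule, that's ~{weekly / 3500:.1f} lb of fat a week")
```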

The Conquest of Air

Everybody in the USA, and in fact just about everyone across the world, has heard of Orville and Wilbur Wright. Two of the pioneers of aviation, the brothers finally achieved one of man's long-held dreams- control and mastery of air travel- when their experimental biplane Flyer made the first sustained, controlled, powered, heavier-than-air flight on the morning of December 17, 1903.

However, what is often puzzling when considering the Wright brothers’ story is the number of misconceptions surrounding them. Many, for instance, are under the impression that they were the first people to fly at all, inventing all the various technicalities of lift, aerofoil structures and control that are now commonplace in today’s aircraft. In fact, the story of flight, perhaps the oldest and maddest of human ambitions, an idea inspired by every time someone has looked up in wonder at the graceful flight of a bird, is a good deal older than either of them.

Our story begins, as does nearly all technological innovation, in imperial China, around 300 BC (the Greek scholar Archytas had admittedly made a model wooden pigeon ‘fly’ some 100 years previously, but nobody is sure exactly how he managed it). The Chinese’s first contribution was the invention of the kite, an innovation that would be insignificant if it wasn’t for whichever nutter decided to build one big enough to fly in. However, being strapped inside a giant kite and sent hurtling skywards not only took some balls, but was heavily dependent on wind conditions, heinously dangerous and dubiously useful, so in the end the Chinese gave up on manned flight and turned instead to unmanned ballooning, which they used for both military signalling and ceremonial purposes. It isn’t actually known if they ever successfully put a man into the air using a kite, but they almost certainly gave it a go. The Chinese did have one further attempt, this time at inventing the rocket engine, some years later, in which a young and presumably mental man theorised that if you strapped enough fireworks to a chair then they would send the chair and its occupants hurtling into the night sky. His prototype (predictably) exploded, and it wasn’t for two millennia, after the passage of classical civilisation, the Dark Ages and the Renaissance, that anyone tried flight again.

That is not to say that the idea didn't stick around. The science was, admittedly, beyond most people, but as early as 1500 Leonardo da Vinci, after close examination of bird wings, had successfully deduced the principle of lift and made several sketches showing designs for a manned glider. The design was never tested, and not fully rediscovered for many hundreds of years after his death (Da Vinci was not only a controversial figure and far ahead of his time, but wrote his notebooks in mirror writing, and they lay scattered and largely unread for centuries), but modern-day experiments have shown that his design would probably have worked. Da Vinci also put forward the popular idea of ornithopters, aircraft powered by a flapping motion as in bird wings, and many subsequent would-be aviators tried to emulate this method of propulsion. Needless to say, these all failed (not least because very few of the inventors concerned actually understood aerodynamics).

In fact, it wasn't until the late 18th century that anyone started to really make any headway in the pursuit of flight. In 1783, a Parisian physics professor, Jacques Charles, built on the work of several Englishmen concerning the newly discovered hydrogen gas and the properties and behaviour of gases themselves. Theorising that, since hydrogen was less dense than air, it should follow Archimedes' principle of buoyancy and rise, thus enabling it to lift a balloon, he launched the world's first hydrogen balloon from the Champ de Mars on August 27th. The balloon was only small, and there were significant difficulties encountered in building it, but in the design process Charles, aided by his engineers the Robert brothers, invented a method of treating silk to make it airtight, paving the way for future pioneers of aviation. Whilst Charles made some significant headway in the launch of ever-larger hydrogen balloons, he was beaten to the next significant milestones by the Montgolfier brothers, Joseph-Michel and Jacques-Etienne. In that same year, their far simpler hot-air balloon designs not only put the first living things (a sheep, a rooster and a duck) into the atmosphere, but, just a month later, a human too- Jacques-Etienne was the first European, and probably the first human, ever to fly.
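
Archimedes' principle also tells you exactly how much a hydrogen balloon can carry; this is a standard lifting-gas calculation with rounded sea-level densities, not anything Charles himself would have written down in this form:

$$ F_{\text{lift}} \;=\; (\rho_{\text{air}} - \rho_{\text{H}_2})\,gV \;\approx\; (1.2 - 0.09)\ \text{kg/m}^3 \times 9.8\ \text{m/s}^2 \times V, $$

so every cubic metre of hydrogen supports roughly 1.1 kg of envelope, basket and passenger- which is why early balloons had to be so enormous to lift anything useful.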

After that, balloon technology took off rapidly (no pun intended). The French rapidly became masters of the air, being the first to cross the English Channel and creators of the first steerable and powered balloon flights. Finally settling on Charles’ hydrogen balloons as a preferable method of flight, blimps and airships began, over the next century or so, to become an accepted method of travel, and would remain so right up until the Hindenburg disaster of 1937, which rather put people off the idea. For some scientists and engineers, humankind had made it- we could now fly, could control where we were going at least partially independent of the elements, and any attempt to do so with a heavier-than-air machine was both a waste of time and money, the preserve of dreamers. Nonetheless, to change the world, you sometimes have to dream big, and that was where Sir George Cayley came in.

Cayley was an aristocratic Yorkshireman, a skilled engineer and inventor, and a magnanimous, generous man- he offered all of his inventions for the public good and expected no payment for them. He dabbled in a number of fields, including seatbelts, lifeboats, caterpillar tracks, prosthetics, ballistics and railway signalling. In his development of flight, he even reinvented the wheel- he developed the idea of holding a wheel in place using thin metal spokes under tension rather than solid ones under compression, in an effort to make wheels lighter, and is thus responsible for making all modern bicycles practical to use. However, he is most famous for being the first man ever, in 1853, to put somebody into the air using a heavier-than-air glider (although Cayley may have put a ten-year-old boy aloft in an earlier glider four years before that).

The man in question was Cayley's coachman (or butler- historical sources differ), who was (perhaps understandably) so hesitant to go up in his boss' mental contraption that he handed in his notice upon landing after his flight across Brompton Dale, stating as his reason that 'I was hired to drive, not fly'. Nonetheless, Cayley had shown that the impossible could be done- man could fly using just wings and wheels. He had also designed the aerofoil from scratch, identified the forces of thrust, lift, weight and drag that govern an aircraft's movements, and paved the way for the true pioneer of 'heavy' flight- Otto Lilienthal.
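
Those four forces are still exactly how flight is described today; the lift half of the balance is usually written with the standard modern lift equation (a formula that postdates Cayley, quoted here for context rather than as his own work):

$$ L \;=\; \tfrac{1}{2}\,\rho\, v^2 S\, C_L, $$

where $\rho$ is the air density, $v$ the airspeed, $S$ the wing area and $C_L$ a coefficient depending on the wing's shape and angle- the quantity Lilienthal would later spend years measuring for real wings.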

Lilienthal (aka 'The Glider King') was another engineer, taking out some 25 patents in his life, including a revolutionary new engine design. But his fame comes from a world without engines- the world of the sky, with which he was obsessed. He was just a boy when he first strapped wings to his arms in an effort to fly (which obviously failed completely), and he later published works detailing the physics of bird flight. It wasn't until 1891, aged 43, once his career and financial position were stable and his fighting in the Franco-Prussian War was long behind him, that he began to fly in earnest, building around 12 gliders over a 5-year period (of which 6 still survive). It might have taken him a while, but once he started there was no stopping him, as he made over 2000 flights in just 5 years (averaging more than one every day). During this time he was only able to rack up 5 hours of flight time (meaning his average flight lasted just 9 seconds), but his contribution to his field was enormous. He was the first to be able to control and manoeuvre his machines by varying his position and weight distribution, a factor whose importance he realised was absolutely paramount, and he also recognised that a proper understanding of how to achieve powered flight (a pursuit that had been proceeding largely unsuccessfully for the past 50 years) could not be achieved without a basis in unpowered glider flight, and without learning to work in harmony with aerodynamic forces. Tragically, one of Lilienthal's gliders crashed in 1896, and he died in hospital the following day. But his work lived on, and the story of his exploits and his death reached across the world, including to a pair of brothers living in Dayton, Ohio, USA, by the name of Wright. Together, the Wright brothers made huge innovations- they redesigned the aerofoil more efficiently, revolutionised aircraft control using wing-warping technology (another idea possibly invented by da Vinci), conducted hours of testing in their own wind tunnel, built dozens of test gliders and brought together the work of Cayley, Lilienthal, da Vinci and a host of other, mostly sadly dead, pioneers of the air. The Wright brothers are undoubtedly the conquerors of the air, being the first to show that man need not be constrained by either gravity or wind, but can use the air as a medium of travel unlike any other. But the credit is not theirs alone- it is a credit shared between all those who lived and died in pursuit of the dream of flying like birds. To quote Lilienthal's dying words, as he lay crippled by mortal injuries from his crash, 'Sacrifices must be made'.

SCIENCE!

One book that I always feel I should understand better than I do (it's the mechanics concerning light cones that stretch my ability to visualise) is Professor Stephen Hawking's 'A Brief History of Time'. The content is roughly what a physics or astronomy student would nowadays learn in first-year cosmology, but when it was first released it was close to the cutting edge of modern physics. It is a testament to the great charm of Hawking's writing, as well as his ability to sell it, that the book has since sold millions of copies, and that Hawking himself is the most famous scientist of our age.

The reason I bring it up now is because of one passage from it that sprang to mind the other day (I haven't read it in over a year, but my brain works like that). In this extract, Hawking claims that some 500 years ago it would have been possible for a (presumably rich, intelligent, well-educated and well-travelled) man to learn everything there was to know about science and technology in his age. This is, when one thinks about it, a rather bold claim, considering the vast scope of what 'science' covers- even five centuries ago this would have included medicine, biology, astronomy, alchemy (chemistry not having really been invented), metallurgy and materials, every conceivable branch of engineering from agricultural to mining, and the early forerunners of physics, to name but some. To discover everything would have been quite some task, but I don't think an entirely impossible one, and Hawking's point stands: back then, there wasn't all that much 'science' around.

And now look at it. Someone with an especially good memory could perhaps memorise the contents of a year’s worth of New Scientist, or perhaps even a few years of back issues if they were some kind of super-savant with far too much free time on their hands… and they still would have barely scratched the surface. In the last few centuries, and particularly the last hundred or so years, humanity’s collective march of science has been inexorable- we have discovered neurology, psychology, electricity, cosmology, atoms and further subatomic particles, all of modern chemistry, several million new species, the ability to classify species at all, more medicinal and engineering innovations than you could shake a stick at, plastics, composites and carbon nanotubes, palaeontology, relativity, genomes, and even the speed of spontaneous combustion of a burrito (why? well why the f&%$ not?). Yeah, we’ve come a long way.

The basis for all this change occurred during the scientific revolution of the 16th and 17th centuries. The precise cause of this change is somewhat unknown- there was no great upheaval, but more of a general feeling that 'hey, science is great, let's do something with it!'. Some would argue that the idea that there was any change in the pace of science itself is untrue, and that the groundwork for this period of advancing scientific knowledge was largely done by Muslim astronomers and mathematicians several centuries earlier. Others may say that the increasing political and social changes that came with the Renaissance not only sent society reeling slightly, rendering it more pliable to new ideas and boundary-pushing, but also changed the way that the rich and noble functioned. Instead of barons, dukes and the nobility simply resting on their laurels and raking in the cash as the feudal system had previously allowed them to, an increasing number of them began to contribute to the arts and sciences, becoming agents of change and, in the case of some, agents in the advancement of science.

It took a long time for science to gain any real momentum. For many a decade hardly anyone was a professional scientist or even engineer; those who studied these things generally did so in their spare time. Universities were typically run by monks and populated by the sons of the rich or the younger sons of nobles- they were places where you both lived and learned expensively, but they were not the centres of research that they are nowadays. They also harboured a huge degree of resistance to anything that contradicted the ideas of Aristotle and others, which had hardened into orthodoxy since their rediscovery, and as such getting one’s new ideas taken seriously was a severe task. Just as many scientists were people who were merely interested in a subject, and rich and intelligent enough to dabble in it, as were genuinely committed to learning. Then there was the notorious religious problem- whilst the Church had no quarrel with most scientific endeavours, the rise of astronomy began a long and ceaseless feud between the Papacy and physics over the fallibility of the Bible, and some, such as Galileo, were actively persecuted by the Church for their new claims; Giordano Bruno was even burned at the stake. But by far the biggest stumbling block was the sheer shortage of potential students of science- most common people were peasants, who would generally work the land at their lord’s will, and had zero chance of elevating their life prospects beyond that. So: there was hardly anyone to do it, it was really, really hard to make any progress in, and you might get killed for trying. And yet, somehow, science just kept on rolling onwards. A new theory here, an interesting experiment there, the odd interesting conversation between intellectuals, and new stuff kept turning up. No huge amount, but it was enough to keep things ticking over.

But, as the industrial revolution swept Europe, things started to change. As revolutions came and went, the power of the people started to rise, slowly squeezing out the influence and control of aristocrats by sheer weight of numbers. Power moved from the monarchy to the masses, from the Lords to the Commons- those with real control were now the entrepreneurs and factory owners, not old men sitting in country houses on steadily shrinking estates. Society began to become more fluid, and anyone (well, more people than previously, anyway) could become the next big fish by inventing something new. Technology became of ever-increasing importance, and so, therefore, did its discovery. Research by experiment became ever more accessible, and science began to gather speed. During the 20th century things really began to motor- two world wars drove the search for new technologies to an even more frenzied pace, the universal schooling of children was breeding a new generation of thinkers, and the idea of a university as a place of learning and research became cemented in popular culture. Anyone could think of something new, and in that respect everyone was a scientist.

And this, to me, is the key to the world we live in today- a world in which thousands of scientific papers are published every day, many in branches of science pursued largely for their own sake. But this isn’t the true success story of science. The real success lies in the products and concepts we see every day- the iPhone, the pharmaceuticals, the infrastructure. Developing none of these uncovered a new effect or a new material, or enabled us to better understand the way our thyroid gland works, and in that respect they are not science- but each required someone to think a little bit, to perhaps try a different way of doing something, to face a challenge. They pushed us forward one tiny, inexorable step, put a little bit more knowledge into the human race, and that, really, is the secret. There are 7 billion of us on this planet right now. Imagine if every single one contributed just one step forward.

The Age of Reason

Science is a wonderful thing- particularly in the modern age, where the more adventurous (or more willing to tempt fate, depending on your point of view) like to think that most of science is actually pretty well done and dusted. I mean, yes, there are a lot of little details we have yet to work out, but the big stuff, the major hows and whys, have been basically sorted. We know why there are rainbows, why quantum tunnelling composite appears to defy basic logic, and even why you always seem to pick the slowest queue- science appears to have got it pretty much covered.

[I feel I must take this opportunity to point out one of my favourite stories about the world of science- at the start of the 20th century, there was a prevailing attitude among physicists that physics was going to last, as an advancing science, for about another 20 years or so. They basically presumed that they had worked almost everything out, and now all they had to do was tie up the loose ends. However, a couple of those loose ends simply refused to budge under the classical laws: black-body radiation, which Max Planck only tamed by assuming that energy comes in discrete chunks, and the photoelectric effect, which Einstein explained by modelling light (which everyone knew was a wave) as a particle instead. Between them they opened the door to the modern age of quantum theory, and physics as a whole took one look at all the new questions this raised and, as one, performed a collective facepalm.]
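[Continuing the aside, for anyone who wants the one-line version of Einstein’s fix (this is just the standard textbook form of the photoelectric equation, nothing exotic): if light arrives in packets of energy h\nu, then the fastest electrons knocked out of a metal obey

E_{k,\max} = h\nu - \phi

where h is Planck’s constant, \nu is the frequency of the light and \phi is the ‘work function’ of the metal. The fact that the electrons’ energy depends on the light’s frequency rather than its brightness is precisely what the pure wave picture could never explain.]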

In any case, we are now at such an advanced stage of the scientific revolution, that there appears to be nothing, in everyday life at least, that we cannot, at least in part, explain. We might not know, for example, exactly how the brain is wired up, but we still have enough of an understanding to have a pretty accurate guess as to what part of it isn’t working properly when somebody comes in with brain damage. We don’t get exactly why or how photons appear to defy the laws of logic, but we can explain enough of it to tell you why a lens focuses light onto a point. You get the idea.

Any scientist worth his salt will scoff at this- a chemist will bang on about the fact that nanotubes were only developed a decade ago and will revolutionise the world in another, a biologist will tell you about the myriad species we know next to nothing about, and the myriad more that we haven’t even discovered yet, and a theoretical physicist will start quoting logical impossibilities and make you feel like a complete fool. But this is all, really, rather high-level science- the day-to-day stuff is all pretty much done. Right?

Well… it’s tempting to think so. But in reality all those scientists are pretty much correct- Newton’s great ocean of truth remains very much a wild and unexplored place, and not just in the nerdy corners that nobody without three separate doctorates can understand. There are some things that everybody, from the lowliest man in the street to the cleverest scientist, can comprehend completely and yet not understand in the slightest.

Take, for instance, the case of Sugar the cat. Sugar was a part-Persian with a hip deformity who often got uncomfortable in cars. As such, when her family moved house, they opted to leave her with a neighbour. After a couple of weeks, Sugar disappeared, before reappearing 14 months later… at her family’s new house. What makes this story even more remarkable? The fact that Sugar’s owners had moved from California to Oklahoma, and that a cat with a severe hip problem had trekked 1500 miles, over 100 a month, to a place she had never even seen. How did she manage it? Nobody has a sodding clue.

This isn’t the only story of long-distance cat return, although Sugar holds the distance record. But an ability to navigate that a lot of sat navs would be jealous of isn’t the only surprising oddity in the world of nature. Take leopards, for example. The most common, yet hardest to find and possibly deadliest, of ‘The Big Five’, they are, as everyone knows, born killers. Humans, by contrast, are in many respects born prey- we are slow over short distances, have no horns, claws, long teeth or other natural defences, are fairly poor at hiding and don’t even live in herds for safety in numbers. Especially vulnerable are, of course, babies and young children, who by animal standards take an enormously long time even to stand upright, let alone mature. So why exactly, in 1938, were a leopard and her cubs found with a near-blind human child whom she had carried off as a baby five years earlier? Even more remarkable was the superlative sense of smell the child had, being able to differentiate between different people and even objects with nothing more than a good sniff- which also reminds me of a video I saw a while ago of a blind Scottish boy who can tell what material something is made of and how far away it is (well enough to play basketball) simply by making a clicking sound with his mouth.

I’m not really sure what I’m trying to say in this post- I have a sneaking suspicion my subconscious simply wanted an excuse to share some of the weirdest stories I have yet to see on Cracked.com. So, to round off, I’ll leave you with a final one. In 1984 a hole was found in a farm in Washington State, about 3 metres by 2 and around 60cm deep. 25 metres away, the three tons of grass-covered earth that had previously filled the hole was found- completely intact, in a single block. One person described it as looking like it had been cut away with ‘a gigantic cookie cutter’, but this failed to explain why all of the roots hanging off it were intact. There were no tracks or any distinguishing features apart from a dribble of earth leading between hole and divot, and the closest thing anyone had to an explanation was to lamely point out that there had been a minor earthquake 20 miles away a week beforehand.

When I invent a time machine, forget killing Hitler- the first thing I’m doing is going back to find out what the &*^% happened with that hole.

Time is a funny old thing…

Today I am rather short on time- the work I have to do is beginning to mount up despite (and partially because of) a long weekend. To most people this is a perfectly good reason to put up an apologetic cop-out of a post to save them having to work on it, but for me it is a perfectly good excuse for my bloodymindedness to take over, so I thought I might write something about time.
As strange and almost abstract a concept as it is, time can be viewed from a number of perspectives- the physical sense, the thermodynamic sense, and the human sense are the three obvious ones that spring to mind. To a physicist, time is a dimension much like width and length, and is far from unique- in fact a sizeable sector of theoretical physics is open to the idea that the big bang began with more dimensions than the ones we see, only 4 of which (3 spatial and one temporal) opened up into the rest of the universe, the others existing only on a microscopic, subatomic scale (which might explain why the quantum world is so plain weird- hey, I’m no physicist, and the web can probably tell you more). The really special thing about time compared to the spatial dimensions, to a physicist (among a long list of things that are confusing and difficult to describe), is that it is the only dimension with an obvious direction. People often talk of ‘the arrow of time’, but the idea of any other dimension having an arrow is only a sort of arbitrary point of reference (north & south, up & down are only relative to our own Earth and so are merely convenient reference points). This idea of time having an irreversible arrow annoys a lot of physicists, as there appears to be little, fundamentally, that means we couldn’t travel along time in the other direction- the theory of relativity, for example, shows how fluid time can be. The idea of time’s direction has a lot to do with thermodynamics, which is where the second perspective of time comes from.
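To give a rough flavour of what ‘time as a dimension’ looks like on paper- and this is just the standard special-relativity bookkeeping, nothing exotic- the ‘separation’ between two events in spacetime is usually written as

s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2

and the lone minus sign in front of the time term is what makes time behave differently from the three spatial directions. Notably, nothing in that formula tells you which way along t you are allowed to go- which is exactly why the arrow of time bothers physicists so much.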
Admittedly I am using the word thermodynamic very loosely, as what I am actually thinking of is more to do with the psychological arrow of time. To quickly paraphrase what I mean by thermodynamics: the second law of thermodynamics states that the universe’s level of entropy, or randomness, will always increase or stay the same, never decrease, because a more random, chaotic system is more stable. One way of thinking of this is like a beach- the large swathes of sand can be arranged in a huge number of configurations and still look the same, but if there are lots of sandcastles over it, there is a lot less randomness. One can seemingly reverse this process by building more sandcastles, making the beach more ordered, but to do this requires energy which, on a universal level, increases the universe’s overall entropy. It is for this reason that a smashed pot will always have been preceded by, but never followed by, the same pot all in one piece.
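For anyone who prefers their randomness quantified, the standard statistical way of writing all that down (Boltzmann’s, not mine) is

S = k_B \ln W, \qquad \Delta S_{\text{universe}} \ge 0

where W is the number of microscopic arrangements (of sand grains, say) that look identical from a distance and k_B is just a constant to get the units right. A beach covered in sandcastles has far fewer such arrangements than a flat one, hence lower entropy, and the inequality on the right is the second law in a single line.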
The thing is, the psychological and thermodynamic arrows of time point in the same direction, and the link between them is integral. Our whole view of the passing of time is shaped by the idea of events that have irrevocably ‘happened’ and are ‘over’, hence our eternal fascination with ‘what ifs’. We persistently worry about past mistakes, what could have been, and what things were like but never can be again- hence the popularity of historical stories, ruins, nostalgia and grumbling about teenagers. For a better explanation of the various ‘arrows of time’, try Stephen Hawking’s ‘A Brief History of Time’- it is somewhat out of date and it has become fashionable to consider it overly simplistic, but it is still a good grounding in high-level physics.
The final, and to me most interesting, perspective of time I want to talk about is deeply linked to the psychological arrow and our thoughts of the passing of time, but brings its own, uniquely relative, view- the human perspective (notice how it is always people I seem to find the most interesting.) We humans view time in a way that, when thought about, paints a weirdly fluid portrait of the behaviour of time. There is never enough time to work, too much time spent waiting, not enough time spent on holidays or relaxing, too much time spent out of work, too little time spent eating the cake and too much spent washing up. There are the awkward pauses in conversation that seem to drag on for an eternity, especially when they come just after the moment when the entire room goes silent for no accountable reason, enabling everyone to hear the most embarrassing part of your conversation. There are those hours spent doing things you love that you just gobble up, revelling in your own happiness, and the bitter, painful minutes of deep personal pain.
Popular culture and everyday life often mention or feature these weirdly human notions of time being relative to the scenario- Albert Einstein himself is said to have described relativity thus: “When you are talking to a nice girl, an hour seems like a second. When you have your hand on a bar of red-hot iron, a second seems like an hour”. In fact, when you think about it, it is almost as if time were a living thing, at least in the context of our references to it. This, I think, is the nub of the matter- time is something that we encounter, in its more thermodynamic form, every day of our lives, and just as pet owners tend to anthropomorphise their pets’ facial expressions, so the human race has personified time in general conversation (well, at least in the western world- I cannot speak with any certainty for anywhere non English-speaking). Time is almost one of the family- ever-present, ever-around, ever-referred to, until it becomes as human as a long-lost friend, in its own little way.
Finally, on the subject of time, Mr Douglas Adams: “Time is an illusion; lunchtime doubly so”