The Value of Transparency

Once you start looking for it, it can be quite staggering to realise just how much of our modern world is, quite literally, built on glass. The stuff is manufactured in vast quantities, forming our windows, lights, screens, skyscraper facades and countless other products. Some argue that it is even responsible for the development of the world as we know it, particularly in the west; it’s almost a wonder we take it for granted so.

Technically, our commonplace use of the word ‘glass’ rather oversimplifies the term; glasses are in fact a family of materials that all exhibit the same amorphous structure and behaviour under heating whilst not actually all being made from the same stuff. The member of this family that we are most familiar with, and will commonly refer to as simply ‘glass’, is soda-lime glass, made predominantly from silicon dioxide (silica) with a few other additives to make it easier to produce. But I’m getting ahead of myself; let me tell the story from the beginning.

Like all the best human inventions, glass was probably discovered by accident. Archaeological evidence suggests glassworking was probably an Egyptian invention in around the third millennium BC, Egypt (or somewhere nearby) being just about the only place on earth at the time where the three key ingredients needed for glass production occurred naturally and in the same place: silicon dioxide (aka sand), sodium carbonate (aka soda, frequently found as a mineral or from plant ashes) and a relatively civilised group of people capable of building a massive great fire. When Egyptian metalworkers accidentally got sand and soda into their furnaces, they discovered on removing the residue that the two had fused to form a hard, semi-transparent, almost alien substance; the first time glass had been produced anywhere on earth.

This type of glass was far from perfect; for one thing, adding soda has the unfortunate side-effect of making silica glass water-soluble, and for another they couldn’t yet work out how to make the glass clear. Then there were the problems that came with trying to actually make anything from the stuff. The only glass forming technique at the time was called core forming, a moderately effective but rather labour-intensive process illustrated well in this video. Whilst good for small, decorative pieces, it became exponentially more difficult to produce an item by this method the larger it needed to be, not to mention the fact that it couldn’t produce flat sheets of glass for use as windows or whatever.

Still, onwards and upwards and all that, and developments were soon being made in the field of glass technology. Experimentation with various additives soon yielded the discovery that adding lime (calcium oxide), plus a little aluminium and magnesium oxide, made soda glass insoluble, and thus modern soda-lime glass was born. In the first century BC, an even more significant development came along with the discovery of glass blowing as a production method. Glass blowing was infinitely more flexible than core forming, opening up an entirely new avenue for glass as a material, but crucially it allowed glass products to be produced faster, and thus more cheaply, than pottery equivalents. By this time, the Eastern Mediterranean coast where these discoveries took place was part of the Roman Empire, and the Romans took to glass like a dieter to chocolate; glass containers and drinking vessels spread across the Empire from the glassworks of Alexandria, and that was before they discovered that adding manganese dioxide could produce clear glass, suddenly making the material suitable for architectural work too.

Exactly why glass took off on quite such a massive scale in Europe yet remained little more than a crude afterthought in the east and China (the other great superpower of the age) is somewhat unclear. Pottery remained the material of choice throughout the far east, and they got very skilled at making it too; there’s a reason we in the west today call exceptionally fine, high-quality pottery ‘china’. I’ve only heard one explanation for why this should be so, and it centres around alcohol.

Both the Chinese and Roman empires loved their wine, but did so in different ways. To the Chinese, alcohol was a deeply spiritual thing, and played an important role in their religious procedures. This attitude was not unheard of in the west (the Egyptians, for example, believed the god Osiris invented beer, and both Greeks and Romans worshipped a god of wine), but the Roman Empire thought of wine in a secular as well as religious sense; in an age where water was often unsafe to drink, wine became the drink of choice for high society in all situations. One of the key features of wine to the Romans was its appearance, hence why the introduction of clear vessels allowing them to admire its colour was so attractive to them. By contrast, the Chinese day-to-day drink of choice was tea, whose appearance was of far less importance than the ability of its container to dissipate heat (something fine china is very good at). The introduction of clear drinking vessels would, therefore, have met with only a limited market in the east, and hence it never really took off. I’m not entirely sure that this argument holds up under scrutiny, but it’s quite a nice idea.

Whatever the reason, the result was unequivocal; only in Europe was glassmaking technology used and advanced over the years. Stained glass was one major discovery, and crown glass (a method for producing large, flat sheets) another. However, the crucial developments would be made in the early 14th century, not long after the Republic of Venice (already a centre for glassmaking) ordered all its glassmakers to move out to the island of Murano to reduce the risk of fire (which does seem ever so slightly strange for a city founded, quite literally, on water). On Murano, the local quartz pebbles offered glassmakers silica of hitherto unprecedented purity which, combined with exclusive access to a source of soda ash, allowed for the production of exceptionally high-quality glassware. The Murano glassmakers became masters of the art, producing glass products of astounding quality, and from here onwards the technological revolution of glass could begin. The Venetians worked out how to make lenses, in turn allowing for the invention of the telescope (the foundation of Galileo’s astronomical work) and spectacles (extending the working lifespan of scribes and monks across the western world). The widespread introduction of windows (as opposed to fabric-covered holes in the wall) to many houses, particularly in the big cities, dramatically improved the health of their occupants by both keeping the house warmer and helping keep out disease. Perhaps most crucially, the production of high-quality glass vessels was not only to revolutionise biology, and in turn medicine, as a discipline, but to almost single-handedly create the modern science of chemistry, itself the foundation stone upon which most of modern physics is based. These discoveries would all, given enough time and quite a lot of social upheaval, pave the way for the massive technological advancements that would characterise the western world in the centuries to come, and which would finally allow the west to take over from the Chinese and Arabs as the world’s leading technological superpower.* Nowadays, of course, glass has been taken even further, being widely used as a building material (its strength-to-weight ratio far exceeds that of concrete, particularly when it is made to ‘building grade’ standard), in televisions, and in the fibre optic cables that may yet revolutionise our communications infrastructure.

Glass is, of course, not the only thing to have catalysed the technological breakthroughs that were to come; similar arguments have been made regarding gunpowder and the great social and political changes that were to grip Europe between roughly 1500 and 1750. History is never something that one can place a single cause on (the Big Bang excepted), but glass was undoubtedly significant in the western world’s rise to prominence during the second half of the last millennium, and the Venetians probably deserve a lot more credit than they get for creating our modern world.

*It is probably worth mentioning that China is nowadays the world’s largest producer of glass.


A Short History of Blurriness

I am short-sighted; have been since I was about eight. It was glasses for a few years, but then it started to get bad and taking them off for rugby matches ceased to be a feasible strategy if I wanted to be able to catch the ball. So the contact lenses came in, firstly only for match days and subsequently the whole time. Nowadays, quite a lot of my mates are completely unaware that I wake up each morning to a blurry vision of my ceiling, which I guess is a tribute to the general awesomeness of modern technology.

The reasons for poor vision concern the mechanics of the eye; eyes consist of (among other things) a lens made from some squishy substance that means its shape can change, and the retina, a patch of light-sensitive cells at the back of the eye. The aim is to bend light, emanating from a source, so that it all focuses onto one point right on the retina. The extent to which this bending must occur depends on how far away the source is. How much the light is bent depends on the thickness of the lens; if it is thicker, the light is bent to a greater degree, which is preferable if the object is close to you, and vice-versa for objects further away. Your body is able to control the thickness of the lens thanks to a couple of suspensory ligaments running around the top and bottom of the eye, which pull at the lens to stretch it out. If they pull harder, then the lens gets thinner and light is bent less, allowing us to focus on faraway objects. The degree to which these ligaments pull is controlled by the ciliary muscle; when the ciliary muscle pulls, the ligaments slacken, and vice-versa. If the lens were kept at this thinnest setting, then light coming from a source close to us would not be focused onto the retina, and instead of a nice, clean, crisp picture we would instead see a blurry image. All this, it should be pointed out, is working on the scale of fractions of millimetres, and it’s all a very finely-tuned balance.
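
To put rough numbers on all of this, here is a minimal sketch (my own illustration, using the standard thin lens equation and a simplified lens-to-retina distance, not anything anatomically rigorous) of how the focal length the eye’s lens must adopt changes with the distance of the object being viewed; the closer the object, the shorter the required focal length, i.e. the thicker the lens:

```python
# Thin lens equation: 1/f = 1/d_o + 1/d_i
# d_o = distance to the object; d_i = lens-to-retina distance (held fixed here).
# A shorter required focal length f corresponds to a thicker, more strongly
# bending lens. The 17 mm figure is a simplified 'reduced eye' value.

def required_focal_length_mm(object_distance_mm, retina_distance_mm=17.0):
    return 1.0 / (1.0 / object_distance_mm + 1.0 / retina_distance_mm)

for d_o in (250.0, 1000.0, 10000.0, 1e9):  # 25 cm, 1 m, 10 m, 'infinity'
    f = required_focal_length_mm(d_o)
    print(f"object at {d_o/1000:>10.2f} m -> lens must have f = {f:.2f} mm")
```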

In the majority of people, this is no problem at all- their eye muscles work fine and keep the lens at the thickness it needs to be. However, amongst the short-sighted, the ciliary muscle is too big and so cannot relax to the extent that it can in a normal eye. This means that the suspensory ligaments do not have quite the range that they should, and are unable to pull really hard to get the lens out to its thinnest setting. When viewing objects up close, this is no problem at all; the light needs to be bent a lot and it all lines up nicely over the retina, producing a lovely, clear image. However, once objects get further away, try as the ligaments might, they just can’t get the lens thin enough to do its job properly. The end result is that light from faraway objects is bent too much, focusing it onto a point just in front of the retina rather than actually on it, and resulting in a blurry image. In some ways, it’s quite an amusing paradox; the need to wear glasses, so often stereotypically associated with nerdery and physical weakness, comes about as a result of a muscle being too big.

In long-sighted people, the situation is reversed; the ciliary muscle is too small, and is unable to exert the required force to make the lens sufficiently thick to see close-up objects. This causes light to be focused onto a point behind the retina, resulting in the same kind of blurriness and requiring the person concerned to wear reading glasses or similar for dealing with nearby objects.

And whilst we’re on the subject of reading glasses, let us pause and consider glasses and contact lenses in general. In many ways, glasses were humankind’s first tentative step into the field of biomechanics, and I am occasionally amazed that they have been around long enough for us to take them for granted so. Somehow, I find it endlessly amazing that, by looking through some special glass, I can suddenly see things properly; it all feels suspiciously like witchcraft, even if it takes only simple science and geometry to understand. It’s a commonly known fact that light, when passing through glass, slows down and bends. If we mess around looking at the geometry of the problem and apply that to light passing through a convex or concave shape, we arrive at an interesting conclusion: that a convex lens causes light to ‘turn inwards’, focusing initially parallel rays of light onto a point, and that a concave lens will do the reverse, causing light waves to spread out.

As we have seen, our eye has a convex lens built into it already to focus light onto the retina, but we have also seen how this system can fail if all the finely-tuned controls are out of sorts. However, if we place another lens in front of our ‘broken’ lens, we can correct its flaws; if, for example, our original lens is too thick and bends light too much (as in short-sighted people), then by putting a concave lens in front of it we can spread the incoming light outwards first, so that the eye’s over-strong lens now bends it by just the right amount to land on the retina. This, in effect, sets the light rays at such an angle that the eye behaves as if the object were positioned closer to it (my apologies if that sentence made no sense whatsoever), and a similar system using convex lenses can be utilised by long-sighted people. This is the principle upon which both glasses and contact lenses operate.
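
Putting some rough numbers on the corrective lens idea: for thin lenses close together, their powers (measured in dioptres) simply add, so a short-sighted eye whose ‘far point’ (the furthest distance it can focus) is known can be corrected with a concave lens of the appropriate negative power. Here is a minimal sketch under those simplifying assumptions (it ignores, for instance, the small gap between spectacles and eye):

```python
# Powers of thin lenses in contact add (power in dioptres, D = 1/metres).
# A short-sighted eye can only focus out to its 'far point'; the corrective lens
# must take parallel light from far away and make it appear to come from that
# far point, which requires a negative (concave) power of -1 / far_point.

def corrective_power_dioptres(far_point_m):
    return -1.0 / far_point_m

for far_point in (2.0, 1.0, 0.5, 0.25):
    power = corrective_power_dioptres(far_point)
    print(f"far point {far_point:4.2f} m -> prescription of roughly {power:+.2f} D")
```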

Then there’s laser eye surgery, in which the surgeon cuts open the eye and fires a laser at the cornea (the transparent front surface of the eye, which does most of its light-bending) in order to reshape it, before re-sealing it. Now, if you will excuse me, I have to go and huddle under my duvet as a direct result of that image…

F=ma

On Christmas Day 1642, a baby boy was born to a well-off Lincolnshire family at Woolsthorpe Manor. His childhood was somewhat chaotic; his father had died before he was born, and his mother remarried (to a stepfather he came to acutely dislike) when he was three. He later ran away from school, discovered he hated the farming alternative and returned to become the school’s top pupil. He went on to attend Trinity College Cambridge; oh, and to become arguably the greatest scientist and mathematician of all time. His name was Isaac Newton.

Newton started off in a small way, developing the generalised binomial theorem, a technique for expanding powers of binomials (sums of two terms) which is a fundamental tool used pretty much everywhere in modern science and mathematics; the advanced mathematical equivalent of knowing that 2 x 4 = 8. Oh, and did I mention that he was still a student at this point? Taking a break from his Cambridge career for a couple of years due to the minor inconvenience of the Great Plague, he whiled away the hours inventing calculus, which he finalised upon his return to Cambridge. Calculus is the collective name for differentiating and integrating, which between them allow one to find the rate at which something is occurring, the gradient of a graph and the area under it algebraically, plus enabling us to reverse all of the above processes. This makes it sound like rather a neat and useful gimmick, but belies the fact that it allows us to mathematically describe everything from water flowing through a pipe to how aeroplanes fly (the Euler equations mentioned in my aerodynamics posts come from advanced calculus), and the discovery of it alone would have been enough to warrant Newton’s place in the history books. OK, and Leibniz, who discovered pretty much the same thing at roughly the same time, but he got there later than Newton. So there.
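
For anyone who likes to see these things in action, here is a quick illustration of both ideas using Python’s sympy library (my choice of tool, and obviously several centuries removed from anything Newton used):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# Binomial theorem: expanding a power of a two-term sum
print(sp.expand((a + b)**4))   # a**4 + 4*a**3*b + 6*a**2*b**2 + 4*a*b**3 + b**4

# Calculus: differentiation gives the gradient of a curve, integration the area
# under it, and each operation undoes the other
f = x**3
print(sp.diff(f, x))                   # 3*x**2
print(sp.integrate(sp.diff(f, x), x))  # x**3, recovering f (up to a constant)
```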

However, discovering the most important mathematical tool to modern scientists and engineers was clearly not enough to occupy Newton’s prodigious mind during his downtime, so he also turned his attention to optics, aka the behaviour of light. He began by discovering that white light was comprised of all colours, revolutionising all contemporary scientific understanding of light itself by suggesting that coloured objects did not create their own colour, but reflected only certain portions of already coloured light. He combined this with his study of refraction: the way light shone into glass or another transparent material at an angle will bend. This then led him to explain how telescopes worked, why the existing designs (based around refracting light through a lens) were flawed, and to design an entirely new type of telescope (the reflecting telescope) that is used in virtually all modern astronomical equipment, allowing us to study, look at and map the universe like never before. Oh, and he also took the time to theorise the existence of photons (he called them corpuscles), which wouldn’t be discovered for another 250 years.

When that got boring, Newton turned his attention to a subject that he had first fiddled around with during his calculus time: gravity. Nowadays gravity is a concept taught to every schoolchild, but in Newton’s day the idea of a universal force pulling objects to earth was barely even considered. Aristotle’s theories dictated that every object ‘wanted’ to be in a state of stillness on the ground unless disturbed, and Newton was one of the first people to make a serious challenge to that theory in nearly two millennia (whether an apple tree was involved in his discovery is heavily disputed). Not only did he and his contemporary Robert Hooke define the force of gravity, but they also discovered the inverse-square law for its behaviour (aka if you multiply your distance from a planet by 2, then the gravitational force on you decreases by a factor of 2 squared, or 4) and turned it into an equation (F=-GMm/r^2). This single equation would explain Kepler’s work on celestial mechanics, accurately predict the orbit of the ****ing planets (predictions based, just to remind you, on the thoughts of one bloke on earth with little technology more advanced than a pen and paper) and form the basis of his subsequent book: “Philosophiæ Naturalis Principia Mathematica”.
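
As a quick numerical check of the inverse-square behaviour (using modern measured values that Newton never had, so treat this as illustration rather than history):

```python
# Newton's law of gravitation, F = G*M*m / r**2, for a 1 kg object near the Earth.
# Doubling the distance from the Earth's centre cuts the force by a factor of four.
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_earth = 5.97e24   # mass of the Earth, kg
R_earth = 6.371e6   # radius of the Earth, m
m = 1.0             # mass of our test object, kg

def grav_force(r):
    return G * M_earth * m / r**2

print(f"At the surface:        F = {grav_force(R_earth):.2f} N  (i.e. g of ~9.8 m/s^2)")
print(f"At twice the distance: F = {grav_force(2 * R_earth):.2f} N  (a quarter as much)")
```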

Principia, as it is commonly known, is probably the single most important piece of scientific writing ever produced. Not only does it set down all Newton’s gravitational theories and explore their consequences (in minute detail; the book in its original Latin is bigger than a pair of good-sized bricks), but it also defines the concepts of mass, momentum and force properly for the first time; indeed, his definitions survive to this day and have yet to be improved upon. He also set down his three laws of motion: velocity is constant unless a force acts upon an object; the acceleration of an object is proportional to the force acting on it and inversely proportional to its mass (summarised in the title of this post); and action and reaction are equal and opposite. These three laws not only tore two thousand years of scientific theory to shreds, but nowadays underlie everything we understand about the mechanics of objects; indeed, no flaw was found in Newton’s equations until relativity was discovered 250 years later, and even that only really matters for objects travelling at around 100,000 kilometres per second or faster; not something Newton was ever likely to come across.

Isaac Newton’s life outside science was no less successful; he was something of an amateur alchemist, and when he was appointed Master of the Royal Mint (a post he held for 30 years until his death; there is speculation his alchemical meddling may have resulted in mercury poisoning) he used those skills to great effect in assessing coinage, in an effort to fight Britain’s massive forgery problem. He was successful in this endeavour and later became the first man to put Britain onto the gold, rather than silver, standard, reflecting his knowledge of the superior chemical qualities of gold (see another previous post). He is still considered by many to be the greatest genius who ever lived, and I can see where those people are coming from.

However, the reason I find Newton especially interesting concerns his private life. Newton was a notoriously hard man to get along with; he never married, almost certainly died a virgin and is reported to have only laughed once in his life (when somebody asked him what was the point in studying Euclid; the joke is somewhat highbrow, I’ll admit). His was a lonely existence, largely friendless, and he lived, basically, for his work (he has been posthumously diagnosed with everything from bipolar disorder to Asperger’s syndrome). In an age when we are used to such charismatic scientists as Richard Feynman and Stephen Hawking, Newton’s cut-off, isolated existence with only his prodigious intellect for company seems especially alien. That the approach was effective is most certainly not in doubt; every one of his scientific discoveries would alone be enough to place him in science’s hall of fame, and to have made all of them puts him head and shoulders above all of his compatriots. In many ways, Newton’s story is one of the price of success. Was Isaac Newton a successful man? Undoubtedly, in almost every field he turned his hand to. Was he a happy man? We don’t know, but it would appear not. Given the choice between success and happiness, where would you fall?

There is an art, or rather, a knack, to flying…

The aerofoil is one of the greatest inventions mankind has come up with in the past two centuries; around the turn of the 19th century, aristocratic Yorkshireman (as well as inventor, philanthropist, engineer and generally quite cool dude) George Cayley identified the way bird wings generated lift merely by moving through the air (rather than just by flapping), and set about trying to replicate this lift force. To this end, he built a ‘whirling arm’ to test wings and measure the upwards lift force they generated, and found that a cambered wing shape (as in modern aerofoils) similar to that of birds was more efficient at generating lift than one with flat surfaces. This was enough for him to engineer the first manned, sustained glider flight, sending his coachman across Brompton Dale in 1853 in a homemade glider (the coachman reportedly handed in his notice upon landing with the immortal line “I was hired to drive, not fly”), but he still didn’t really have a proper understanding of how his wing worked.

Nowadays, lift is understood better by both science and the general population; but many people who think they know how a wing works don’t quite understand the full principle. There are two incomplete/incorrect theories that people commonly believe in; the ‘skipping stone’ theory and the ‘equal transit time’ theory.

The ‘equal transit time’ theory is popular because it sounds very sciency and realistic; because a wing is a cambered shape, the tip-tail distance following the wing shape is longer over the top of the wing than it is when following the bottom surface. Therefore, air travelling over the top of the wing has to travel further than the air going underneath. Now, since the aircraft is travelling at a constant speed, all the air must surely be travelling past the aircraft at the same rate; so, regardless of what path the air takes, it must take the same time to travel the same lateral distance. Since speed=distance/time, and air going over the top of the wing has to cover a greater distance, it will be travelling faster than the air going underneath the wing. Bernoulli’s principle tells us that if air travels faster, the air pressure is lower; this means the air on top of the wing is at a lower pressure than the air underneath it, and this difference in pressure generates an upwards force. This force is lift.
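
To see what this argument actually predicts, here is a minimal sketch that runs the equal-transit-time assumption through Bernoulli’s principle, using entirely made-up wing dimensions; as the next paragraph explains, the underlying assumption is wrong, so treat the output as the theory’s prediction rather than reality:

```python
# The 'equal transit time' argument, taken at face value with made-up numbers.
# Bernoulli (incompressible flow): p + 0.5*rho*v**2 = constant along a streamline,
# so the pressure difference is delta_p = 0.5*rho*(v_top**2 - v_bottom**2).
rho = 1.225            # air density at sea level, kg/m^3
v_freestream = 70.0    # aircraft speed, m/s (made up)
top_path = 1.05        # path length over the top surface, m (made-up camber)
bottom_path = 1.00     # path length under the bottom surface, m
wing_area = 20.0       # m^2 (made up)

# The flawed assumption: both parcels of air cross the wing in the same time,
# so their speeds scale with the lengths of their paths.
transit_time = bottom_path / v_freestream
v_top = top_path / transit_time
v_bottom = v_freestream

delta_p = 0.5 * rho * (v_top**2 - v_bottom**2)   # higher pressure underneath, Pa
print(f"Predicted lift: {delta_p * wing_area:.0f} N")
```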

The key flaw in this theory is the completely wrong assumption that the air over the top and bottom of the wing must take the same time to travel across it. If we analyse the airspeed at various points over a wing we find that air going over the top does, in fact, travel faster than air going underneath it (the reason for this comes from Euler’s fluid dynamics equations, which are essentially a simplified, friction-free form of the Navier-Stokes equations governing airflow. Please don’t ask me to explain them). However, the two airflows do not in fact meet up again at the trailing edge of the wing (the air over the top arrives first), so the theory doesn’t correctly calculate the amount of lift generated by the wing. This is compounded by the theory not explaining any of the lift generated from the bottom face of the wing, or why the angle the wing is set at (the angle of attack) affects the lift it generates, or how one is able to generate some lift from just a flat sheet set at an angle (or any other symmetrical wing profile), or how aircraft fly upside-down.

Then we have the (somewhat simpler) ‘skipping stone’ theory, which attempts to explain the lift generated from the bottom surface of the wing. Its basic postulate concerns the angle of attack; with an angled wing, the bottom face of the wing strikes some of the incoming air, causing air molecules to bounce off it. This is like the bottom of the wing being continually struck by lots of tiny ball bearings, much the same thing that happens when a skimming stone bounces off the surface of the water, and it generates a net force: lift. Not only that, but this theory claims to explain the lower pressure found on top of the wing; since air is blocked by the tilted wing, not so much gets to the area immediately above/behind it. This means there are fewer air molecules in a given space, giving rise to a lower pressure; another way of explaining the lift generated.

There isn’t much fundamentally wrong with this theory, but once again the mathematics don’t check out; it also does not accurately predict the amount of lift generated by a wing. It also fails to explain why a cambered wing set at a zero angle of attack is still able to generate lift; but actually it provides a surprisingly good model when we consider supersonic flight.

Lift can be explained as a combination of these two effects, but to do so is complex and unnecessary; we can find a far better explanation just by considering the shape the airflow makes when travelling over the wing. Air passing over an aerofoil tends to follow the shape of its surface (Euler again), meaning it deviates from its initially straight path to follow a curved trajectory. This curved motion means the direction of the airflow must be changing; and since velocity is a vector quantity, any change in the direction of the air’s movement represents a change in its overall velocity, regardless of any change in airspeed (which contributes separately). Any change in velocity means the air is being accelerated, and since Force = mass x acceleration this acceleration requires a net force; the wing pushes the air downwards, and by Newton’s third law the air pushes back up on the wing. That reaction is lift. This ‘turning’ theory not only describes lift generation on both the top and bottom wing surfaces, since air is turned upon meeting both, but also why changing the angle of attack affects lift; a steeper angle means the air has to turn more when following the wing’s shape, meaning more lift is generated. Go too steep, however, and the airflow breaks away from the wing and undergoes a process called flow separation… but I’m getting ahead of myself.
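
A rough back-of-the-envelope version of this ‘turning’ picture, with made-up numbers, just to show that deflecting a stream of air downwards at a modest angle really does produce forces of the right sort of size:

```python
# Lift viewed as a rate of change of momentum: the wing turns a stream of air
# downwards, and the reaction to that push is lift. All values are made up.
import math

rho = 1.225           # air density, kg/m^3
v = 70.0              # airspeed, m/s
span = 10.0           # wingspan, m
stream_height = 1.0   # effective thickness of the air layer the wing deflects, m (a guess)
deflection_deg = 5.0  # how far the flow is turned downwards

mass_flow = rho * v * span * stream_height             # kg of air processed per second
delta_v_down = v * math.sin(math.radians(deflection_deg))
lift = mass_flow * delta_v_down                        # force = rate of change of momentum
print(f"Rough lift estimate: {lift:.0f} N")
```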

This explanation works fine so long as our aircraft is travelling at less than the speed of sound. However, as we approach Mach 1, strange things start to happen, as we shall find out next time…

Shining Curtains

When the Vikings swept across Europe in the 8th and 9th centuries, they brought with them many stories; stories of their Gods, of the birth of the world, of Asgard, of Valhalla, of Jörmungandr the world-serpent, of Loki the trickster, Odin the father and of Ragnarok, the end of this world and the beginning of the next. However, the reason I mention the Vikings today is in reference to one particular set of stories they brought with them: of shining curtains of brilliant, heavenly fire, dancing across the northern sky as the Gods fought with one another. Such lights were not common in Europe, but they were certainly known, and throughout history they have provoked terror at the anger of the various Gods that was clearly being displayed across the heavens. Now, we know these shining curtains as the aurora borealis (Aurora was the Roman goddess of the dawn, whilst Boreas was the Greek name for the north wind; the aurora was only observed in the far north, and a similar feature seen near the south pole is known as the aurora australis. The name was acquired in 1621).

Nowadays, we know that the auroras are an electromagnetic effect, something that was demonstrated quite spectacularly in 1859. On the 28th of August and 2nd of September that year, spectacular auroras erupted across much of the northern hemisphere, reaching their peak at one o’clock in the morning EST, and as far south as Boston the light was enough to read by. However, the feature I am interested in here concerns the American Telegraph Line, stretching almost due north between Boston, Massachusetts, and Portland, Maine. Because of the great length and orientation of this line, the electromagnetic disturbance generated by the aurora was sufficient to induce a current in the telegraph, to the extent that operators at both ends of the line agreed to switch off their batteries (which were only interfering) and operate solely on aurora-power for around two hours. Aside from a gentle fluctuation of current, no problems were reported with this system.

We now know that the ultimate cause of the aurorae is our sun, and that two bursts of exceptional solar activity were responsible for the 1859 aurora. We all know the sun emits a great deal of energy from the nuclear fusion going on in its core, but it also emits a whole lot of other stuff, including a lot of ionised (charged) gas, or plasma. This outflow of charged particles forms what is known as the solar wind, flowing out into space in all directions; it is this solar wind that generates the tail on comets, and is why such a tail always points directly away from the sun. However, things get interesting when the solar wind hits a planet such as Earth, which has a magnetic field surrounding it. Earth’s magnetic field looks remarkably similar to that of a large, three-dimensional bar magnet (this picture demonstrates its shape well), and when a large mass of charged particles passes through this magnetic field it is subject to something known as the motor effect. As every GCSE physics student knows, it is this effect that allows us to generate motion from electricity, and the same thing happens here; the large mass of moving charge acts as a current, and this cuts across the earth’s magnetic field. This generates a force (this is basically what the motor effect does), and this force points sideways, pushing the moving charge sideways. However, as the charge moves, so does the direction of the ‘current’, and thus the direction of the force changes too; this process ends up causing the charge to spin around the magnetic field lines of the earth, spiralling along them as it goes. Following these field lines, the charge will end up spiralling towards the poles of the earth, at which point the field lines bend and start going into the earth itself. As the plasma follows these lines, therefore, it will come into contact with the Earth’s atmosphere, and one region in particular: the magnetosphere.
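
For anyone who wants to see the spiral rather than just take my word for it, here is a minimal sketch of the helical path a single electron would follow around a field line, using the standard gyro-radius and gyro-frequency formulas and some loosely chosen near-Earth values (an idealised uniform field, not a real model of the magnetosphere):

```python
# A charged particle in a uniform magnetic field follows a helix: circular motion
# around the field line plus a steady drift along it.
import math

q = 1.602e-19   # electron charge magnitude, C
m = 9.109e-31   # electron mass, kg
B = 5e-5        # field strength, T (roughly the field near Earth's surface)
v_perp = 4e5    # speed perpendicular to the field, m/s (loosely chosen)
v_para = 4e5    # speed along the field, m/s

r_g = m * v_perp / (q * B)   # gyro-radius: radius of the circular part of the motion
omega = q * B / m            # gyro-frequency: how fast it loops round, rad/s
period = 2 * math.pi / omega
print(f"gyro-radius ~ {r_g*100:.1f} cm, one loop every {period*1e6:.2f} microseconds")

# A few points along the helix, taking the field to point along z
for i in range(5):
    t = i * period / 4
    x = r_g * math.cos(omega * t)
    y = r_g * math.sin(omega * t)
    z = v_para * t
    print(f"t = {t*1e6:5.2f} us   x = {x*100:6.2f} cm   y = {y*100:6.2f} cm   z = {z:5.2f} m")
```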

The magnetosphere is the region surrounding the upper levels of our ionosphere in which the Earth’s magnetic field dominates. Here, the magnetic fields carried by the charged plasma and by the magnetosphere itself combine in a rather complicated process known as magnetic reconnection, the importance of which will be discussed later. Now, let us consider the contents of the plasma: all these charged particles, and in particular the high-energy electrons, are now bumping into atoms of air in the ionosphere. Each collision gives an atom energy, which it deals with by having one of its electrons jump up an energy level and enter an excited state. After a short while, the atom ‘cools down’ as the electron drops back down again, releasing a packet of electromagnetic energy as it does so. We observe this release of EM radiation as visible light, and hey presto! we can see the aurorae. What colour the aurora ends up being depends on what atoms we are interacting with; oxygen is more common higher up and generates green and red aurorae depending on height, so these are the most common colours. If the solar wind is able to get further down in the atmosphere, it can interact with nitrogen and produce blue and purple aurorae.
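
As a small worked example, the energy of each emitted photon follows directly from its wavelength via E = hc/λ; the figures below use the commonly quoted wavelengths for the oxygen lines (green at roughly 558 nm, red at roughly 630 nm), so take them as ballpark values:

```python
# Photon energy E = h*c / wavelength for the two familiar auroral oxygen colours.
h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

for name, wavelength_nm in (("oxygen green", 557.7), ("oxygen red", 630.0)):
    E = h * c / (wavelength_nm * 1e-9)
    print(f"{name}: about {E/eV:.2f} eV per photon")
```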

The shape of the aurorae can be put down to the whole business of spiralling around field lines; as the field lines bend in towards the earth’s poles, the charged particles end up describing roughly circular paths around the north and south poles. However, plasma does not conduct electricity very well across magnetic field lines, which is exactly what such a circular path requires, so we would not expect the aurora to be very bright under normal circumstances. The reason this is not the case, and that aurorae are as visible and beautiful as they are, can be put down to the process of magnetic reconnection, which makes the plasma more conductive and allows these charged particles to flow more easily around in a circular path. This circular path around the poles causes the aurora to follow approximately east-west lines into the far distance, and thus we get the effect of ‘curtains’ of light following (roughly) this east-west pattern. The flickery, wavy nature of these aurorae is, I presume, due to fluctuations in the solar wind and/or actual winds in the upper atmosphere. The end result? Possibly the most beautiful show Earth has to offer us. I love science.

“Lies, damn lies, and statistics”

Ours is the age of statistics; of number-crunching, of quantifying, of defining everything by what it means in terms of percentages and comparisons. Statistics crop up in every walk of life, to some extent or other, in fields as widespread as advertising and sport. Many people’s livelihoods now depend on their ability to crunch the numbers, to come up with data and patterns, and much of our society’s increasing ability to do awesome things can be traced back to someone making the numbers dance.

In fact, most of what we think of as ‘statistics’ are not really statistics at all, but merely numbers; to a pedantic mathematician, a statistic is defined as a mathematical function of a sample of data, not of the whole ‘population’ we are considering. We use statistics when it would be impractical to measure the whole population, usually because it’s too large, and when we instead are trying to mathematically model the whole population based on a small sample of it. Thus, next to no sporting ‘statistics’ are in fact true statistics, as they tend to cover the whole game; if I heard during a rugby match that “Leicester had 59% of the possession”, that is nothing more than a number; or, to use the mathematical term, a parameter. A statistic would be to say “From our sample [of one game] we can conclude that Leicester control an average of 59% of the possession when they play rugby”, but this would be a very shaky conclusion, since we can’t extrapolate Leicester’s normal behaviour from a single match. It is for this reason that complex mathematical formulae are used to determine the uncertainty of a conclusion drawn from a statistical test, and these are based chiefly on the size and variability of the sample we are testing relative to the population we are trying to model. These uncertainty levels are often brushed under the carpet when pseudoscientists try to make dramatic, sweeping claims about something, but they are possibly the most important feature of modern statistics.
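
To make the sample-size point concrete, here is a minimal sketch with made-up possession figures, showing how the uncertainty on an estimate of a team’s ‘true’ average possession shrinks as you watch more matches (using the usual mean plus or minus 1.96 standard errors as a rough 95% confidence interval):

```python
# Made-up illustration of sample vs population: estimating average possession
# from n matches, with a rough 95% confidence interval.
import random, statistics

random.seed(1)
true_mean, spread = 55.0, 8.0   # the 'population' behaviour we pretend to know

for n in (1, 5, 20, 80):
    sample = [random.gauss(true_mean, spread) for _ in range(n)]
    mean = statistics.mean(sample)
    if n > 1:
        standard_error = statistics.stdev(sample) / n**0.5
        print(f"n = {n:3d} matches: estimate {mean:.1f}% +/- {1.96 * standard_error:.1f}")
    else:
        print(f"n = {n:3d} match:   estimate {mean:.1f}% +/- (one match can't even tell us the spread)")
```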

Another weapon for the poor statistician can be the mis-application of the idea of correlation. Correlation is basically what it means when you take two variables, plot them against one another on a graph, and find you get a nice neat line joining them, suggesting that the two are in some way related. Correlation tends to get scientists very excited, since if two things are linked then it suggests that you can make one thing happen by doing another, an often advantageous concept; this is known as a causal relationship. However, whilst correlation and causation often do go hand in hand, the first lesson every statistician learns is this: correlation DOES NOT imply causation.

Imagine, for instance, you have a cold. You feel like crap, your head is spinning, you’re dehydrated and you can’t breathe through your nose. If we were, during the period before, during and after your cold, to plot a graph of your relative ability to breathe through the nose against the severity of your headache (yeah, not very scientific I know), these two variables would correlate, since they happen at the same time due to the cold. However, if I were to decide that this correlation implies causation, then I would draw the conclusion that all I need to do to give you a terrible headache is to plug your nose with tissue paper so you can’t breathe through it. In this case, I have ignored the possibility (and, as it transpires, the eventuality) of there being a third variable (the cold virus) that causes both of the other two variables, and this is very hard to investigate without poking our head out of the numbers and looking at the real world. There are statistical techniques that enable us to do this, but they are for another time.
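
Here is the same idea as a minimal sketch with invented data: the ‘cold’ drives both the blocked nose and the headache, so the two correlate almost perfectly even though neither causes the other:

```python
# A made-up version of the cold example: a hidden third variable (cold severity)
# drives both symptoms, producing a strong correlation with no causation between them.
import random

random.seed(0)
cold_severity = [random.uniform(0, 10) for _ in range(200)]
blocked_nose = [c + random.gauss(0, 1) for c in cold_severity]   # driven by the cold
headache     = [c + random.gauss(0, 1) for c in cold_severity]   # also driven by the cold

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx)**2 for x in xs)
    var_y = sum((y - my)**2 for y in ys)
    return cov / (var_x * var_y)**0.5

print(f"correlation(blocked nose, headache) = {correlation(blocked_nose, headache):.2f}")
# ...and yet stuffing tissue up someone's nose will not give them a headache.
```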

Whilst this example was more childish than anything, mis-extrapolation of a correlation can have deadly consequences. One example, explored in Ben Goldacre’s Bad Science, concerns beta-carotene, an antioxidant found in carrots. In 1981, an epidemiologist called Richard Peto published a meta-analysis (post for another time) of a series of scientific studies that suggested people with high beta-carotene levels showed a reduced risk of cancer. At the time, antioxidants were considered the wonder-substance of the nutrition world, and everyone got on board with the idea that beta-carotene was awesome stuff. However, all of the studies examined were observational ones: taking a lot of different people, seeing what their beta-carotene levels were and then examining whether or not they had cancer or developed it in later life. None of the studies actually gave their subjects beta-carotene and then saw if that affected their cancer risk, and this prompted the editor of Nature (the scientific journal in which Peto’s paper was published) to include a footnote reading:

Unwary readers (if such there are) should not take the accompanying article as a sign that the consumption of large quantities of carrots (or other dietary sources of beta-carotene) is necessarily protective against cancer.

The editor’s footnote quickly proved a well-judged one; a study conducted in Finland some time afterwards actually gave participants at high risk of lung cancer beta-carotene, and found that their risk both of getting the cancer and of dying was higher than for the ‘placebo’ control group. A later study, named CARET (Carotene And Retinol Efficacy Trial), also tested groups at a high risk of lung cancer, giving half of them a mixture of beta-carotene and vitamin A and the other half placebos. The idea was to run the trial for six years and see how many illnesses/deaths each group ended up with; but after preliminary data found that those taking the antioxidant tablets were 46% more likely to die from lung cancer, they decided it would be unethical to continue the trial and it was terminated early. Had the idea in the Nature article been allowed to get out of hand before this research was done, then it could have put thousands of people who hadn’t read the article properly at risk; and all because of the dangers of assuming correlation=causation.

This wasn’t really the gentle ramble through statistics I originally intended it to be, but there you go; stats. Next time, something a little less random. Maybe.

Components of components of components…

By the end of my last post, science had reached the level of GCSE physics/chemistry; the world is made of atoms, atoms consist of electrons orbiting a nucleus, and a nucleus consists of a mixture of positively charged protons and neutrally charged neutrons. Some thought that this was the deepest level things could go; that everything was made simply of these three things and that they were the fundamental particles of the universe. However, others pointed out the enormous size difference between an electron and proton, suggesting that the proton and neutron were not as fundamental as the electron, and that we could look even deeper.

In any case, by this point our model of the inside of a nucleus was incomplete anyway; in 1932 James Chadwick had discovered (and named) the neutron, first theorised about by Ernest Rutherford to act as a ‘glue’ preventing the protons of a nucleus from repelling one another and causing the whole thing to break into pieces. However, nobody actually had any idea exactly how this worked, so in 1934 a concept known as the nuclear force was suggested. This theory, proposed by Hideki Yukawa, held that nucleons (then still considered fundamental particles) emitted particles he called mesons; smaller than nucleons, they acted as carriers of the nuclear force. The physics behind this is almost unintelligible to anyone who isn’t a career academic (as I am not), but this is because there is no equivalent to the nuclear force that we encounter during the day-to-day. We find it very easy to understand electromagnetism because we have all seen magnets attracting and repelling one another and see the effects of electricity every day, but the nuclear force was something more fundamental; a side effect of the constant exchange of mesons between nucleons*. The meson was finally found (proving Yukawa’s theory) in 1947, and Yukawa won the 1949 Nobel Prize for it. Yukawa’s mesons are now understood to be composite particles in their own right, and the nuclear force they carry between nucleons to be a residue of a deeper force, the strong force, whose carriers are particles called gluons; the name gluon hints at this purpose, coming from the word ‘glue’.

*This, I am told, becomes a lot easier to understand once electromagnetism has been studied from the point of view of two particles exchanging photons, but I’m getting out of my depth here; time to move on.

At this point, the physics world decided to take stock; the list of all the different subatomic particles that had been discovered became known as ‘the particle zoo’, but our understanding of them was still patchy. We knew nothing of what the various nucleons and mesons consisted of, how they were joined together, or what allowed the strong nuclear force to even exist; where did mesons come from? How could these particles, carrying a sizeable fraction of a proton’s mass, be emitted from one without tearing the thing to pieces?

Nobody really had the answers to these, but when investigating them people began to discover other new particles, of a similar size and mass to the nucleons. Most of these particles were unstable and extremely short-lived, decaying into the undetectable in trillionths of trillionths of a second, but whilst they did exist they could be detected using incredibly sophisticated machinery, and their existence, whilst not ostensibly meaning anything, was a tantalising clue for physicists. This family of nucleon-like particles was later called baryons, and in 1961 American physicist Murray Gell-Mann organised the various baryons and mesons that had been discovered into neat symmetric groups, a scheme that became known as the eightfold way. The lightest mesons formed one group of eight, as did the baryons with a ‘spin’ (a quantum property of subatomic particles that I won’t even try to explain) of 1/2; the baryons with a spin of 3/2 (or one and a half) formed a larger group of ten, except that only nine of them had been discovered. By extrapolating the pattern of this group, Gell-Mann was able to predict the existence of a tenth ‘spin 3/2’ baryon, which he called the omega baryon. This particle, with properties matching almost exactly those he predicted, was discovered in 1964 by a group experimenting with a particle accelerator (a wonderful device that takes two very small things and throws them at one another in the hope that they will collide and smash to pieces; particle physics is a surprisingly crude business, and few other methods have ever been devised for ‘looking inside’ these weird and wonderful particles), and Gell-Mann took the Nobel prize five years later.

But, before any of this, the principle of the eightfold way had been extrapolated a stage further. Gell-Mann and George Zweig independently proposed a theory concerning entirely hypothetical particles known as quarks; they imagined three ‘flavours’ of quark (called, completely arbitrarily, the up, down and strange quarks), each with its own properties of spin, electrical charge and such. They theorised that each of the properties of the different hadrons (as mesons and baryons are collectively known) could be explained by the fact that each was made up of a different combination of these quarks, and that the overall properties of each particle were due, basically, to the properties of their constituent quarks added together. At the time, this was considered somewhat airy-fairy; Zweig and Gell-Mann had absolutely no physical evidence, and their theory was essentially little more than a mathematical construct to explain the properties of the different particles people had discovered. Within a year, supporters of the theory Sheldon Lee Glashow and James Bjorken suggested that a fourth quark, which they called the ‘charm’ quark, should be added to the theory, in order to better explain radioactivity (ask me about the weak nuclear force, go on, I dare you). It was also later realised that the charm quark might explain the odd behaviour of the kaon, a particle discovered in cosmic rays 15 years earlier that nobody properly understood. Support for the quark theory grew; and then, in 1968, a team studying deep inelastic scattering (another wonderfully blunt technique that involves firing an electron at a nucleus and studying how it bounces off in minute detail) revealed a proton to consist of three point-like objects, rather than being the solid, fundamental blob of matter it had previously been thought of as. Three point-like objects matched exactly Zweig and Gell-Mann’s prediction for the existence of quarks; they had finally moved from mathematical theory to physical reality.

(The quarks discovered were of the up and down flavours; the charm quark wouldn’t be discovered until 1974, by which time two more quarks, the top and bottom, had been predicted by an incredibly obscure theory concerning the relationship between antimatter and normal matter. No, I’m not going to explain how that works. For the record, the bottom quark was discovered in 1977 and the top quark in 1995.)

Nowadays, the six quarks form an integral part of the standard model: physics’ best attempt to explain how everything in the world works, or at least the fundamental interactions. Many consider them, along with the six leptons and four bosons*, to be the fundamental particles that everything is made of; these particles exist, are fundamental, and that’s an end to it. But the Standard Model is far from complete; it isn’t readily compatible with general relativity, doesn’t explain gravity or many observed effects in cosmology blamed on ‘dark matter’ or ‘dark energy’, and it gives rise to a few paradoxical situations that we aren’t sure how to explain. Some say it just isn’t finished yet, and that we just need to think of another theory or two and discover another boson. Others say that we need to look deeper once again and find out what quarks themselves contain…

*A boson is anything, like a gluon, that ‘carries’ a fundamental force; the recently discovered Higgs boson is not usually counted alongside the others in that list since it exists solely to affect the behaviour of the W and Z bosons, giving them mass.

The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was ‘what is our world made of?’. People reasoned that everything had to be made of something- all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some ‘stuff’ or other. Some reasoned that everything was made up of the most common things present in our earth; the classical ‘elements’ of earth, air, fire and water, but throughout the latter stages of the last millennium the burgeoning science of chemistry began to debunk this idea. People sought a new theory to answer what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions; why chemicals that reacted together did so in fixed ratios, for example. For a solution to this problem, they returned to an idea almost as old as science itself; that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these ‘joinings’ was what was responsible for chemical reactions and their behaviour. The earliest scientists who theorised the existence of these things called them corpuscles; nowadays we call them atoms.

By the turn of the twentieth century, thanks to two hundred years of chemists using atoms to conveniently explain their observations, it was considered common knowledge among the scientific community that an atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, with a small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance this out; Thomson postulated that the negative electrons ‘floated’ within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford sought to test the plum pudding model; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that some were deflected at wild angles but that most passed straight through. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when the alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but most of the time it would miss this positive charge completely; most of the atom was empty space. So, he concluded, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.

This made sense in theory, but the maths didn’t check out; it predicted that the orbiting electrons should constantly radiate away their energy and spiral into the nucleus, meaning every atom in creation would promptly collapse in on itself. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to be complete; these energy levels (or ‘shells’) were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr’s explanation drew heavily from Max Planck’s recent work on quantum theory, which modelled light as coming in packets with discrete energies, and this suggested that electrons were also quantum particles; this ran contrary to people’s previous understanding of them, since they had been presumed to be solid ‘blobs’ of matter. This was but one step along the principle that defines quantum theory; nothing is actually real, everything is quantum, so don’t even try to imagine how it all works.

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance’s mass, given how little the electrons weighed?

Subsequent experiments have revealed the atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly one millimetre in diameter, but if an atom were the size of St. Paul’s Cathedral then its nucleus would be the size of a full stop. Imagining the sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus’ structure; it took Ernest Rutherford (the guy who had disproved the plum pudding model) to take the first step along this road when, in 1918, he confirmed that the nucleus of a hydrogen atom comprised just one component (or ‘nucleon’, as we collectively call them today). Since this component had a positive charge, to cancel out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that all the other positive charges in larger atomic nuclei were caused by more protons stuck together in the nucleus. However, having multiple positive charges all in one place would normally cause them to repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of glue. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same size, collectively constitute around 99.95% of any given atom’s mass, and between them make up every atomic nucleus (ordinary hydrogen, with its lone proton, being the only one without a neutron). However, even these weren’t quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; and I’ll pick up that story next time.

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you dear readers would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, as well as a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (ie one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most such reactions we are familiar with tend to be oxidation reactions; oxygen from the air bonding chemically with the substance in question (although there are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, become surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode). The emission of heat causes a relatively gentle warming effect for the immediate area, but close to the site of the reaction itself a very large amount of heat is emitted in a small area. This excites the molecules of air close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so in the form of heat and light (among other things). These photons cause the air to glow brightly, creating the visible flame we can see; this large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame has a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning. Most often the glowing flame results in the characteristic reddy/orange colour of fire, but some reactions, such as the potassium one mentioned, emit radiation of other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), causing the flames to be of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly, until it gets far enough away from the source of heat to cool down and stop glowing; this process happens all the time with hundreds of packets of hot air, causing the flame to flicker back and forth.

However, we must remember that fires do not just give out heat, but must take some in too. This is to do with the way the chemical reaction that generates the heat works: the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to burn up to that activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit, or around 233 degrees centigrade (which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is a little higher at around 300 degrees centigrade, both comfortably below the temperature of a spark or flame. However, neither fuel will ignite if it is wet: water cannot be burnt, and the heat goes into boiling it off rather than raising the fuel to its ignition temperature, meaning that it often takes a while to dry wood out sufficiently for it to catch, and that big, solid blocks of wood take quite a bit of energy to heat up.
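
For those who want the maths, the activation energy idea is normally written down as the Arrhenius equation, which says a reaction’s rate scales with exp(-Ea/RT). A toy calculation, using an activation energy I’ve picked purely for illustration (not a measured figure for wood or paper), shows why the neighbourhood of a flame makes such a colossal difference:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
E_A = 1.5e5    # activation energy in J/mol; an illustrative figure only,
               # not a measured value for wood or paper

def arrhenius_factor(temp_kelvin: float) -> float:
    """The exp(-Ea/RT) part of the Arrhenius equation; the constant
    pre-exponential factor cancels out when we take ratios."""
    return math.exp(-E_A / (R * temp_kelvin))

room_temp, near_flame = 293.0, 800.0   # ~20 C versus the region around a small flame
speed_up = arrhenius_factor(near_flame) / arrhenius_factor(room_temp)
print(f"Reaction rate increases by a factor of about {speed_up:.0e}")
# -> around 1e17: at room temperature the reaction is effectively frozen,
#    which is why wood sits happily in a shed until a flame or ember
#    pushes it past its ignition temperature.
```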

From all of this information we can extrapolate the first rule that everybody learns about firelighting: that in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff it reacts with, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the other ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since doing so will cut off its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Depriving a fire of heat is also the main principle behind our classical methods of putting one out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (this is also why a fire should be blown on very gently if you are trying to get it to catch, and why doing so will cause the flames, which are just glowing hot air remember, to disappear whilst the embers glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day when we might understand all of it. Yes, it would be a long way off, yes, there would be stumbling blocks, but maybe, just maybe, so long as we didn’t discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in the year 1814 a brilliant mathematician and philosopher called Pierre-Simon Laplace, responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold new article that took this concept to extremes. Laplace lived in the age of ‘the clockwork universe’, a theory that held Newton’s laws of motion to be sacrosanct truths and claimed that these laws of physics caused the universe to just keep on ticking over, just like the mechanical innards of a clock; and just like a clock, the universe was predictable. Just as the hour after five o’clock will always be six, presuming a perfect clock, so every future state of the world could be predicted from its present one. Laplace’s arguments took this theory to its logical conclusion: if some vast intellect were able to know the precise positions and motions of every particle in the universe, and all the forces acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
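
To see what Laplace was driving at in miniature, here is a toy ‘clockwork universe’ of my own devising: a single ball, Newton’s laws and constant gravity. Feed it a complete snapshot of the ball’s state and every later state follows mechanically; run it a thousand times and you get exactly the same answer every time.

```python
# A toy clockwork universe: one ball falling under constant gravity,
# stepped forward using nothing but Newton's second law.
G = 9.81    # gravitational acceleration, m/s^2
DT = 0.01   # size of each time step, seconds

def predict(height_m: float, velocity_m_s: float, seconds: float) -> tuple[float, float]:
    """Given the complete present state, march it forward deterministically."""
    for _ in range(int(seconds / DT)):
        velocity_m_s -= G * DT         # gravity changes the velocity...
        height_m += velocity_m_s * DT  # ...and the velocity changes the position
    return height_m, velocity_m_s

# The same snapshot always yields the same future; Laplace's 'vast intellect'
# simply does this for every particle in the universe at once.
print(predict(height_m=20.0, velocity_m_s=0.0, seconds=1.5))
```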

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (although these issues are still considered somewhat up for debate to this day). Among the scientific community, however, Laplace’s ideas conjured up a flurry of debate; some entirely believed in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any ‘vast intellect’ to fully comprehend so much as a heap of sand rendered Laplace’s arguments completely pointless. Other, far later observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton’s laws of motion (which break down at very high velocities, where relativity has to be taken into account); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the whole universe was a bit much, but being able to predict everything about, say, a few atoms to an infinitely precise degree seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, these scientists’ work now had one overarching goal: to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, and the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and central to the second law of thermodynamics (which remains, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of uncertainty or ‘randomness’ inherent in a system; or, for want of a clearer explanation, consider a sandy beach. All of the grains of sand in the beach can be arranged in a vast number of different ways that all form the shape of a disorganised heap, but if we build a giant, detailed sandcastle instead there are far fewer arrangements of the grains that will result in the same structure. Therefore, if the grains are left to arrange themselves at random, it is far, far more likely that we will end up with a disorganised ‘beach’ structure than with a castle forming of its own accord (which is why sandcastles don’t spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy situations, played out on an atomic scale, means that the universe tends to increase its overall level of entropy; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must put in energy, which increases the entropy of the surroundings and results in a net entropy increase overall. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
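
For anyone who prefers their entropy with numbers attached, the counting argument above is formalised in Boltzmann’s formula S = k ln W, where W is the number of arrangements. The toy model below, a drastic simplification of a beach that is entirely my own, shows just how lopsided that count becomes even for a hundred grains:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(arrangements: int) -> float:
    """Boltzmann's S = k * ln(W): the more ways to arrange, the higher the entropy."""
    return K_B * math.log(arrangements)

# Toy model: 100 grains of sand, each either 'in the castle' or 'on the beach'.
# The castle counts as exactly one arrangement (every grain in its place);
# the 'beach' counts every way of scattering half the grains.
n_grains = 100
castle_arrangements = 1
beach_arrangements = math.comb(n_grains, n_grains // 2)

print(f"beach arrangements: {beach_arrangements:.1e} (castle: 1)")
print(f"entropy difference: {boltzmann_entropy(beach_arrangements):.1e} J/K")
# -> about 1e29 arrangements versus 1: left to chance, you always get the beach.
```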

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point: a starting point of order from which things get steadily more chaotic, for order cannot keep increasing indefinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under ‘our’ laws of physics, the ones we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of a God to start everything off; it would be some years before Edwin Hubble’s observations of an expanding universe paved the way for the Big Bang theory, and even now we understand next to nothing about the moment of our creation.

However, this argument wasn’t exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a theoretical physicist, often described as the person who invented quantum mechanics (work which won him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level: that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at the same time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something’s position (apologies in advance to all the physics students I end up annoying). If we want to know where something is, then the tried and tested method is to look at the thing; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it messes around with the particle’s energy so much that it changes its velocity and momentum, and we cannot predict how. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it’s simply not possible.
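
The principle does have a precise mathematical form, usually written as Δx·Δp ≥ ħ/2, and plugging in the standard constants shows how dramatic it becomes at atomic scales (the 0.1 nanometre confinement below is my own illustrative choice):

```python
HBAR = 1.054571817e-34     # reduced Planck constant, J*s
ELECTRON_MASS = 9.109e-31  # kg

# Suppose we pin an electron down to within about an atom's width.
delta_x = 1e-10                      # position uncertainty: 0.1 nanometres
delta_p = HBAR / (2 * delta_x)       # minimum momentum uncertainty from dx*dp >= hbar/2
delta_v = delta_p / ELECTRON_MASS    # the corresponding spread in velocity

print(f"Minimum velocity uncertainty: about {delta_v:,.0f} m/s")
# -> roughly 580,000 m/s: the better we know where the electron is, the
#    less we can say about how fast it is going.
```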

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn’t know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, spending much of his later career trying to show that quantum mechanics must be incomplete and to back up his famous statement that ‘God does not play dice with the universe’. But eventually the scientific world came to accept the truth: determinism was dead. The universe would never seem so sure and predictable again.