Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, physics and the study of electromagnetism were gathering pace, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day when we might understand all of it. Yes, it would be a long way off; yes, there would be stumbling blocks; but maybe, just maybe, so long as we don't discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in 1814 Pierre-Simon Laplace, a brilliant mathematician and philosopher responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold new essay that took this concept to extremes. Laplace lived in the age of 'the clockwork universe', a theory that held Newton's laws of motion to be sacrosanct truths and claimed that these laws of physics caused the universe to just keep on ticking over, like the mechanical innards of a clock; and just like a clock, the universe was predictable. Just as one hour after five o'clock will always come six o'clock, presuming a perfect clock, so every event in the world can be predicted from the events that precede it. Laplace's arguments took such theory to its logical conclusion: if some vast intellect were able to know the precise position of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
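
Laplace's claim is easy to see in miniature. Here is a minimal sketch in Python (my illustration, not anything from 1814) of a tiny 'clockwork' system: a single particle falling under gravity. Feed Newton's laws the exact starting conditions and they return one, and only one, future; run it twice with identical inputs and the outputs match to the last digit, which is determinism in a nutshell.

```python
# A toy 'clockwork universe': one particle falling under constant gravity.
# Given exact initial conditions, the entire future is fully determined.

def simulate(position, velocity, g=-9.81, dt=0.01, steps=100):
    """Step Newton's laws forward using simple Euler integration."""
    for _ in range(steps):
        velocity += g * dt           # the force fixes how velocity changes
        position += velocity * dt    # the velocity fixes how position changes
    return position, velocity

# Identical initial state -> identical future, every single run.
print(simulate(100.0, 0.0))  # one 'universe'
print(simulate(100.0, 0.0))  # an exact copy; prints precisely the same numbers
```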

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (although these issues are still considered somewhat up for debate to this day). Among the scientific community, however, Laplace's ideas conjured up a flurry of debate; some believed entirely in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any 'vast intellect' to fully comprehend so much as a heap of sand made Laplace's arguments practically pointless. Other, far later, observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton's laws of motion (which break down at very high velocities, where relativity must be taken into account); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms, say, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, to these scientists their work now had one overarching goal: to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, and the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (which is, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of disorder or 'randomness' inherent in a system; or, for want of a clearer explanation, consider a sandy beach. The grains of sand in the beach can be arranged in a vast number of different ways that all form the shape of a disorganised heap, but far fewer arrangements of those same grains will result in a giant, detailed sandcastle. Therefore, if the sand were left to arrange itself, it is far, far more likely that we would end up with a disorganised 'beach' structure than with a castle forming of its own accord (which is why sandcastles don't spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy situations, on an atomic scale, means that the universe tends to increase the overall level of entropy within it; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must input energy, which increases the entropy of the surroundings and results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
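
This counting of arrangements is exactly what the physicist's definition of entropy captures. Boltzmann's famous formula (not quoted above, but it is the standard statement of the idea) relates the entropy S of a state to the number of microscopic arrangements Ω that produce it:

$$ S = k_B \ln \Omega $$

Here k_B is Boltzmann's constant. The beach, with astronomically more possible arrangements than the sandcastle, has by far the higher entropy; the logarithm merely keeps the numbers manageable.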

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point: a starting point of order from which things get steadily more chaotic, for order cannot increase indefinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under 'our' laws of physics that we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of the existence of God to start everything off; it would be some years before Georges Lemaître's and Edwin Hubble's work on the expanding universe would lead to the Big Bang theory, and even now we understand next to nothing about the moment of our creation.

However, this argument wasn’t exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenburg in 1927.

Heisenberg was a theoretical physicist, often described as one of the inventors of quantum mechanics (work which won him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level: that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at the same time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something's position (apologies in advance to all the physics students I end up annoying). If we want to know where something is, then the tried and tested method is to look at it; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle's energy that it changes its velocity and momentum in ways we cannot predict. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it's simply not possible.
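
For the record, the relationship Heisenberg derived can be written down in a single line. If Δx is the uncertainty in a particle's position and Δp the uncertainty in its momentum, the standard modern statement is:

$$ \Delta x \, \Delta p \geq \frac{\hbar}{2} $$

where ħ is the reduced Planck constant. Squeeze Δx towards zero and Δp must grow without bound; this is the precise version of the photon-bouncing story above.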

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn't know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, dedicating much of his later work to attempting to disprove quantum mechanics and back up his famous statement that 'God does not play dice with the universe'. But eventually the scientific world came to accept the truth: that determinism was dead. The universe would never seem so sure and predictable again.

The End of The World

As everyone who understands the concept of buying a new calendar when the old one runs out should be aware, the world is emphatically not due to end on December 21st this year thanks to a Mayan 'prophecy' that basically amounts to one guy's arm getting really tired and deciding 'sod carving the next year in, it's ages off anyway'. Most of you should also be aware of the kind of cosmological theories that talk about the end of the world/the sun's expansion/the universe committing suicide, which are always hastily suffixed with an 'in 200 billion years or so', making the point that there's really no need to worry and that the world is probably going to be fine for the foreseeable future; or at least, that by the time anything serious does happen we're probably not going to be in a position to complain.

However, when thinking about this, we come across a rather interesting, if slightly macabre, gap; an area nobody really wants to talk about thanks to a mixture of lack of certainty and simple fear. At some point in the future, we as a race and a culture will surely not be here. Currently, we are. Therefore, between those two points, the human race is going to die.

Now, from a purely biological perspective there is nothing especially surprising or worrying about this; species die out all the time (in fact we humans are getting so good at inadvertent mass slaughter that between 2 and 20 species are going extinct every day), and others evolve and adapt to slowly change the face of the earth. We humans, with our few hundred thousand years of existence, and especially our mere two or three thousand years of organised mass society, are the merest blip in the earth's long and varied history. But we are also unique in more ways than one; the first race to, to a very great extent, remove ourselves from the endless fight for survival and start taking control of events once so far beyond our imagination as to be put down to the work of gods. If the human race is to die, as it surely will one day, we are simply getting too smart and too good at thinking about these things for it to be the kind of gradual decline and changing of a delicate ecosystem that characterises most 'natural' extinctions. If we are to go down, it's going to be big and it's going to be VERY messy.

In short, with the world staying as it is and as it has been for the past few millennia, we're not going to be dying out any time soon. This is also not biologically unusual, for when a species goes extinct it is usually the result of either direct competition with another species, which out-competes them and causes them to starve, or a change in environmental conditions, meaning they are no longer well-adapted for the environment they find themselves in. But once again, human beings appear to be rather above all this; having carved out what isn't so much an ecological niche as a categorical redefining of the way the world works, there is no other creature that could be considered our biological competitor, and the thing that has always set humans apart ecologically is our ability to adapt. From the ice ages, where we hunted mammoth, to the African deserts, where the San people still live in isolation, there are very few things the earth can throw at us that are beyond the wit of humanity to live through. Especially a human race that is beginning to look upon terraforming and cultured food as a pretty neat idea.

So, if our environment is going to change sufficiently for us to begin dying out, things are going to have to change not only in the extreme, but very quickly as well (well, quickly in geological terms at least). This required pace of change limits the number of potential extinction options to a very small, select list. Most of these you could make a disaster film out of (and in most cases someone has), but one that is slightly less dramatic (although they still did end up making a film about it) is global warming.

Some people are adamant that global warming is either a) a myth, b) nothing to do with human activity or c) both (which kind of seems a contradiction in terms, but hey). These people can be safely categorised under 'don't know what they're *%^&ing talking about', as any scientific explanation that covers all the available facts cannot fail to reach the conclusion that global warming not only exists, but that it's our fault. Not only that, but it could very well genuinely screw up the world. We are used to the idea that, in the long run, somebody will sort it out, we'll come up with a solution and it'll all be OK, but one day we might have to come to terms with a state of affairs in which the combined efforts of our entire race are simply not enough. It's like the way cancer always happens to someone else, until one morning you find a lump. One day, we might fail to save ourselves.

The extent to which global warming looks set to screw around with our climate is currently unclear, but some potential scenarios are extreme to say the least. Nothing is ever quite going to match up to the picture painted in The Day After Tomorrow (for the record, the Gulf Stream would take around a decade to shut down if/when it does so), but some scenarios are pretty horrific. Some predict the flooding of vast swathes of the earth's surface, including most of our biggest cities, whilst others predict mass desertification, a collapse of many of the ecosystems we rely on, or polar conditions swarming across Northern Europe. The prospect of the human population being decimated is a very real one.

But destroyed? Totally? After thousands of years of human society slowly getting the better of, and dominating, all that surrounds it? I don't know about you, but I find that quite unlikely; at the very least, it seems to me like it's going to take more than just one wave of climate change to finish us off completely. So, if climate change is unlikely to kill us, then what else is left?

Well, in rather a nice, circular fashion, cosmology may have the answer, even if we don't somehow manage to pull off a miracle and hang around long enough for the sun's expansion to get us. We may one day be able to blast asteroids out of existence. We might be able to stop the supervolcano that is Yellowstone National Park from blowing itself to smithereens when it erupts, as it is due to in the not-too-distant future (we also might fail at both of those things, and let either wipe us out, but ho hum). But could we ever prevent a nearby star emitting a gamma ray burst at us, of a power sufficient to have caused the third largest extinction in earth's history the last time it happened? Well, we'll just have to wait and see…

NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist; they are simply a construct designed by our brains to help us get a handle on the awe-inspiring concept of the relative amounts of things. However, this hasn't prevented this 'neat little tool' spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I'm British, live with the spelling) first tentatively applied itself to shapes and geometry, before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, and then just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as the closest thing STEM (science, technology, engineering and mathematics) has to art, in that it has no discernible purpose other than the sake of its own existence.

This is not to say that mathematics is not a useful field, far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics and statistics to the weather (albeit with varying degrees of accuracy), and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something; a special case, when we multiply it by itself, is 'squaring' it (since a number 'squared' is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, a negative number squared makes a positive one (as does a positive number squared), and hence no number squared makes a negative; among the ordinary 'real' numbers there is no such thing as the square root of a negative number, such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about 'real' numbers, but maths decided that it didn't want to be unable to square root a negative number, so it had to find a way round the problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting onto the number line, and which can in no way exist in real life.
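
If you want to watch i behave, most programming languages will happily play along; Python, for one, writes i as 1j. A quick sketch (my illustration, nothing to do with the original story):

```python
import cmath  # the complex-number counterpart of the standard math module

i = 1j                 # Python's notation for the imaginary unit
print(i * i)           # (-1+0j): i squared really is -1
print(cmath.sqrt(-1))  # 1j: here, the square root of -1 does exist
z = 3 + 4j             # a general complex number: real part + imaginary part
print(z.real, z.imag)  # 3.0 4.0
print(abs(z))          # 5.0: its distance from zero on the complex plane
```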

Admittedly, i has turned out to be useful. When considering electromagnetic waves, physicists often use the real and imaginary parts of a complex quantity to keep track of the electric and magnetic components, but its main purpose was only ever to satisfy the obsessive streak of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician's arsenal, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world: Fermat's Last Theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same was true for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus' Arithmetica, to have a proof for this fact 'that is too large for this margin to contain'. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his 'marvellous proof') took one man, Andrew Wiles, around a decade to complete. His proof involved showing that any solution to Fermat's equation would give rise to a very strange kind of equation (an elliptic curve) with no counterpart in the real world, that all equations of this type should have a partner of an equally abstract type (a modular form), and that the curve arising from a Fermat solution was too weird to have one; since it could not, no solution can logically exist.
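
Stated compactly, the theorem that caused all this fuss says that the equation

$$ a^n + b^n = c^n $$

has no solutions in positive whole numbers a, b and c for any whole-number power n greater than 2. For n = 2, solutions abound (3² + 4² = 5², Pythagoras' favourite); nudge the power up to 3 and, as Wiles finally confirmed, there are none at all.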

To a mathematician, this was the holy grail: not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by proving what is (now it has been solved) known as the modularity theorem, formerly the Taniyama-Shimura conjecture. To anyone interested in the real world, this exercise contributed nothing whatsoever; apart from satisfying a few nerds, nobody's life was made easier by the solution, it didn't solve any real-world problem, and it did not make the world a tangibly better place. In this respect, then, it was a total waste of time.

However, despite everything I've just said, I'm not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things: among them ridiculous, confusing, full of contradictions and potential slip-ups and, as the field in which the winners of its major prizes are younger than in any other branch of STEM, apparently full of people likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is simply what makes the world make sense, and at its heart that is all it was ever created to do. And if some people want their lives to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

SCIENCE!

One book that I always feel I should understand better than I do (it's the mechanics of light cones that stretch my ability to visualise) is Professor Stephen Hawking's 'A Brief History of Time'. The content is roughly what a physics or astronomy student would nowadays learn in first-year cosmology, but when the book was first released it was close to the cutting edge of modern physics. It is a testament to the great charm of Hawking's writing, as well as his ability to sell it, that the book has since sold millions of copies, and that Hawking himself is the most famous scientist of our age.

The reason I bring it up now is because of one passage from it that sprang to mind the other day (I haven't read it in over a year, but my brain works like that). In this extract, Hawking claims that some 500 years ago, it would have been possible for a (presumably rich, intelligent, well-educated and well-travelled) man to learn everything there was to know about science and technology in his age. This is, when one thinks about it, a rather bold claim, considering the vast scope of what 'science' covers; even five centuries ago this would have included medicine, biology, astronomy, alchemy (chemistry not yet having been properly invented), metallurgy and materials, every conceivable branch of engineering from agricultural to mining, and the early forerunners of physics, to name but a few. To discover everything would have been quite some task, but not, I think, an entirely impossible one, and Hawking's point stands: back then, there wasn't all that much 'science' around.

And now look at it. Someone with an especially good memory could perhaps memorise the contents of a year's worth of New Scientist, or even a few years of back issues if they were some kind of super-savant with far too much free time on their hands… and they still would have barely scratched the surface. In the last few centuries, and particularly the last hundred or so years, humanity's collective march of science has been inexorable: we have discovered neurology, psychology, electricity, cosmology, atoms and further subatomic particles, all of modern chemistry, several million new species, the ability to classify species at all, more medicinal and engineering innovations than you could shake a stick at, plastics, composites and carbon nanotubes, palaeontology, relativity, genomes, and even the speed of spontaneous combustion of a burrito (why? well, why the f&%$ not?). Yeah, we've come a long way.

The basis for all this change was laid during the scientific revolution of the 16th and 17th centuries. The precise cause of this change is somewhat unknown; there was no great upheaval, more a general feeling of 'hey, science is great, let's do something with it!'. Some would argue that the idea that there was any change in the pace of science itself is untrue, and that the groundwork for this period of advancing scientific knowledge was largely done by Muslim astronomers and mathematicians several centuries earlier. Others would say that the political and social changes that came with the Renaissance not only sent society reeling slightly, rendering it more pliable to new ideas and boundary-pushing, but also changed the way the rich and noble functioned. Instead of barons, dukes and the rest of the nobility simply resting on their laurels and raking in the cash as the feudal system had previously allowed them to, an increasing number of them began to contribute to the arts and sciences, becoming agents of change and, in some cases, agents in the advancement of science.

It took a long time for science to gain any real momentum. For many a decade, hardly anybody was a professional scientist or even engineer; people generally studied in their spare time. Universities were typically run by monks and populated by the sons of the rich or the younger sons of nobles; they were places where you both lived and learned, expensively, but they were not the centres of research that they are nowadays. They also harboured a huge degree of resistance to any ideas that challenged the authority of Aristotle and the other ancients whose works had been rediscovered, and as such getting one's new ideas taken seriously was a severe task. Just as many scientists were merely people who were interested in a subject and rich and intelligent enough to dabble in it as were people committed to learning. Then there was the notorious religious problem: whilst the Church had no problem with most scientific endeavours, the rise of astronomy began one long and ceaseless feud between the Church and physics over the fallibility of the Bible, and some, such as Galileo, were actively persecuted for their new claims; a few were even executed. But by far the biggest stumbling block was the sheer shortage of potential students of science: most common people were peasants, who would generally work the land at their lord's will, and had zero chance of elevating their life prospects beyond that. So: there was hardly anyone to do it, it was really, really hard to make any progress in, and you might get killed for trying. And yet, somehow, science just kept rolling onwards. A new theory here, an interesting experiment there, the odd interesting conversation between intellectuals, and new stuff kept turning up. No huge amount, but enough to keep things ticking over.

But, as the industrial revolution swept Europe, things started to change. As revolutions came and went, the power of the people started to rise, slowly squeezing out the influence and control of aristocrats by sheer weight of numbers. Power moved from the monarchy to the masses, from the Lords to the Commons; those with real control were the entrepreneurs and factory owners, not old men sitting in country houses on steadily shrinking estates. Society began to become more fluid, and anyone (well, more people than previously, anyway) could become the next big fish by inventing something new. Technology began to take on ever-increasing importance, and so, therefore, did its discovery. Research by experiment became ever more accessible, and science began to gather speed. During the 20th century things really began to motor: two world wars drove the search for new technologies at an even more frenzied pace, the universal schooling of children was breeding a new generation of thinkers, and the idea of a university as a place of learning and research became more cemented in popular culture. Anyone could think of something new, and in that respect everyone was a scientist.

And this, to me, is the key to the world we live in today: a world in which dozens of scientific papers are published every day in branches of science relevant largely for their own sake. But this isn't the true success story of science. The real success lies in the products and concepts we see every day: the iPhone, the pharmaceuticals, the infrastructure. Developing none of these discovered a new effect or a new material, or enabled us to better understand the way our thyroid gland works, and in that respect they are not science; but each required someone to think a little, to perhaps try a different way of doing something, to face a challenge. They pushed us forward one tiny, inexorable step, put a little bit more knowledge into the human race, and that, really, is the secret. There are 7 billion of us on this planet right now. Imagine if every single one contributed just one step forward.