Blubber

Fat is a much-maligned substance in the twenty-first century; exhortations for it to be burnt off or expunged from one’s diet abound from all sides, and indeed entire industries are now founded on dealing with the unwanted stuff in one form or another. However, fat is not, in fact, some demonic hate figure designed specifically to kill all that is good and beautiful about our world, and since it is at least relatively interesting I thought it might be worth investigating a few bits and pieces surrounding it over the course of a post.

All fats are based upon a molecule called glycerol, or propane-1,2,3-triol to give it its technical IUPAC name. Glycerol is a very interesting substance used for a wide range of purposes both in the body and commercially; it can be broken down to form sugar, can be used as a laxative, is an effective antifreeze, a useful solvent, a sweetener, is a key ingredient in the production of dynamite and, of course, can be used to store energy in fatty form. Glycerol is, technically speaking, an alcohol, but unlike most everyday alcohols (such as the ethanol upon which many of our favourite drinks are based) each glycerol molecule contains not one but three alcohol functional groups. In a fat, these alcohol groups act like sticking points, allowing three long-chain carboxylic acid molecules known as ‘fatty acids’ to attach to each glycerol molecule. For this reason, fats are also known as ‘triglycerides’, and precisely which fat is formed depends on the structure of these fatty acids.
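
To make that joining process concrete, it is a condensation (esterification) reaction; a sketch of the overall equation, with R standing in for each fatty acid’s long carbon chain (the notation is standard chemistry rather than anything from the original post), looks something like this:

\[ \mathrm{C_3H_5(OH)_3 \;+\; 3\,RCOOH \;\longrightarrow\; C_3H_5(OOCR)_3 \;+\; 3\,H_2O} \]

Each of glycerol’s three OH groups reacts with one fatty acid’s COOH group, releasing a molecule of water in the process.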

Fatty acids consisting of shorter chains of carbon atoms have fewer atoms with which to interact with their surroundings, and thus the intermolecular forces between the fatty acid chains and other molecules are weaker for shorter-chain acids. This has a number of effects on the properties of the final product, but one of the most obvious concerns its melting point; shorter-chain fatty acids generally result in a product that is liquid at room temperature, and such products are designated as ‘oils’ rather than fats. Thus, not all triglycerides are, technically speaking, fats, and triglycerides themselves are only one part of a larger chemical family of fat-like substances known as ‘lipids’ (organic chemistry can be confusing). As a general rule, plants tend to produce oils and animals produce fats (presumably for reasons of storage), which is why you get stuff like duck fat and olive oil rather than the reverse.
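
As a rough illustration of that chain-length trend, here is a little sketch using approximate literature melting points (quoted from memory, so treat the exact numbers as illustrative only):

```python
# Approximate melting points (degrees C) of a few saturated fatty acids,
# ordered by the number of carbon atoms in the chain.
# Values are rough literature figures, included purely for illustration.
fatty_acid_melting_points = {
    "butyric acid (C4)":   -8,
    "lauric acid (C12)":   44,
    "palmitic acid (C16)": 63,
    "stearic acid (C18)":  70,
}

ROOM_TEMPERATURE = 20  # degrees C

for name, mp in fatty_acid_melting_points.items():
    state = "solid (fat-like)" if mp > ROOM_TEMPERATURE else "liquid (oil-like)"
    print(f"{name}: melts around {mp} C -> {state} at room temperature")
```

The longer the chain, the stronger the intermolecular forces and the higher the melting point; somewhere along the way the product stops being an oil and starts being a fat.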

The structure of the fatty acids also underlies an important dietary consideration surrounding fats: whether they are saturated or unsaturated. In chemistry, carbon atoms are bonded to one another by covalent bonds, consisting of a shared pair of electrons (each atom providing one electron of the pair) that keeps the two atoms bonded together. Most of the time, only one pair of electrons forms the bond (known as a single bond), but sometimes the relevant carbon atoms have a surfeit of electrons and will create another shared pair, forming a double covalent bond. The nature of double bonds means that the carbon atoms involved can accept more hydrogen atoms (or other reagents such as bromine; bromine water is a good test for double bonds), whereas a molecule made up entirely of singly-bonded atoms couldn’t accept any more and would be said to be saturated with hydrogen. Thus, molecules (including fats and fatty acids) with only single bonds are described as saturated, whilst those with double bonds are known as unsaturated*. A mixture of the food industry and the chemical fraternity has developed a whole host of more specific descriptive terms that give you more detail as to the chemical structure of your fats (stuff like monounsaturated and such), and has also subdivided unsaturated fats into two more categories, cis- and trans-fats (the names refer to the molecules’ arrangement in space about the double bond, not their gender orientation).
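
A concrete pair of examples may help (both are standard eighteen-carbon fatty acids; the formulae are textbook chemistry rather than anything from the original post). Stearic acid is fully saturated, whilst oleic acid carries one cis double bond and so holds two fewer hydrogen atoms:

\[ \mathrm{CH_3(CH_2)_{16}COOH}\ (\text{stearic acid, saturated}) \qquad \mathrm{CH_3(CH_2)_7CH{=}CH(CH_2)_7COOH}\ (\text{oleic acid, monounsaturated}) \]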

With all these different labels, it’s no wonder people have so much trouble remembering, much less identifying, which fats they are ‘supposed to avoid’. Saturated and trans-unsaturated fats (which occur rarely in nature due to enzyme structure and are usually manufactured artificially) are apparently bad, mono-unsaturated (cis-) fats are good, and poly-unsaturated (cis-) fats good in moderation.

The extent to which these fats are ‘good’ and ‘healthy’ does not refer to the effect they will have on your waistline; all fats you eat are first broken down by your digestive process, and the resulting calories are then either used to power your body or turned into other sorts of fat that take up belly space. This process is the same for all types of energy-containing food and I shall come onto a few details about it in a paragraph or two. No, the relative health risk of these different fat types refers instead to the production of another type of lipid: cholesterol, which has such a complex, confusing structure and synthesis that I’m not even going to try to describe it. Cholesterol is a substance produced intentionally by the body and is very useful; it is used in the production of all sorts of hormones and vitamins, is a key ingredient of bile and helps cells rebuild themselves. It is transported through the body by two different substances known as LDL (low-density lipoprotein) and HDL (take a wild guess) that carry it via the bloodstream; and this is where problems arise. The precise mechanism behind it is not known, but an increased consumption of trans-fats and other ‘bad’ triglycerides leads to an increase in the amount of cholesterol and LDL in the bloodstream. If this stuff is allowed to build up, cholesterol can start to ‘stick’ to the sides of one’s blood vessels, slowly narrowing the vessel until it is almost completely shut. This greatly reduces the flow of blood through these vessels, and this can have particularly dramatic consequences if the large, important blood vessels close to or supplying the heart are affected, leading to coronary heart disease and a greatly increased risk of heart attacks. HDL, for some reason, apparently doesn’t contribute to this effect, leading HDL to be (misleadingly, since it’s not actually cholesterol) dubbed ‘good cholesterol’ and LDL ‘bad cholesterol’.

Clearly, then, having too much of these ‘bad fats’ can have some pretty serious consequences, but public realisation of this has led all fat to be considered as a disgusting thing to be shunned. Frankly, this is just plain old not true, and it is far easier to live a healthy life with a bit of meat** on the bones than to go down the super-skinny route. Fat is a vital body tissue, required for insulation, vitamin transport, energy storage and disease prevention, and it provides many essential nutrients; omega-3, the ‘essential’ fatty acid (meaning it is not produced by the body) found in fish that is thought to play a role in brain development and other bodily functions, is nothing more than an unusual fatty acid.

If you want further evidence as to the importance fat plays in one’s body, I refer you to a condition known as lipodystrophy, in which one’s body cannot produce or store fat properly. In some cases this is localised and relatively harmless, but in incredibly rare cases it manifests itself as a hereditary condition that causes abnormal bone and muscle growth, facial disfigurement and requires an incredibly strict diet (in direct contravention of the massive appetite the condition gives you) in order to control one’s levels of cholesterol and carbohydrate intake. In many cases, sufferers of this horrible condition will not live past twenty, if they even get that far.

*Vegetable oils tend to be unsaturated more often than animal fats are, as this is another factor that reduces their melting point and keeps them liquid. A key process involved in producing margarine involves taking these vegetable oils and adding hydrogen to these double bonds, a process known as hydrogenation, in order to raise their melting point and make the margarine solid and spreadable. Chemistry!

**Although, as anyone who likes their bacon skinny will tell you, fat is most certainly not meat. In fact, it’s not even alive.

Aurum Potestas Est

We as a race and a culture have a massive love affair with gold. It is the basis of our currency, the definitive mark of wealth and status, in some ways the bedrock of our society. We hoard it, we covet it, we hide it away except for special occasions, but we never really use it.

This is perhaps the strangest thing about gold; for something around which we have based our economy, it is remarkably useless. To be sure, gold has many advantageous properties; it is an excellent thermal and electrical conductor (though not quite the best; silver and copper both beat it) and is pretty easy to shape, leading it to be used widely in electrical contacts for computing and as heat shielding around the engine of the McLaren F1 supercar. But other than these relatively minor uses, gold is something we keep safe rather than make use of; it has none of the ubiquity nor usefulness of metals such as steel or copper. So why the gold standard? Why not base our economy around iron, around copper, around praseodymium (a long shot, I will admit), something a bit more functional? What makes gold so special?

In part we can put this down to gold’s chemical nature; as a transition metal it is hard, tough and solid at room temperature, meaning it can be mined, extracted, transported and used with ease and without degrading or breaking too easily. It is also very malleable, meaning it can be shaped easily to form coins and jewellery; shaping into coins is especially important in order to standardise the weight of metal worth a particular amount. However, by far its most defining chemical feature is its lack of reactivity; gold is very chemically stable in its pure, un-ionised, ‘native’ form, meaning it is unreactive, particularly with such common substances as oxygen and water, and for this reason it is often referred to as a noble metal. This means gold is usually found native, making it easier to identify and mine, but it also means that gold products take millennia to oxidise and tarnish, if they do so at all. Therefore, gold holds its purity like no other chemical (shush, helium & co.), and this means it holds its value like nothing else. Even silver, another noble and comparatively precious metal, will blacken eventually and lose its perfection, but not gold. To an economist, gold is eternal, and this makes it the most stable and safe of all potential investments. Nothing can replace it, it is always a safe bet; a fine thing to base an economy on.

However, just as important as gold’s refusal to tarnish and lose its beauty is the simple presence of a beauty to protect. This is partly down to the uniqueness of its colour; in the world around us there are many greens, blues, blacks, browns and whites, as well as the odd purple. However, red and yellow are (fire and a few types of fish and flower excepted) comparatively rare, and only four chemical elements that we commonly come across are red or yellow in colour: phosphorus, sulphur, copper and gold. And rusty iron, but… just no. Of the others, phosphorus is rather dangerous given its propensity to burst into flames (at least in its more common white form; the red allotrope is rather tamer), and is in any case so reactive that it is never found in nature as a pure element, red or otherwise. Sulphur is also reactive, also burns and also readily forms compounds; but these compounds have the added bonus of stinking to high heaven. It is partly for this reason, and partly for the fact that it turns blood-red when molten, that brimstone (aka sulphur) is heavily associated with hell, punishment and general sinfulness in the Bible, and that it would be rather an unpopular choice to base an economy on. In any case, the two non-metals have none of the properties that the transition metals copper and gold do: being malleable, hard, having a high melting point, and being shiny and pwettiful. Gold edged out over copper partly for its unreactivity as explored above (over time copper loses its reddish beauty and takes on a dull, greenish tarnish), but also because of its deep, beautiful, lustrous finish. That beauty made it precious to us, made it something we desired and lusted after, and (combined with gold’s relative rarity, which could be an entire section of its own) made it valuable. This value allows relatively small amounts of gold to represent large quantities of worth, and justifies its use as coinage, bullion and an economic standard.

However, for me the key feature of gold’s place as our defining scale of value concerns its relative uselessness. Consider the following scenario: in the centuries preceding the birth of Christ, the technology, warfare and overall political situation of the day were governed by one material, bronze. It was used to make swords, armour, jewellery, the lot; until one day some smartarse figured out how to smelt iron. Iron was easier to work than bronze, allowing better stuff to be made, and with some skill it could be turned into steel. Steel was stronger as well as more malleable than bronze, and could be tempered to change its properties; over time, skilled metalsmiths even learned how to make the edge of a sword blade harder than the centre, making it better at cutting whilst the core absorbed the impact. Much of this was still several hundred years in the future, but in the end the result was the same; bronze fell from grace and its societal value slumped. It is still around today, but it will never again enjoy its place as the metal that ruled the world.

Now, consider if that metal had been not bronze but gold. Something that had been ultra-precious, the king of all metals, reduced to something that was merely valuable. It would have been trumped by iron, and iron would carry the connotation of being better than it; gold’s value would have dropped. In any economic system, even a primitive one, having the substance around which your economy is based change in value would be catastrophic; when Mansa Musa travelled from Mali on a pilgrimage to Mecca, he stopped off in Cairo, then the home of the world’s foremost gold trade, and spent and gave away so much gold, from reserves the non-Malian world had never known about, that the price of gold collapsed and it took more than a decade for the Egyptian economy to recover. If gold were to have a purpose, it could be usurped; we might find something better, we might decide we don’t need it any more, and thus gold’s value, once supported by those wishing to buy it for this purpose, would drop. Gold is used so little that this simply doesn’t happen, making it the most economically stable of substances; it is valuable precisely and solely because we want it to be and, strange though it may seem, gold is always in fashion. Economically as well as chemically, gold is uniquely stable- the perfect choice around which to base a global economy.

Components of components of components…

By the end of my last post, science had reached the level of GCSE physics/chemistry; the world is made of atoms, atoms consist of electrons orbiting a nucleus, and a nucleus consists of a mixture of positively charged protons and neutrally charged neutrons. Some thought that this was the deepest level things could go; that everything was made simply of these three things and that they were the fundamental particles of the universe. However, others pointed out the enormous size difference between an electron and proton, suggesting that the proton and neutron were not as fundamental as the electron, and that we could look even deeper.

In any case, by this point our model of the inside of a nucleus was incomplete anyway; in 1932 James Chadwick had discovered (and named) the neutron, first theorised about by Ernest Rutherford to act as a ‘glue’ preventing the protons of a nucleus from repelling one another and causing the whole thing to break into pieces. However, nobody actually had any idea exactly how this worked, so in 1934 a concept known as the nuclear force was suggested. This theory, proposed by Hideki Yukawa, held that nucleons (then still considered fundamental particles) emitted particles he called mesons; smaller than nucleons, they acted as carriers of the nuclear force. The physics behind this is almost unintelligible to anyone who isn’t a career academic (as I am not), but this is because there is no equivalent to the nuclear force that we encounter during the day-to-day. We find it very easy to understand electromagnetism because we have all seen magnets attracting and repelling one another and see the effects of electricity every day, but the nuclear force was something altogether less familiar: a side effect of the constant exchange of mesons between nucleons*. The meson was finally found (proving Yukawa’s theory) in 1947, and Yukawa won the 1949 Nobel Prize for it. Mesons, it later turned out, are not fundamental either; in the modern picture the strong force is carried at the deepest level by particles called gluons (the name hints at this purpose, coming from the word ‘glue’), whilst Yukawa’s meson exchange survives as a description of the residual strong force that binds nucleons together.
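
The one thing about Yukawa’s idea that is relatively easy to grasp is its range. In the standard textbook form (included here as a sketch; none of this notation comes from the original post), the potential between two nucleons dies away exponentially, over a distance set by the mass of the exchanged meson:

\[ V(r) \;=\; -g^2\,\frac{e^{-r/a}}{r}, \qquad a = \frac{\hbar}{m_\pi c} \approx 1.4\ \text{fm} \]

Plugging in the pion mass of about 140 MeV/c² gives a range of roughly a femtometre- about the size of a nucleus, which is why the nuclear force holds nuclei together yet is completely undetectable in everyday life.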

*This, I am told, becomes a lot easier to understand once electromagnetism has been studied from the point of view of two particles exchanging photons, but I’m getting out of my depth here; time to move on.

At this point, the physics world decided to take stock; the list of all the different subatomic particles that had been discovered became known as ‘the particle zoo’, but our understanding of them was still patchy. We knew nothing of what the various nucleons and mesons consisted of, how they were joined together, or what allowed the strong nuclear force to even exist; where did mesons come from? How could these particles, carrying a sizeable fraction of a proton’s mass, be emitted from one without tearing the thing to pieces?

Nobody really had the answers to these, but when investigating them people began to discover other new particles, of a similar size and mass to the nucleons. Most of these particles were unstable and extremely short-lived, decaying into the undetectable in trillionths of trillionths of a second, but whilst they did exist they could be detected using incredibly sophisticated machinery and their existence, whilst not ostensibly meaning anything, was a tantalising clue for physicists. This family of nucleon-like particles was later called baryons, and in 1961 American physicist Murray Gell-Mann organised the various baryons and mesons that had been discovered into symmetrical groups, a system that became known as the eightfold way. The mesons, and the baryons with a ‘spin’ (a quantum property of subatomic particles that I won’t even try to explain) of 1/2, each formed groups of eight, or octets. The baryons with a spin of 3/2 (or one and a half) formed a larger group of ten; except that only nine of them had been discovered. Gell-Mann realised that the known members of this group fell into a clear pattern, and by extrapolating that pattern he was able to theorise about the existence of a tenth ‘spin 3/2’ baryon, which he called the omega baryon. This particle, with properties matching almost exactly those he predicted, was discovered in 1964 by a group experimenting with a particle accelerator (a wonderful device that takes two very small things and throws them at one another in the hope that they will collide and smash to pieces; particle physics is a surprisingly crude business, and few other methods have ever been devised for ‘looking inside’ these weird and wonderful particles), and Gell-Mann took the Nobel Prize five years later.
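
The prediction itself came from a strikingly simple piece of arithmetic (the numbers below are approximate and quoted from memory, so treat them as illustrative): the known spin-3/2 baryons were spaced almost equally in mass, roughly 150 MeV apart, so the mass of the missing tenth member could be read straight off the pattern:

\[ m_\Omega \;\approx\; m_{\Xi^*} + (m_{\Xi^*} - m_{\Sigma^*}) \;\approx\; 1530 + (1530 - 1385) \;=\; 1675\ \text{MeV}/c^2 \]

The omega baryon found in 1964 weighed in at about 1672 MeV/c², almost exactly on cue.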

But, before any of this, the principle of the eightfold way had been extrapolated a stage further. Gell-Mann, working in parallel with George Zweig, developed a theory concerning entirely hypothetical particles known as quarks; they imagined three ‘flavours’ of quark (which they called, completely arbitrarily, the up, down and strange quarks), each with their own properties of spin, electrical charge and such. They theorised that each of the properties of the different hadrons (as mesons and baryons are collectively known) could be explained by the fact that each was made up of a different combination of these quarks, and that the overall properties of each particle were due, basically, to the properties of their constituent quarks added together. At the time, this was considered somewhat airy-fairy; Zweig and Gell-Mann had absolutely no physical evidence, and their theory was essentially little more than a mathematical construct to explain the properties of the different particles people had discovered. Within a year, Sheldon Glashow and James Bjorken, supporters of the theory, suggested that a fourth quark, which they called the ‘charm’ quark, should be added to the theory in order to better explain radioactivity (ask me about the weak nuclear force, go on, I dare you). It was also later realised that a fourth quark would help make sense of particles like the kaon and pion, discovered in cosmic rays some 15 years earlier and never properly understood. Support for the quark theory grew; and then, in 1968, a team studying deep inelastic scattering (another wonderfully blunt technique that involves firing an electron at a nucleus and studying how it bounces off in minute detail) revealed a proton to consist of three point-like objects, rather than being the solid, fundamental blob of matter it had previously been thought of as. Three point-like objects matched exactly Zweig and Gell-Mann’s prediction for the existence of quarks; they had finally moved from mathematical theory to physical reality.
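
The ‘adding up’ of quark properties really is as simple as it sounds. Taking the now-standard charges of +2/3 for the up quark and −1/3 for the down quark (standard physics, not figures from the original post), the charges of the two nucleons fall straight out:

\[ \text{proton (uud)}:\ \tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3} = +1, \qquad \text{neutron (udd)}:\ \tfrac{2}{3}-\tfrac{1}{3}-\tfrac{1}{3} = 0 \]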

(The quarks discovered were of the up and down flavours; the charm quark wouldn’t be discovered until 1974, by which time two more quarks, the top and bottom, had been predicted to account for an incredibly obscure theory concerning the relationship between antimatter and normal matter. No, I’m not going to explain how that works. For the record, the bottom quark was discovered in 1977 and the top quark in 1995)

Nowadays, the six quarks form an integral part of the Standard Model; physics’ best attempt to explain how everything in the world works, at least on the level of fundamental interactions. Many consider them, along with the six leptons and the four force-carrying bosons*, to be the fundamental particles that everything is made of; these particles exist, are fundamental, and that’s an end to it. But the Standard Model is far from complete; it isn’t readily compatible with general relativity and doesn’t explain gravity, nor many observed effects in cosmology blamed on ‘dark matter’ or ‘dark energy’- plus it gives rise to a few paradoxical situations that we aren’t sure how to explain. Some say it just isn’t finished yet, and that we just need to think of another theory or two and discover another boson. Others say that we need to look deeper once again and find out what quarks themselves contain…

*A boson, in this context, is anything that ‘carries’ a fundamental force, like the gluon; the recently discovered Higgs boson sits slightly apart from this list, since it exists not to carry a force but to affect the behaviour of the W and Z bosons, giving them mass

The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was ‘what is our world made of?’. People reasoned that everything had to be made of something- all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some ‘stuff’ or other. Some reasoned that everything was made up of the most common things present in our earth; the classical ‘elements’ of earth, air, fire and water, but throughout the latter stages of the last millennium the burgeoning science of chemistry began to debunk this idea. People sought a new theory to answer what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions; why chemicals that reacted together did so in fixed ratios, for example. For a solution to this problem, they returned to an idea almost as old as science itself; that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these ‘joinings’ was what was responsible for chemical reactions and their behaviour. The earliest scientists who theorised the existence of these things called them corpuscles; nowadays we call them atoms.

By the turn of the twentieth century, thanks to two hundred years of chemists using atoms to conveniently explain their observations, it was common knowledge among the scientific community that the atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, with its small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance things out; Thomson postulated that the negative electrons ‘floated’ within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford sought to test the plum pudding model; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that some were deflected at wild angles but that most passed straight through. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when an alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but most of the time it would miss this positive charge completely; most of the atom was empty space. So, he concluded, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.

This made sense in theory, but the maths didn’t check out; classical physics predicted that the orbiting electrons would radiate away their energy and spiral into the nucleus, with the whole of creation smashing itself to pieces in a fraction of a second. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to be workable; these energy levels (or ‘shells’) were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr’s explanation drew heavily from Max Planck’s recent quantum theory, which modelled light as coming in discrete packets (photons) with fixed energies, and this suggested that electrons were also quantum particles; this ran contrary to people’s previous understanding of them, since they had been presumed to be solid ‘blobs’ of matter. This was but one step along the principle that defines quantum theory; nothing is actually real, everything is quantum, so don’t even try to imagine how it all works.
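
Bohr’s ‘discrete energy levels’ can be written down very compactly for hydrogen, the simplest atom (a standard textbook result, quoted here as a sketch): each shell n has one fixed energy, and an electron can only sit in one of these, never in between:

\[ E_n \;=\; -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots \]

Jumping between shells means absorbing or emitting a photon whose energy is exactly the difference between two of these levels, which is why atoms emit light only at very particular frequencies.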

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance’s mass, given how little the electrons weighed?

Subsequent experiments have revealed the atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly one millimetre in diameter, but if an atom were the size of St. Paul’s Cathedral then its nucleus would be the size of a full stop. Imagining the sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus’ structure; it took Ernest Rutherford (the guy who had disproved the plum pudding model) to take the first step along this road when he, in 1918, confirmed that the nucleus of a hydrogen atom comprised just one component (or ‘nucleon’, as we collectively call them today). Since this component had a positive charge, to cancel out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that all the other positive charges in larger atomic nuclei were caused by more protons stuck together in the nucleus. However, having multiple positive charges all in one place would normally cause them to repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of nuclear glue. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same size, collectively constitute around 99.95% of any given atom’s mass*, and between them make up every atomic nucleus (the simplest hydrogen nucleus being just a lone proton). However, even these weren’t quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; and I’ll pick up that story next time.
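
*For the record, that figure is just a quick bit of arithmetic for the simplest case, hydrogen (standard values, not anything from the original post): the proton is roughly 1836 times heavier than the electron, so

\[ \frac{m_p}{m_p + m_e} \;=\; \frac{1836}{1837} \;\approx\; 99.95\% \]

and heavier atoms come out much the same.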

Art vs. Science

All intellectual human activity can be divided into one of three categories: the arts, humanities, and sciences (although these terms are not exactly fully inclusive). Art here covers everything from the painted medium to music, everything that we humans do that is intended to be creative and make our world as a whole a more beautiful place to live in. The precise definition of ‘art’ is a major bone of contention among creative types and it’s not exactly clear where the boundary lies in some cases, but here we can categorise everything intended to be artistic as an art form. Science here covers every one of the STEM disciplines; science (physics, biology, chemistry and all the rest in its vast multitude of forms and subgenres), technology, engineering (strictly speaking those two come under the same branch, but technology is too satisfying a word to leave out of any self-respecting acronym) and mathematics. Certain portions of these fields could be argued to be pursued entirely for their own sake, and others are considered by some to be beautiful, but since the two rarely overlap the title of art is never truly appropriate. The humanities are an altogether trickier bunch to consider; on one hand they are, collectively, a set of sciences, since they purport to study how the world we live in behaves and functions. However, this particular set of sciences is deemed separate because it deals less with the fundamental principles of nature than with human systems, and human interactions with the world around them; hence the title ‘humanities’. Fields as diverse as economics and geography are all blanketed under this title, and are in some ways the most interesting of sciences as they are the most subjective and accessible; the principles of the humanities can be and usually are encountered on a daily basis, so anyone with a keen mind and an eye for noticing the right things can usually form an opinion on them. And a good thing too, otherwise I would be frequently short of blogging ideas.

Each field has its own proponents, supporters and detractors, and all are quite prepared to defend their chosen field to the hilt. The scientists point to the huge advancements in our understanding of the universe and world around us that have been made in the last century, and link these to the immense breakthroughs in healthcare, infrastructure, technology, manufacturing and general innovation and awesomeness that have so increased our quality of life (and life expectancy) in recent years. And it’s not hard to see why; such advances have permanently changed the face of our earth (both for better and worse), and there is a truly vast body of evidence supporting the idea that these innovations have provided the greatest force for making our world a better place in recent times. The artists provide the counterpoint to this by saying that living longer, healthier lives with more stuff in it is all well and good, but without art and creativity there is no advantage to this better life, for there is no way for us to enjoy it. They can point to the developments in film, television, music and design, all the ideas of scientists and engineers tuned to perfection by artists of each field, and even the development in more classical artistic mediums such as poetry or dance, as key features of the 20th century that enabled us to enjoy our lives more than ever before. The humanities have advanced too during recent history, but their effects are far more subtle; innovative strategies in economics, new historical discoveries and perspectives and new analyses of the way we interact with our world have all come, and many have made news, but their effects tend to only be felt in the spheres of influence they directly concern- nobody remembers how a new use of critical path analysis made J. Bloggs Ltd. use materials 29% more efficiently (yes, I know CPA is technically mathematics; deal with it). As such, proponents of humanities tend to be less vocal than those in other fields, although this may have something to do with the fact that the people who go into humanities have a tendency to be more… normal than the kind of introverted nerd/suicidally artistic/stereotypical-in-some-other-way characters who would go into the other two fields.

This bickering between arts & sciences as to the worthiness/beauty/parentage of the other field has led to something of a divide between them; some commentators have spoken of the ‘two cultures’ of arts and sciences, leaving us with a sect of scientists who find it impossible to appreciate the value of art and beauty, thinking it almost irrelevant compared to what their field aims to achieve (to their loss, in my opinion). I’m not sure that this picture is entirely true; what may be more so, however, is the other end of the stick, those artistic figures who dominate our media who simply cannot understand science beyond GCSE level, if that. It is true that quite a lot of modern science is very, very complex in the details, but Albert Einstein is famously credited with saying that if a scientific principle cannot be explained to a ten-year-old then it is almost certainly wrong, and I tend to agree with him. Even the theory behind the existence of the Higgs boson, right at the cutting edge of modern physics, can be explained by an analogy of a room full of fans and celebrities. Oh, look it up, I don’t want to wander off topic here.

The truth is, of course, that no field can sustain a world without the other; a world devoid of STEM would die out in a matter of months, a world devoid of humanities would be hideously inefficient and appear monumentally stupid, and a world devoid of art would be the most incomprehensibly dull place imaginable. Not only that, but all three working in harmony will invariably produce the best results, as master engineer, inventor, craftsman and creator of some of the most famous paintings of all time Leonardo da Vinci so ably demonstrated. As such, any argument between fields as to which is ‘the best’ or ‘the most worthy’ will simply never be won, and will just end up a futile task. The world is an amazing place, but the real source of that awesomeness is the diversity it contains, both in terms of nature and in terms of people. The arts and sciences are not at war, nor should they ever be; for in tandem they can achieve so much more.

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you dear readers would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, as well as a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (ie one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire usually forms in air, most of the ones we are familiar with tend to be oxidation reactions: oxygen from the air bonding chemically with the substance in question (although there are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it while it fizzes violently and pushes itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode). The emission of heat causes a relatively gentle warming effect for the immediate area, but close to the site of the reaction itself a very large amount of heat is released into a small space. This excites the molecules of air close to the reaction and causes them to move and vibrate violently, emitting photons of electromagnetic radiation as they do so, in the form of heat and light (among other things). These photons are what we see as the glowing, visible flame; the large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame carries a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning. Most often the glowing flame results in the characteristic reddy/orange colour of fire, but some reactions, such as the potassium one mentioned, emit radiation of other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), causing the flames to be of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all visible wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly, until it gets far enough away from the source of heat to cool down and stop glowing; this process happens all the time with hundreds of packets of hot air, causing them to flicker back and forth.
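
Going back to the temperature-colour link for a moment: it can be roughly quantified using Wien’s displacement law, which strictly applies to ideal ‘black body’ emitters rather than real flames, so take this as an indicative sketch only (the formula is standard physics, not from the original post):

\[ \lambda_{\text{peak}} \;=\; \frac{2.9\times10^{-3}\ \text{m·K}}{T} \]

A glowing ember at around 1000 K peaks deep in the infrared (about 2900 nm), with only the red tail of its emission visible, whereas something at 6000 K (roughly the temperature of the sun’s surface) peaks in the middle of the visible range and looks white.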

However, we must remember that fires do not just give out heat, but must take some in too. This is to do with the way the chemical reaction that generates the heat works; the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to burn up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit, or about 233 degrees centigrade (which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is only a little higher at around 300 degrees centigrade, both of which are well below the temperature of a spark or flame. However, we must remember that neither fuel will ignite if it is wet, as water is not a fuel that can be burnt, meaning that it often takes a while to dry wood out sufficiently for it to catch, and that big, solid blocks of wood take quite a bit of energy to heat up.
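
The reason temperature matters quite so much is captured by the Arrhenius equation, a standard bit of chemistry included here purely as a reference sketch:

\[ k \;=\; A\,e^{-E_a/RT} \]

Here k is the rate of the reaction, A is a constant for the reaction in question, E_a is the activation energy, R is the gas constant and T is the temperature; because the temperature sits inside an exponential, a modest rise in T produces a huge rise in the reaction rate, which is why a fire that has ‘caught’ suddenly takes off.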

From all of this information we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff it reacts with and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the other ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire since it will snuff out its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Provision of heat is also the main reason behind all of our classical methods of putting a fire out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch and also why doing so will cause the flames, which are caused by hot air remember, to disappear but the embers to glow more brightly and burn with renewed vigour once you have stopped blowing).  Once a fire has sufficient heat, it is almost impossible to put out and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself have once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on it and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day where we might understand all of it. Yes, it would be a long way off, yes there would be stumbling blocks, but maybe, just maybe, so long as we don’t discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in the year 1814 a brilliant mathematician and philosopher, responsible for underpinning vast quantities of modern mathematics and cosmology, called Pierre-Simon Laplace published a bold new article that took this concept to extremes. Laplace lived in the age of ‘the clockwork universe’, a theory that held Newton’s laws of motion to be sacrosanct truths and claimed that these laws of physics caused the universe to just keep on ticking over, just like the mechanical innards of a clock- and just like a clock, the universe was predictable. Just as one hour after five o’clock will always be six, presuming a perfect clock, so every result in the world can be predicted from what came before. Laplace’s arguments took such theory to its logical conclusion; if some vast intellect were able to know the precise positions of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
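
Laplace’s idea is easy to demonstrate in miniature with a computer, which is, in a sense, exactly the ‘vast intellect’ he imagined, only on a very small scale. The toy sketch below (entirely my own illustration, not anything from Laplace) steps a thrown ball forward in time using Newton’s laws; given the same starting position and velocity, it will predict exactly the same future every single time:

```python
# A toy 'clockwork universe': a ball thrown under constant gravity.
# Given exact initial conditions, Newton's laws fix its entire future.

def simulate(x, y, vx, vy, dt=0.01, g=9.81):
    """Step the ball forward in small time increments until it hits the ground."""
    path = [(x, y)]
    while y >= 0.0:
        vy -= g * dt          # Newton's second law: constant downward acceleration
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

# Same starting conditions -> exactly the same predicted future, every time.
trajectory = simulate(x=0.0, y=2.0, vx=5.0, vy=3.0)
flight_time = (len(trajectory) - 1) * 0.01
print(f"Lands after about {flight_time:.2f} s, roughly {trajectory[-1][0]:.2f} m away.")
```

Scale that idea up from one ball to every particle in existence and you have Laplace’s ‘demon’.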

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (although these issues are still considered somewhat up for debate to this day). However, among the scientific community Laplace’s ideas conjured up a flurry of debate; some entirely believed in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any ‘vast intellect’ to fully comprehend so much as a heap of sand made Laplace’s arguments completely pointless. Other, far later, observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton’s laws of motion (which collapse when one does not take into account relativity at very high velocities); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms perhaps, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, to these scientists their work now had one overarching goal; to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, and the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (which is, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of disorder or ‘randomness’ inherent in a system; or, for want of a clearer explanation, consider a sandy beach. All of the grains of sand in the beach can be arranged in a vast number of different ways to form the shape of a disorganised heap, but if we make a giant, detailed sandcastle instead there are far fewer arrangements of the grains of sand that will result in the same structure. Therefore, if we just consider the two situations separately, it is far, far more likely that we will end up with a disorganised ‘beach’ structure rather than a castle forming of its own accord (which is why sandcastles don’t spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. On an atomic scale, this increased likelihood of higher-entropy situations means that the universe tends towards an ever-greater overall level of entropy; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must input energy, which increases the entropy of the surrounding air and thus results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
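
The beach-versus-sandcastle picture is essentially Boltzmann’s famous definition of entropy written in prose (a standard formula, included here for reference rather than being part of the original post): the entropy of a situation depends on the number of microscopic arrangements W that all look the same from the outside:

\[ S \;=\; k_B \ln W \]

A disorganised heap can be realised by vastly more arrangements of sand grains than any particular sandcastle can, so W, and therefore S, is far larger for the heap.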

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point; a starting point of order from which things get steadily more chaotic, for order cannot increase infinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under ‘our’ laws of physics that we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to get the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of the existence of God to start everything off; it would be some years before Edwin Hubble’s observations of an expanding universe helped give rise to the Big Bang theory, but even now we understand next to nothing about the moment of our creation.

However, this argument wasn’t exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a physicist, often described as one of the fathers of quantum mechanics (work which won him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level; that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at the same time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something’s position (apologies in advance to all physics students I end up annoying). If we want to know where something is, then the tried and tested method is to look at the thing; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle’s energy that it changes its velocity and momentum, and we cannot predict how. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it’s simply not possible.
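
Heisenberg’s statement has a very compact mathematical form (the standard modern one, given here purely for reference): the uncertainties in position and momentum can never both be made arbitrarily small, because their product has a hard lower limit set by Planck’s constant:

\[ \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2} \]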

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn’t know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, dedicating the rest of his life’s work to attempting to disprove quantum mechanics and back up his famous statement that ‘God does not play dice with the universe’. But eventually the scientific world came to accept the truth; that determinism was dead. The universe would never seem so sure and predictable again.

Drunken Science

In my last post, I talked about the societal impact of alcohol and its place in our everyday culture; today, however, my inner nerd has taken it upon himself to get stuck into the real meat of the question of alcohol, the chemistry and biology of it all, and how all the science fits together.

To a scientist, the word ‘alcohol’ does not refer to a specific substance at all, but rather to a family of chemical compounds containing an oxygen and hydrogen atom bonded to one another (known as an OH group) on the end of a chain of carbon atoms. Different members of the family (or ‘homologous series’, to give it its proper name) have different numbers of carbon atoms and have slightly different physical properties (such as melting point), and they also react chemically to form slightly different compounds. The stuff we drink is that with two carbon atoms in its chain, and is technically known as ethanol.
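
For reference, the family has a tidy general formula (standard textbook chemistry rather than anything from the original post): for a chain of n carbon atoms,

\[ \mathrm{C_nH_{2n+1}OH}: \quad \mathrm{CH_3OH}\ (\text{methanol}),\ \mathrm{C_2H_5OH}\ (\text{ethanol}),\ \mathrm{C_3H_7OH}\ (\text{propanol}),\ \dots \]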

There are a few things about ethanol that make it special stuff to us humans, and all of them come down to chemical reactions and biological interactions. The first is the formation of it; there are many different types of sugar found in nature (fructose & sucrose are two common examples; the ‘-ose’ ending is what denotes them as sugars), but one of the most common is glucose, with six carbon atoms. This is the substance our body converts starch and other sugars into in order to use for energy or store as glycogen. As such, many biological systems are primed to convert other sugars into glucose, and it just so happens that when glucose breaks down in the presence of the right enzymes and no oxygen, it forms carbon dioxide and an alcohol; ethanol, to be precise, in a process known as fermentation (strictly speaking, glycolysis followed by fermentation, if you ask a biochemist).
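
The overall reaction is simple enough to write in one line (the standard fermentation equation):

\[ \mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH \;+\; 2\,CO_2} \]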

Yeast performs this process in order to respire (ie produce energy) anaerobically (in the absence of oxygen), which leads to the two most common situations where this reaction occurs. The first we know as brewing, in which an anaerobic atmosphere is deliberately produced to make alcohol; the other occurs when baking bread. The yeast we put in the bread causes the sugar (ie glucose) in it to produce carbon dioxide, which is what causes the bread to rise since it has been filled with gas, whilst the ethanol tends to boil off in the heat of the baking process. For industrial purposes, ethanol is instead made by hydrating (reacting with water) an oil by-product called ethene, but the product isn’t generally something you’d want to drink.
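
That industrial route is a one-step addition reaction (again, standard chemistry rather than anything specific to this post): steam is added across ethene’s double bond, usually over an acid catalyst:

\[ \mathrm{C_2H_4 \;+\; H_2O \;\longrightarrow\; C_2H_5OH} \]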

But anyway, back to the booze itself, and this time what happens upon its entry into the body. Exactly why alcohol acts as a depressant and intoxicant (if that’s a proper word) is down to a very complex interaction with various parts and receptors of the brain that I am not nearly intelligent enough to understand, let alone explain. However, what I can explain is what happens when the body gets round to breaking the alcohol down and getting rid of the stuff. This takes place in the liver, an amazing organ that performs hundreds of jobs within the body and contains a vast repertoire of enzymes. One of these is known as alcohol dehydrogenase, which has the task of oxidising the alcohol (not a simple task, and one impossible without enzymes) into something the body can get rid of. However, the ethanol we drink is what is known as a primary alcohol (meaning the OH group is on the end of the carbon chain), and this means it oxidises in two stages, only the first of which can be done using alcohol dehydrogenase. This first stage converts the alcohol into an aldehyde (with an oxygen chemically double-bonded to the carbon where the OH group was), which in the case of ethanol is called acetaldehyde (or ethanal). This molecule cannot be broken down straight away, and instead hangs around in the body’s tissues, where (thanks to its shape and reactivity) it acts as a mild toxin, activates our immune system and makes us feel generally lousy. This is also known as having a hangover, and only ends when the body is able to complete the second stage of the oxidation process and convert the acetaldehyde into acetic acid, which the body can get rid of relatively easily. Acetic acid is commonly known as the active ingredient in vinegar, which is why alcoholics smell so bad and are often said to be ‘pickled’.
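
Written out as a chain, the liver’s two-stage clean-up job looks like this (a simplified sketch; each arrow hides an enzyme and a fair bit of chemistry I’m glossing over):

\[ \mathrm{C_2H_5OH}\ (\text{ethanol}) \;\longrightarrow\; \mathrm{CH_3CHO}\ (\text{ethanal/acetaldehyde}) \;\longrightarrow\; \mathrm{CH_3COOH}\ (\text{ethanoic/acetic acid}) \]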

This process occurs in the same way when other alcohols enter the body, but ethanol is unique in how harmless (relatively speaking) its aldehyde is. Methanol, for example, can also be oxidised by alcohol dehydrogenase, but the aldehyde it produces (officially called methanal) is commonly known as formaldehyde; a highly toxic substance used in preservation work and as a disinfectant that will quickly poison the body. It is for this reason that methanol is present in the fuel commonly known as ‘meths’- ethanol actually produces more energy per gram and makes up 90% of the fuel by volume, but since meths is far cheaper than most alcoholic drinks, the toxic methanol is added to prevent it being drunk by severely desperate alcoholics. Not that it stops many of them; methanol poisoning is a leading cause of death among many homeless people.

Homeless people were also responsible for a major discovery in the field of alcohol research, concerning the causes of alcoholism. For many years it was thought that alcoholics were purely addicts mentally rather than biologically, and had just ‘let it get to them’, but some years ago a young student (I believe she was Canadian, but certainty of that fact and her name both escape me) was looking for some fresh cadavers for her PhD research. She went to the police and asked if she could use the bodies of the various dead homeless people who they found on their morning beats, and when she started dissecting them she noticed signs of a compound in them that was known to be linked to heroin addiction. She mentioned to a friend that all these people appeared to be on heroin, but her friend said that these people barely had enough to buy drink, let alone something as expensive as heroin. This young doctor-to-be realised she might be onto something here, and changed the focus of her research onto studying how alcohol was broken down by different bodies, and discovered something quite astonishing. Inside serious alcoholics, ethanol was being broken down into this substance previously only linked to heroin addiction, leading her to believe that for some unlucky people, the behaviour of their bodies made alcohol as addictive to them as heroin was to others. Whilst this research has by no means settled the issue, it did demonstrate two important facts; firstly, that whilst alcoholism certainly has some links to mental issues, it is also fundamentally biological and genetic by nature and cannot be solely put down as the fault of the victim’s brain. Secondly, it ‘sciencified’ (my apologies to grammar nazis everywhere for making that word up) a fact already known by many reformed drinkers; that when a former alcoholic stops drinking, they can never go back. Not even one drink. There can be no ‘just having one’, or drinking socially with friends, because if one more drink hits their body, deprived for so long, there’s a very good chance it could kill them.

Still, that’s not a reason to get totally down about alcohol, for two very good reasons. The first of these comes from some (admittedly rather spurious) research suggesting that ‘addictive personalities’, including alcoholics, are far more likely to do well in life, have good jobs and overall succeed; alcoholics are, by nature, present at the top as well as the bottom of our society. The other concerns the one bit of science I haven’t tried to explain here- your body is remarkably good at dealing with alcohol, and we all know it can make us feel better, so if only for your mental health a little drink now and then isn’t an altogether bad thing after all. And anyway, it makes for some killer YouTube videos…

SCIENCE!

One book that I always feel like I should understand better than I do (it’s the mechanics concerning light cones that stretch my ability to visualise) is Professor Stephen Hawking’s ‘A Brief History of Time’. The content is roughly what a Physics or Astronomy student would nowadays cover in first-year cosmology, but when it was first released it was close to the cutting edge of modern physics. It is a testament to the great charm of Hawking’s writing, as well as his ability to sell it, that the book has since sold millions of copies, and that Hawking himself is arguably the most famous scientist of our age.

The reason I bring it up now is because of one passage from it that sprang to mind the other day (I haven’t read it in over a year, but my brain works like that). In this extract, Hawking claims that some 500 years ago, it would have been possible for a (presumably rich, intelligent, well-educated and well-travelled) man to learn everything there was to know about science and technology in his age. This is, when one thinks about it, a rather bold claim, considering the vast scope of what ‘science’ covers- even five centuries ago this would have included medicine, biology, astronomy, alchemy (chemistry not having really been invented yet), metallurgy and materials, every conceivable branch of engineering from agricultural to mining, and the early forerunners of physics, to name but some. To learn all of it would have been quite some task, but I don’t think an entirely impossible one, and Hawking’s point stands: back then, there wasn’t all that much ‘science’ around.

And now look at it. Someone with an especially good memory could perhaps memorise the contents of a year’s worth of New Scientist, or perhaps even a few years of back issues if they were some kind of super-savant with far too much free time on their hands… and they still would have barely scratched the surface. In the last few centuries, and particularly the last hundred or so years, humanity’s collective march of science has been inexorable- we have discovered neurology, psychology, electricity, cosmology, atoms and further subatomic particles, all of modern chemistry, well over a million new species, the very ability to classify species at all, more medicinal and engineering innovations than you could shake a stick at, plastics, composites and carbon nanotubes, palaeontology, relativity, genomes, and even the speed of spontaneous combustion of a burrito (why? well why the f&%$ not?). Yeah, we’ve come a long way.

The basis for all this change was laid during the scientific revolution of the 16th and 17th centuries. The precise cause of this change is somewhat unknown- there was no great upheaval, more a general feeling of ‘hey, science is great, let’s do something with it!’. Some would argue that the idea that there was any change in the pace of science itself is untrue, and that the groundwork for this period of advancing scientific knowledge was largely done by Muslim astronomers and mathematicians several centuries earlier. Others would say that the political and social changes that came with the Renaissance not only sent society reeling slightly, rendering it more pliable to new ideas and boundary-pushing, but also changed the way that the rich and noble functioned. Instead of barons, dukes and the rest of the nobility simply resting on their laurels and raking in the cash as the feudal system had previously allowed them to, an increasing number of them began to contribute to the arts and sciences, becoming agents of change and, in some cases, active patrons of scientific advancement.

It took a long time for science to gain any real momentum. For many a decade, hardly anybody was a professional scientist or even engineer; most studied in their spare time. Universities were typically run by monks and populated by the sons of the rich or the younger sons of nobles- they were places where you both lived and learned, expensively, but they were not the centres of research that they are nowadays. They also harboured a huge degree of resistance to any ideas that went against the teachings of Aristotle and the other ancients rediscovered centuries earlier, and as such trying to get one’s new ideas taken seriously was a daunting task. As such, just as many scientists were merely people who were interested in a subject and rich and intelligent enough to dabble in it as were people truly committed to learning. Then there was the notorious religious problem- whilst the Church had no problem with most scientific endeavours, the rise of astronomy began a long and ceaseless feud between the Church and the physicists over the fallibility of the Bible, and some, most famously Galileo, were actively persecuted for their new claims; Giordano Bruno was even burned at the stake. But by far the biggest stumbling block was the sheer scarcity of potential students of science- most common people were peasants, who would generally work the land at their lord’s will, and had next to no chance of raising their life prospects any higher than that. So- there was hardly anyone to do it, it was really, really hard to make any progress in and you might get killed for trying. And yet, somehow, science just kept on rolling onwards. A new theory here, an interesting experiment there, the odd interesting conversation between intellectuals, and new stuff kept turning up. No huge amount, but it was enough to keep things ticking over.

But, as the industrial revolution swept Europe, things started to change. As revolutions came and went, the power of the people started to rise, slowly squeezing out the influence and control of aristocrats by sheer weight of numbers. Power moved from the monarchy to the masses, from the Lords to the Commons- those with real control were the entrepreneurs and factory owners, not old men sitting in country houses on steadily shrinking estates. Society became more fluid, and anyone (well, more people than previously, anyway) could become the next big fish by inventing something new. Technology became of ever-increasing importance, and so, therefore, did its discovery. Research by experiment became ever more accessible, and science began to gather speed. During the 20th century things really began to motor- two world wars drove the search for new technologies to an even more frenzied pace, the universal schooling of children was breeding a new generation of thinkers, and the idea of a university as a place of learning and research became ever more cemented in popular culture. Anyone could think of something new, and in that respect everyone was a scientist.

And this, to me, is the key to the world we live in today- a world in which countless scientific papers are published every day, many in branches of science pursued largely for their own sake. But this isn’t the true success story of science. The real success lies in the products and concepts we see every day- the iPhone, the pharmaceuticals, the infrastructure. Developing them didn’t turn up a new effect or a new material, or help us better understand the way our thyroid gland works, and in that respect they are not science- but each required someone to think a little bit, to perhaps try a different way of doing something, to face a challenge. They pushed us forward one tiny, inexorable step, put a little bit more knowledge into the human race, and that, really, is the secret. There are 7 billion of us on this planet right now. Imagine if every single one contributed just one step forward.

The Age of Reason

Science is a wonderful thing- particularly in the modern age, where the more adventurous (or more willing to tempt fate, depending on your point of view) like to think that most of science is actually pretty well done and dusted. I mean, yes, there are a lot of little details we have yet to work out, but the big stuff, the major hows and whys, has been basically sorted out. We know why there are rainbows, why quantum tunnelling composite appears to defy basic logic, and even why you always seem to pick the slowest queue- science appears to have got it pretty much covered.

[I feel I must take this opportunity to point out one of my favourite stories about the world of science- at the start of the 20th century, there was a prevailing attitude among physicists that physics, as an advancing science, was only going to last for about another 20 years or so. They basically presumed that they had worked almost everything out, and now all they had to do was tie up the loose ends. However, one particular loose end, the photoelectric effect, simply refused to budge under their classical scientific laws. Max Planck had already suggested that energy came in discrete packets, and it was Einstein who cracked the problem by modelling light (which everyone knew was a wave) as a stream of particles instead, opening the door to the modern age of quantum theory. Physics as a whole took one look at all the new questions this raised and, as one, performed a collective facepalm.]
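For reference, the standard textbook statement of that idea (not something quoted from the book, just the relations as they’re usually taught): light of frequency f arrives in packets of energy

\[
E = hf, \qquad E_{k,\max} = hf - \phi
\]

where h is Planck’s constant and \(\phi\) is the ‘work function’ of the metal, the minimum energy needed to free an electron from its surface. That’s why light below a threshold frequency ejects no electrons at all, however bright it is- which is precisely the bit classical wave theory couldn’t explain.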

In any case, we are now at such an advanced stage of the scientific revolution, that there appears to be nothing, in everyday life at least, that we cannot, at least in part, explain. We might not know, for example, exactly how the brain is wired up, but we still have enough of an understanding to have a pretty accurate guess as to what part of it isn’t working properly when somebody comes in with brain damage. We don’t get exactly why or how photons appear to defy the laws of logic, but we can explain enough of it to tell you why a lens focuses light onto a point. You get the idea.

Any scientist worth his salt will scoff at this- a chemist will bang on about the fact that carbon nanotubes were only discovered a couple of decades ago and will revolutionise the world within another, a biologist will tell you about the myriad species we know next to nothing about, and the myriad more that we haven’t even discovered yet, and a theoretical physicist will start quoting logical impossibilities and make you feel like a complete fool. But this is all, really, rather high-level science- the day-to-day stuff is all pretty much done. Right?

Well… it’s tempting to think so. But in reality all those scientists are pretty much correct- Newton’s great ocean of truth remains very much a wild and unexplored place, and not just in the nerdy corners that nobody without three separate doctorates can understand. There are some things that everybody, from the lowliest man in the street to the cleverest scientist, can observe and describe completely, and yet not understand in the slightest.

Take, for instance, the case of Sugar the cat. Sugar was a part-Persian with a hip deformity who often got uncomfortable in cars, so when her family moved house, they opted to leave her with a neighbour. After a couple of weeks, Sugar disappeared, before reappearing 14 months later… at her family’s new house. What makes this story even more remarkable? The fact that Sugar’s owners had moved from California to Oklahoma, and that a cat with a severe hip problem had trekked 1500 miles, over 100 a month, to a place she had never even seen. How did she manage it? Nobody has a sodding clue.

This isn’t the only story of long-distance cat return, although Sugar holds the distance record. But an ability to navigate that a lot of sat navs would be jealous of isn’t the only surprising oddity in the world of nature. Take leopards, for example. The most common, yet hardest to find and possibly deadliest, of ‘The Big Five’, leopards are, as everyone knows, born killers. Humans, by contrast, are in many respects born prey- we are slow over short distances, have no horns, claws, long teeth or other natural defences, are fairly poor at hiding and don’t even live in herds for safety in numbers. Especially vulnerable are, of course, babies and young children, who by animal standards take an enormously long time even to stand upright, let alone mature. So why exactly, in 1938, were a leopard and her cubs found living with a near-blind human child whom she had carried off as a baby five years earlier? Even more remarkable was the superlative sense of smell the child had, being able to differentiate between different people and even objects with nothing more than a good sniff- which also reminds me of a video I saw a while ago of a blind Scottish boy who can tell what material something is made of and how far away it is (well enough to play basketball) simply by making a clicking sound with his mouth.

I’m not really sure what I’m trying to say in this post- I have a sneaking suspicion my subconscious simply wanted to give me an excuse to share some of the weirdest stories I have yet to see on Cracked.com. So, to round off, I’ll leave you with a final one. In 1984 a hole was found in a farm in Washington State, about 3 metres by 2 and around 60cm deep. 25 metres away, the three tons of grass-covered earth that had previously filled the hole was found- completely intact, in a single block. One person described it as looking like it had been cut away with ‘a gigantic cookie cutter’, but this failed to explain why all the roots hanging off it were intact. There were no tracks or any distinguishing features apart from a dribble of earth leading between hole and divot, and the closest thing anyone had to an explanation was to lamely point out that there had been a minor earthquake 20 miles away a week beforehand.

When I invent a time machine, forget killing Hitler- the first thing I’m doing is going back to find out what the &*^% happened with that hole.