Aurum Potestas Est

We as a race and a culture have a massive love affair with gold. It is the basis of our currency, the definitive mark of wealth and status, in some ways the bedrock of our society. We hoard it, we covet it, we hide it away except for special occasions, but we never really use it.

This is perhaps the strangest thing about gold; for something around which we have based our economy, it is remarkably useless. To be sure, gold has many advantageous properties: it is an excellent thermal and electrical conductor (though silver actually beats it on both counts) and is pretty easy to shape, which has seen it used widely in electrical contacts for computing and as heat shielding in the engine bay of the McLaren F1 supercar. But beyond these relatively minor uses, gold is something we keep safe rather than make use of; it has none of the ubiquity or usefulness of such metals as steel or copper. So why base our economy around gold? Why not iron, why not copper, why not praseodymium (a long shot, I will admit), something a bit more functional? What makes gold so special?

In part we can blame gold’s chemical nature: as a transition metal it is tough, dense and solid at room temperature, meaning it can be mined, extracted, transported and used with ease, without degrading or breaking too easily. It is also very malleable, meaning it can be shaped easily to form coins and jewellery; shaping into coins is especially important in order to standardise the weight of metal worth a particular amount. However, by far its most defining chemical feature is its reactivity: gold is very chemically stable in its pure, un-ionised, ‘native’ form, meaning it is unreactive, particularly with such common substances as oxygen and water; for this reason it is often referred to as a noble metal. This means gold is usually found native, making it easier to identify and mine, but it also means that gold products take millennia to oxidise and tarnish, if they do so at all. Therefore, gold holds its purity like no other chemical (shush, helium & co.), and this means it holds its value like nothing else. Even silver, another noble and comparatively precious metal, will blacken eventually and lose its perfection, but not gold. To an economist, gold is eternal, and this makes it the most stable and safe of all potential investments. Nothing can replace it, it is always a safe bet; a fine thing to base an economy on.

However, just as important as gold’s refusal to tarnish and so protect its beauty is the simple presence of a beauty to protect. This is partly down to the uniqueness of its colour; in the world around us there are many greens, blues, blacks, browns and whites, as well as the odd purple. Red and yellow, however, are (fire and a few types of fish and flower excepted) comparatively rare, and only four chemical elements that we commonly come across are red or yellow in colour: phosphorus, sulphur, copper and gold. And rusty iron, but… just no. Of the others, phosphorus (red) is rather dangerous given its propensity to burst into flames, is also commonly found as a boring old white element, and is rather reactive, meaning it is not often found in its reddish form. Sulphur is also reactive, also burns and also readily forms compounds; but these compounds have the added bonus of stinking to high heaven. It is partly for this reason, and partly for the fact that it turns blood-red when molten, that brimstone (aka sulphur) is heavily associated with hell, punishment and general sinfulness in the Bible, and that it would be rather an unpopular choice to base an economy on. In any case, the two non-metals have none of the properties that the transition metals copper and gold do: being malleable, hard-wearing, having a high melting point, and being shiny and pwettiful. Gold edged out over copper partly for its unreactivity, as explored above (over time copper loses its reddish beauty and takes on a dull green patina), but also because of its deep, beautiful, lustrous finish. That beauty made it precious to us, made it something we desired and lusted after, and (combined with gold’s relative rarity, which could be an entire section of its own) made it valuable. This value allows relatively small amounts of gold to represent large quantities of worth, and justifies its use as coinage, bullion and an economic standard.

However, for me the key feature of gold’s place as our defining scale of value concerns its relative uselessness. Consider the following scenario: in the centuries preceding the birth of Christ, the technology, warfare and overall political situation of the day were governed by one material, bronze. It was used to make swords, armour, jewellery, the lot; until one day some smartarse figured out how to smelt iron. Iron was easier to work than bronze, allowing better stuff to be made, and with some skill it could be turned into steel. Steel was stronger as well as more malleable than bronze, and could be tempered to change its properties; over time, skilled metalsmiths even learned how to make the edge of a sword blade harder than the core, making it better at cutting whilst the core absorbed the impact. Much of this refinement lay several hundred years in the future, but in the end the result was the same: bronze fell from grace and its societal value slumped. It is still around today, but it will never again enjoy its place as the metal that ruled the world.

Now, consider if that metal had been, instead of bronze, gold. Something that had been ultra-precious, the king of all metals, would have been reduced to something merely valuable; trumped by iron, which would now carry the connotation of being the better metal, gold’s value would have dropped. In any economic system, even a primitive one, having the substance around which your economy is based change in value would be catastrophic. When Mansa Musa travelled from Mali on a pilgrimage to Mecca, he stopped off in Cairo, then the home of the world’s foremost gold trade, and spent and gave away so much gold that its price collapsed, and it took more than a decade for the Egyptian economy to recover. If gold had a purpose, it could be usurped; we might find something better, we might decide we don’t need it any more, and thus gold’s value, once supported by those wishing to buy it for that purpose, would drop. Gold is used so little that this simply doesn’t happen, making it the most economically stable of substances; it is valuable precisely and solely because we want it to be and, strange though it may seem, gold is always in fashion. Economically as well as chemically, gold is uniquely stable- the perfect choice around which to base a global economy.


What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the sense of ‘understand’ that applies where computers are concerned. A computer is a rare example of a complex system that an expert can genuinely understand in minute detail: every single aspect of its workings, what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety is therefore an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with relatively few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated some reading into the hows and whys of installation, which brought with it quite a lot of info about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform a range of jobs, from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t so reliable. The human brain is not built for infallibility and, not infrequently, mistakes were made. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th-century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator in 1642, aged just 19. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (i.e. a turn of four places on the ‘units’ cog when a seven was already counted would bring up eleven), and it could work with currency denominations and distances as well. It could also subtract, multiply and divide (with some difficulty), and moreover proved an important point: that a mechanical machine could cut out the human error factor and reduce any inaccuracy to one of simply entering the wrong number.
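That carrying principle is easy enough to sketch in modern code. What follows is a toy model only- an odometer-style decimal counter in the spirit of Pascal’s machine, not a description of his actual mechanism (which, among other things, also had wheels for non-decimal currency units):

```python
# A toy model of the Pascaline's carrying principle: each wheel counts
# 0-9, and a full revolution nudges the next wheel along one place,
# odometer-style. Illustrative only- not Pascal's actual mechanism.

class Pascaline:
    def __init__(self, wheels=6):
        self.digits = [0] * wheels  # digits[0] = units, digits[1] = tens, ...

    def turn(self, wheel, places):
        """Advance one wheel by a number of places, carrying leftward as needed."""
        self.digits[wheel] += places
        while wheel < len(self.digits) and self.digits[wheel] > 9:
            carry, self.digits[wheel] = divmod(self.digits[wheel], 10)
            if wheel + 1 < len(self.digits):
                self.digits[wheel + 1] += carry  # a full revolution ticks the next wheel
            wheel += 1

    def reading(self):
        return int(''.join(str(d) for d in reversed(self.digits)))

# The example from the text: seven already counted on the 'units' cog,
# then a turn of four more places brings up eleven via the carry.
p = Pascaline()
p.turn(0, 7)
p.turn(0, 4)
print(p.reading())  # 11
```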

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several more, of a range of designs, were built during the 18th century as showpieces, but in the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to fully understand, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
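The core idea, though, is surprisingly elegant: set the starting value of a polynomial and its finite differences on the columns, and every subsequent entry in the table falls out of pure addition, no multiplication required. Here is a rough sketch of that principle in modern code, using f(x) = x³ as an arbitrary example (the engine itself, of course, did all this with columns of decimal wheels):

```python
# The difference engine's trick, in modern terms: the "columns" hold a
# polynomial's value and its finite differences, and each new table entry
# needs nothing but additions. The example polynomial f(x) = x**3 is
# arbitrary, not one of Babbage's actual tables.

def difference_engine(columns, steps):
    """Tabulate a polynomial by repeated addition of finite differences."""
    cols = list(columns)  # cols[0] = f(x), cols[1] = Δf, cols[2] = Δ²f, ...
    table = [cols[0]]
    for _ in range(steps):
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]  # each column absorbs its right-hand neighbour
        table.append(cols[0])
    return table

# For f(x) = x**3 at x = 0, 1, 2, ...: f(0) = 0, Δf = 1, Δ²f = 6, Δ³f = 6.
print(difference_engine([0, 1, 6, 6], 5))  # [0, 1, 8, 27, 64, 125]
```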

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but since he was often brash (once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him) and prized academia above fiscal matters and practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine; they pulled the plug, and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program (which could be a lot more complicated than simple adding or subtracting) and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure that programs were performed in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the Science Museum in London in 1991, yielding results accurate to 31 decimal places.
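That separation of program from data is the detail worth dwelling on, and a toy sketch makes it concrete. The instruction set below is entirely invented for illustration- Babbage’s design used punched cards and a rather different repertoire of operations- but it shows the idea: one program, a separate stream of data, a memory ‘store’ and a control flow jump:

```python
# A toy stored-program machine: the program and the data are separate
# inputs, results live in a memory "store", and a conditional jump
# provides control flow. The instruction names are invented for
# illustration; they are not Babbage's.

def run(program, data):
    memory = {}           # the "store"
    cards = iter(data)    # data input, read one card at a time
    pc = 0                # position in the program
    while pc < len(program):
        op, *args = program[pc]
        if op == "READ":            # load the next data card into memory
            memory[args[0]] = next(cards)
        elif op == "ADD":           # memory[dest] = memory[a] + memory[b]
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        elif op == "JUMP_IF_POS":   # control flow: branch if a value is positive
            if memory[args[0]] > 0:
                pc = args[1]
                continue
        elif op == "PRINT":         # send a result to the "printer"
            print(memory[args[0]])
        pc += 1

# One program, two different data inputs:
prog = [("READ", "x"), ("READ", "y"), ("ADD", "z", "x", "y"), ("PRINT", "z")]
run(prog, [3, 4])    # prints 7
run(prog, [10, 25])  # prints 35
```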

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey through the history of the computer, and we’ll see if I can get onto an actual explanation of how the things work.