Aurum Potestas Est

We as a race and a culture have a massive love affair with gold. It is the basis of our currency, the definitive mark of wealth and status, in some ways the bedrock of our society. We hoard it, we covet it, we hide it away except for special occasions, but we never really use it.

This is perhaps the strangest thing about gold; for something around which we have based our economy, it is remarkably useless. To be sure, gold has many advantageous properties; it is an excellent thermal and electrical conductor (though silver and copper both beat it on that score) and is pretty easy to shape, leading it to be used widely in contacts for computing and on the engine cover of the McLaren F1 supercar. But other than these relatively minor uses, gold is something we keep safe rather than make use of; it has none of the ubiquity nor usefulness of such metals as steel or copper. So why did we settle on the gold standard? Why not base our economy around iron, around copper, around praseodymium (a long shot, I will admit), something a bit more functional? What makes gold so special?

In part we can blame gold’s chemical nature; as a transition metal it is hard, tough, and solid at room temperature, meaning it can be mined, extracted, transported and used with ease and without degenerating or breaking too easily. It is also very malleable, meaning it can be shaped easily to form coins and jewellery; shaping into coins is especially important in order to standardise the weight of metal worth a particular amount. However, by far its most defining chemical feature is its reactivity; gold is very chemically stable in its pure, unionised, ‘native’ form, meaning it is unreactive, particularly with such common substances as oxygen and water; for this reason it is often referred to as a noble metal. This means gold is usually found native, making it easier to identify and mine, but it also means that gold products take millennia to oxidise and tarnish, if they do so at all. Therefore, gold holds its purity like no other chemical (shush, helium & co.), and this means it holds its value like nothing else. Even silver, another noble and comparatively precious metal, will blacken eventually and lose its perfection, but not gold. To an economist, gold is eternal, and this makes it the most stable and safe of all potential investments. Nothing can replace it, it is always a safe bet; a fine thing to base an economy on.

However, just as important as gold’s refusal to tarnish and protect its beauty is the simple presence of a beauty to protect. This is partly down to the uniqueness of its colour; in the world around us there are many greens, blues, blacks, browns and whites, as well as the odd purple. However, red and yellow are (fire and a few types of fish and flower excepted) comparatively rare, and only four chemical elements that we commonly come across are red or yellow in colour: phosphorus, sulphur, copper and gold. And rusty iron but… just no. Of the others, phosphorus (red) is rather dangerous given its propensity to burst into flames, is more commonly found as a boring old white element anyway, and is rather reactive, meaning it is not often found in its reddish form. Sulphur is also reactive, also burns and also readily forms compounds; but these compounds have the added bonus of stinking to high heaven. It is partly for this reason, and partly for the fact that it turns blood-red when molten, that brimstone (aka sulphur) is heavily associated with hell, punishment and general sinfulness in the Bible, and that it would be rather an unpopular choice to base an economy on. In any case, the two non-metals have none of the properties that the transition metals copper and gold do: being malleable, hard, having a high melting point, and being shiny and pwettiful. Gold edged out copper partly for its unreactivity as explored above (over time copper loses its reddish beauty and takes on a dull green patina), but also because of its deep, beautiful, lustrous finish. That beauty made it precious to us, made it something we desired and lusted after, and (combined with gold’s relative rarity, which could be an entire section of its own) made it valuable. This value allows relatively small amounts of gold to represent large quantities of worth, and justifies its use as coinage, bullion and an economic standard.

However, for me the key feature of gold’s place as our defining scale of value concerns its relative uselessness. Consider the following scenario: in the years preceding the birth of Christ, the technology, warfare and overall political situation of the day were governed by one material, bronze. It was used to make swords, armour, jewellery, the lot; until one day some smartarse figured out how to smelt iron. Iron was easier to work than bronze, allowing better stuff to be made, and with some skill it could be turned into steel. Steel was stronger as well as more malleable than bronze, and could be tempered to change its properties; over time, skilled metalsmiths even learned how to make the edge of a sword blade harder than the centre, making it better at cutting whilst the core absorbed the impact. That last trick was still several hundred years in the future, but in the end the result was the same: bronze fell from grace and its societal value slumped. It is still around today, but it will never again enjoy its place as the metal that ruled the world.

Now, consider if that metal had, instead of bronze, been gold. Something that had been ultra-precious, the king of all metals, reduced to something that was merely valuable. It would have been trumped by iron, and iron would carry the connotation of being better than it; gold’s value would have dropped. In any economic system, even a primitive one, having the substance around which your economy is based change in value would be catastrophic; when Mansa Musa travelled from Mali on a pilgrimage to Mecca, he stopped off in Cairo, then the home of the world’s foremost gold trade, and spent and gave away so much gold, from a hoard the non-Malian world had never known existed, that the price of gold collapsed and it took more than a decade for the Egyptian economy to recover. If gold were to have a purpose, it could be usurped; we might find something better, we might decide we don’t need it any more, and thus gold’s value, once supported by those wishing to buy it for this purpose, would drop. Gold is used so little that this simply doesn’t happen, making it the most economically stable substance; it is valuable precisely and solely because we want it to be and, strange though it may seem, gold is always in fashion. Economically as well as chemically, gold is uniquely stable- the perfect choice around which to base a global economy.

Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much faith people are able to place in a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we must compare them to what we think of the brand and the label, what reputation this product has and what it is particularly good at, which we think most suits our social standing and how others will judge our use of it. And a good thing too, from many companies’ perspective; otherwise the amount of business they do would be slashed. There are many companies whose success can be almost entirely put down to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, typecasting Apple as a brand-built company is harsh; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to establish a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that- I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores anyway) that, once the rest of the market catches up, Apple’s version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely because iTunes redefined the music industry before everyone else got there and remains competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products with a few things they do really, really well, products that are basically capable of doing everything else too. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them, a nice company with a good history but one that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the most valuable company in the world. Bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company, worth more than every single one (excluding state-owned companies such as Saudi Aramco, which is estimated to be worth around 3 trillion dollars thanks to its control of Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first- the Apple II was the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first smartphone as we would now recognise one (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first tablet computer to actually catch on. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these products are usually rubbish and make the Apple version shine by comparison. This also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off it and moved on to something else. Innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From the company’s earliest days, Apple fans have dubbed themselves the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that thousands of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and the longevity of its delivery. This is an image that has been present in their advertising from the very beginning, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

The Interesting Instrument

Music has been called the greatest thing that humans do; some are of the opinion that, even if only in the form of songs sung around the campfire, it is the oldest example of human art. However, whilst a huge amount of music’s effect and impact can be put down to the way it is interpreted by our ears and brain (I once listened to a song comprised entirely of various elements of urban sound, each individually recorded by separate microphones and each made louder or softer in order to create a tune), to create new music and allow ourselves true creative freedom over the sounds we make requires us to make and play instruments of various kinds. And, of all the myriad of different musical instruments humankind has developed, honed and used to make prettyful noises down the years, perhaps none is as interesting to consider as the oldest and most conceptually abstract of the lot: the human voice.

To those of us not part of the musical fraternity, the idea of the voice being considered an instrument at all is a very odd one; it is used most of the time simply to communicate, and is thus perhaps unique among instruments in that its primary function is not musical. However, to consider a voice as merely an addition to a piece of music rather than an instrumental part of it is to dismiss its importance to the sound of the piece, and as such it must be considered one by any composer or songwriter looking to produce something coherent. It is also an incredibly diverse tool at a musician’s disposal; a competent singer is capable of a large range of notes on their own, and by combining the voices of different people one can produce a tonal range rivalled only by the piano, making it the only instrument regularly used as the sole component of a musical entity (ie in a choir). Admittedly, not using it in conjunction with other instruments does rather limit what it can do without looking really stupid, but it is nonetheless a quite amazingly versatile musical tool.

The voice also has a huge advantage over every other instrument in that absolutely anyone can ‘play’ it; even people who self-confessedly ‘can’t sing’ may still find themselves mumbling their favourite tune in the shower or singing along with their iPod occasionally. Not only that, but it is the only instrument that does not require any tool in addition to the body in order to play, meaning it is carried with everyone absolutely everywhere, thus giving everybody listening to a piece of music a direct connection to it; they can sing, mumble, or even just hum along. Not only is this a wet dream from a marketer’s perspective, enabling word-of-mouth spread to increase its efficiency exponentially, but it also takes live music to another level of awesome (imagine a music festival without thousands of screaming fans belting out the lyrics) and just makes music that much more compelling and, indeed, human to listen to.

However, the main artistic reason for the fundamental musical importance of the voice has more to do with what it can convey- but to adequately explain this, I’m going to need to go off on a quite staggeringly over-optimistic detour as I try to explain, in under 500 words, the artistic point of music. Right, here we go…:

Music is, fundamentally, an art form, and thus (to a purist at least) can be said to exist for no purpose other than its own existence, and for making the world a better place for those of us lucky enough to be in it. However, art in all its forms is now an incredibly large field with literally millions of practitioners across the world, so just making something people find pretty doesn’t really cut it any more. This is why some extraordinarily gifted painters can produce something next to perfectly photo-realistic and make a couple of grand from it, whilst Damien Hirst can put a shark in some formaldehyde and sell it for a few million. What people are really interested in buying, especially when it comes to ‘modern’ art, is not the quality of brushwork or prettifulness of the final result (which are fairly common nowadays), but its meaning, its significance, what it is trying to convey; the story, theatre and uniqueness behind it all (far rarer commodities which, thanks to the simple economic law of supply and demand, are much more expensive).

(NB: This is not to say that I don’t think the kind of people who buy Tracey Emin pieces are rather gullible and easily led, and apparently have far more money than they do tangible grip on reality- but that’s a discussion for another time, and the above is certainly how they would justify their purchases)

Thus, the real challenge to any artist worth his salt is to try and create a piece that has meaning, symbolism, and some form of emotion; and this applies to every artistic field, be it film, literature, paintings, videogames (yes, I am on that side of the argument) or, to try and wrench this post back on-topic, music. The true beauty and artistic skill of music, the key to what makes those songs that transcend mere music alone so special, lies in giving a song emotion and meaning, and in this function the voice is the perfect instrument. Other instruments can produce sweet, tortured strains capable of playing the heartstrings like a violin, but by virtue of being able to produce those tones in the form of language, capable of delivering an explicit message to redouble the effect of the emotional one, a song can take on another level of depth, meaning and artistry. A voice may not be the only way to make your song explicitly mean something, and quite often it’s not used in such an artistic capacity at all; but when it is used properly, it can be mighty, mighty effective.

An Opera Possessed

My last post left the story of JRR Tolkien immediately after the writing of his first bestseller: the rather charming, lighthearted, almost fairy story of a tale that was The Hobbit. This was a major success, and not just among the ‘children aged between 6 and 12’ demographic identified by young Rayner Unwin; adults lapped up Tolkien’s work too, and his publishers Allen & Unwin were positively rubbing their hands in glee. Naturally, they requested a sequel, a request to which Tolkien’s attitude appears to have been along the lines of ‘challenge accepted’.

Even for someone holding down the rigours of another job, and even accounting for the phenomenal length of his finished product, writing a book is a process that takes a few months for a professional writer (Dame Barbara Cartland once released 25 books in the space of a year, but that’s another story), and perhaps a year or two for an amateur like Tolkien. He started writing the book in December 1937, and it was finally published 18 years later, in 1955.

This was partly a reflection of the difficulties Tolkien had in publishing his work (more on that later), but it also reflects the measured, meticulous and very serious approach Tolkien took to his writing. At least three times he started his story from scratch, each time going in a completely different direction with an entirely different plot. His first effort, for instance, was due to chronicle another adventure of his protagonist Bilbo from The Hobbit, making it a direct sequel in both a literal and spiritual sense. However, he then remembered the ring Bilbo found beneath the mountains, won (or stolen, depending on your point of view) from the creature Gollum, and the strange power it held; not just invisibility, as was Bilbo’s main use for it, but the hypnotic effect it had on Gollum (he even subsequently rewrote that scene for The Hobbit‘s second edition to emphasise that effect). He decided that the strange power of the ring was a more natural direction to follow, and so he wrote about that instead.

Progress was slow. Tolkien went months at a time without working on the book, making only occasional, sporadic yet highly focused bouts of progress. Huge amounts were cross-referenced or borrowed from his earlier writings concerning the mythology, history & background of Middle Earth, Tolkien constantly trying to make his mythic world feel and, in a sense, be as real as possible, but it was mainly due to the influence of his son Christopher, to whom Tolkien would send chapters whilst Christopher was away serving in the Second World War in his father’s native South Africa, that the book ever got finished at all. When it eventually was, Tolkien had been working on the story of Bilbo’s heir Frodo and his adventure to destroy the Ring of Power for over 12 years. His final work was over 1000 pages long, spread across six ‘books’, as well as being laden with appendices to explain & offer background information, and he called it The Lord of The Rings (in reference to his overarching antagonist, the Dark Lord Sauron).

A similar story had, incidentally, been attempted once before; Der Ring des Nibelungen is an opera (well, four operas) written by German composer Richard Wagner during the 19th century, traditionally performed over the course of four consecutive nights (yeah, you have to be pretty committed to sit through all of that) and also known as ‘The Ring Cycle’- it’s where ‘Ride of The Valkyries’ comes from. The opera follows the story of a ring, made from the Rhinegold (gold stolen from the bed of the Rhine river), and the trail of death, chaos and destruction it leaves in its wake between its forging & destruction. Many commentators have pointed out the close similarities between the two, and as a keen follower of Germanic mythology Tolkien certainly knew the story, but he rubbished any suggestion that he had borrowed from it, saying “Both rings were round, and there the resemblance ceases”. You can probably work out my approximate personal opinion from the title of this post, although I wouldn’t read too much into it.

Even once his epic was finished, the problems weren’t over. He quarrelled with Allen & Unwin over his desire to release LOTR in one volume, along with his still-incomplete Silmarillion (that he wasn’t allowed to may explain all the appendices). He then turned to Collins, but they claimed his book was in urgent need of an editor and a licence to cut (my words, not theirs, I should add). Many other people have voiced this complaint since, but Tolkien refused and demanded that Collins publish by 1952. This they failed to do, so Tolkien wrote back to Allen & Unwin and eventually agreed to publish his book in three parts: The Fellowship of The Ring, The Two Towers, and The Return of The King (a title Tolkien, incidentally, detested because it told you how the book ended).

Still, the book was out now, and the critics… weren’t that enthusiastic. Well, some of them were, certainly, but the book has always had its detractors among the world of literature, and that was most certainly the case upon its release. The New York Times criticised Tolkien’s academic approach, saying he had “formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself”, whilst others claimed it, and its characters in particular, lacked depth. Even Hugo Dyson, one of Tolkien’s close friends and a member of his own literary group, spent public readings of the book lying on a sofa shouting complaints along the lines of “Oh God, not another elf!”. Unlike The Hobbit, which had been a light-hearted children’s story in many ways, The Lord of The Rings was darker & more grown up, dealing with themes of death, power and evil and written in a far more adult style; this could be said to have exposed it to more serious critics and a harder gaze than its predecessor, causing some to be put off by it (a problem that wasn’t helped by the sheer size of the thing).

However, I personally am part of the other crowd, those who have voiced their opinions in nearly 500 five-star reviews on Amazon (although one should never read too much into such figures) and who agree with the likes of CS Lewis, The Sunday Telegraph and The Sunday Times of the day that “Here is a book that will break your heart”, that it is “among the greatest works of imaginative fiction of the twentieth century” and that “the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them”. These are the people who have shown the truth in the review of the New York Herald Tribune: that Tolkien’s masterpiece was and is “destined to outlast our time”.

But… what exactly is it that makes Tolkien’s epic so special, such a fixture; why, years after its publication, is this first genuinely great work of fantasy still widely regarded as the finest the genre has ever produced? I could probably write an entire book just to try and answer that question (and several people probably have done), but to me it is because Tolkien understood, absolutely perfectly and fundamentally, exactly what he was trying to write. Many modern fantasy novels try to be uber-fantastical, or try to base themselves around an idea or a concept, in some way trying to find their own level of reality on which their world can exist, and they often find themselves in a sort of awkward middle ground; but Tolkien never suffered that problem, because he knew that, quite simply, he was writing a myth, and he knew exactly how that was done. Terry Pratchett may have mastered comedic fantasy, George RR Martin may be the king of political-style fantasy, but only JRR Tolkien has, in recent times, been able to harness the awesome power of the first source of story: the legend, told around the campfire, of the hero and the villain, of the character defined by their virtues over their flaws, of the purest, rawest adventure in the pursuit of saving what is good and true in this world. These are the stories written to outlast the generations, and Tolkien’s mastery of them is, to me, the secret to his masterpiece.

Drunken Science

In my last post, I talked about the societal impact of alcohol and its place in our everyday culture; today, however, my inner nerd has taken it upon himself to get stuck into the real meat of the question of alcohol, the chemistry and biology of it all, and how all the science fits together.

To a scientist, the word ‘alcohol’ does not refer to a specific substance at all, but rather to a family of chemical compounds containing an oxygen and hydrogen atom bonded to one another (known as an OH group) attached to a chain of carbon atoms. Different members of the family (or ‘homologous series’, to give it its proper name) have different numbers of carbon atoms and slightly different physical properties (such as melting point), and they also react chemically to form slightly different compounds. The stuff we drink is the one with two carbon atoms in its chain, and is technically known as ethanol.
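For the simpler members of the series, this translates into a neat general formula (standard textbook shorthand rather than anything exotic):

$$C_nH_{2n+1}OH$$

where n is the number of carbon atoms; n = 1 gives methanol ($CH_3OH$), n = 2 gives the ethanol we drink ($C_2H_5OH$), and so on up the chain.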

There are a few things about ethanol that make it special stuff to us humans, and all of them come down to chemical reactions and biological interactions. The first is its formation; there are many different types of sugar found in nature (fructose & sucrose are two common examples; the ‘-ose’ ending is what denotes them as sugars), but one of the most common is glucose, with six carbon atoms. This is the substance our body converts starch and other sugars into in order to use for energy or store as glycogen. As such, many biological systems are primed to convert other sugars into glucose, and it just so happens that when glucose breaks down in the presence of the right enzymes, it forms carbon dioxide and an alcohol; ethanol, to be precise, in a process known to scientists as glycolysis followed by fermentation, and to everyone else as just fermentation.
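Written out as an overall equation (simplified, since the enzymes and all the intermediate steps are hidden inside that arrow), each molecule of glucose yields two molecules of ethanol and two of carbon dioxide:

$$C_6H_{12}O_6 \rightarrow 2\,C_2H_5OH + 2\,CO_2$$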

Yeast performs this process in order to respire (ie produce energy) anaerobically (in the absence of oxygen), leading to the two most common situations in which this reaction occurs. The first we know as brewing, in which an anaerobic atmosphere is deliberately produced to make alcohol; the other occurs when baking bread. The yeast we put in the bread causes the sugar (ie glucose) in it to produce carbon dioxide, which is what causes the bread to rise since it has been filled with gas, whilst the ethanol tends to boil off in the heat of the baking process. For industrial purposes, ethanol is instead made by hydrating (reacting with water) an oil by-product called ethene, but the product isn’t generally something you’d want to drink.
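That industrial route is, on paper, even simpler than fermentation, although in practice it needs steam, high pressure and a phosphoric acid catalyst to go at a useful rate:

$$C_2H_4 + H_2O \rightarrow C_2H_5OH$$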

But anyway, back to the booze itself, and this time what happens upon its entry into the body. Exactly why alcohol acts as a depressant and intoxicant (if that’s a proper word) is down to a very complex interaction with various parts and receptors of the brain that I am not nearly intelligent enough to understand, let alone explain. However, what I can explain is what happens when the body gets round to breaking the alcohol down and getting rid of the stuff. This takes place in the liver, an amazing organ that performs hundreds of jobs within the body and contains a vast repertoire of enzymes. One of these is known as alcohol dehydrogenase, which has the task of oxidising the alcohol (not a simple task, and one impossible without enzymes) into something the body can get rid of. However, the ethanol we drink is what is known as a primary alcohol (meaning the OH group is on the end of the carbon chain), and this causes it to oxidise in two stages, only the first of which can be done using alcohol dehydrogenase. This first stage converts the alcohol into an aldehyde (with an oxygen chemically double-bonded to the carbon where the OH group was), which in the case of ethanol is called acetaldehyde (or ethanal). This molecule cannot be broken down straight away, and instead gets itself lodged in the body’s tissues in such a way (thanks to its shape) as to produce mild toxins, activate our immune system and make us feel generally lousy. This is also known as having a hangover, and it only ends when the body is able to complete the second stage of the oxidation process and convert the acetaldehyde into acetic acid, which the body can get rid of relatively easily. Acetic acid is commonly known as the active ingredient in vinegar, which is why alcoholics smell so bad and are often said to be ‘pickled’.
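As a pair of heavily simplified equations (simplified because each step really also involves the cofactor NAD+, which I’m sweeping under the carpet, and the second step is handled by a sister enzyme called aldehyde dehydrogenase), the whole clean-up process looks like this:

$$CH_3CH_2OH \;\xrightarrow{\text{alcohol dehydrogenase}}\; CH_3CHO \;\xrightarrow{\text{aldehyde dehydrogenase}}\; CH_3COOH$$

That middle molecule, acetaldehyde, is the hangover; the faster your liver can push it through to the acetic acid on the right, the better you feel.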

This process occurs in the same way when other alcohols enter the body, but ethanol is unique in how harmless (relatively speaking) its aldehyde is. Methanol, for example, can also be oxidised by alcohol dehydrogenase, but the aldehyde it produces (officially called methanal) is commonly known as formaldehyde; a highly toxic substance, used in preservation work and as a disinfectant, that will quickly poison the body. It is for this reason that methanol is present in the fuel commonly known as ‘meths’- ethanol actually produces more energy per gram and makes up 90% of the fuel by volume, but since meths is far cheaper than most alcoholic drinks, the toxic methanol is added to prevent it being drunk by severely desperate alcoholics. Not that it stops many of them; methanol poisoning is a leading cause of death among many homeless people.
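For comparison, here is the same two-stage oxidation applied to methanol; both products, formaldehyde (methanal) and then formic (methanoic) acid, are considerably nastier than their ethanol-derived cousins, the formic acid being the main culprit behind the blindness and acidosis of methanol poisoning:

$$CH_3OH \;\rightarrow\; HCHO \;\rightarrow\; HCOOH$$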

Homeless people were also responsible for a major discovery in the field of alcohol research, concerning the causes of alcoholism. For many years it was thought that alcoholics were addicts purely mentally rather than biologically, and had just ‘let it get to them’, but some years ago a young student (I believe she was Canadian, but certainty of that fact and her name both escape me) was looking for some fresh cadavers for her PhD research. She went to the police and asked if she could use the bodies of the various dead homeless people whom they found on their morning beats, and when she started dissecting them she noticed signs of a compound in them that was known to be linked to heroin addiction. She mentioned to a friend that all these people appeared to be on heroin, but her friend pointed out that these people barely had enough to buy drink, let alone something as expensive as heroin. This young doctor-to-be realised she might be onto something, changed the focus of her research to studying how alcohol was broken down by different bodies, and discovered something quite astonishing. Inside serious alcoholics, ethanol was being broken down into this substance previously only linked to heroin addiction, leading her to believe that, for some unlucky people, the behaviour of their bodies made alcohol as addictive to them as heroin was to others. Whilst this research has by no means settled the issue, it did demonstrate two important facts: firstly, that whilst alcoholism certainly has some links to mental issues, it is also fundamentally biological and genetic by nature, and cannot be solely put down as the fault of the victim’s brain. Secondly, it ‘sciencified’ (my apologies to grammar nazis everywhere for making that word up) a fact already known by many reformed drinkers; that when a former alcoholic stops drinking, they can never go back. Not even one drink. There can be no ‘just having one’, or drinking socially with friends, because if one more drink hits their body, deprived for so long, there’s a very good chance it could kill them.

Still, that’s not a reason to get totally down about alcohol, for two very good reasons. The first of these comes from some (admittedly rather spurious) research suggesting that ‘addictive personalities’, including alcoholics, are far more likely to do well in life, have good jobs and overall succeed; alcoholics are, by nature, present at the top as well as the bottom of our society. The other concerns the one bit of science I haven’t tried to explain here- your body is remarkably good at dealing with alcohol, and we all know it can make us feel better, so if only for your mental health a little drink now and then isn’t an altogether bad thing. And anyway, it makes for some killer YouTube videos…

Other Politicky Stuff

OK, I know I talked about politics last time, and no, I don’t want to start another series on this, but when writing my last post I found myself getting rapidly sidetracked when I tried to use voter turnout to demonstrate that everyone hates their politicians, so I thought I might dedicate a post to that particular train of thought as well.

You see, across the world, but predominantly in the developed west where the right to choose our leaders has been around for ages, fewer and fewer people are turning out each time to vote. By way of an example, Ronald Reagan famously won a ‘landslide’ victory when coming to power in 1980- but only actually attracted the votes of 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but thoughts about introducing such a system elsewhere have frequently met with opposition and claims that it goes against people’s democratic right to abstain (this argument is largely rubbish, but there’s no time for that now).

A lot of reasons have been suggested for this trend, among them a sense of political apathy, laziness, and the idea that having had the right to choose our leaders for so long means we no longer find such an idea special or worth exercising. For contrast, the recent presidential election in Venezuela – a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos – saw a voter turnout of nearly 90% (incumbent president Hugo Chavez winning with 54% of the vote to claim his fourth term of office, in case you were interested), making Reagan look boring by comparison.
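To put those two elections on the same scale, multiply turnout by the winner’s share of the vote to get the fraction of all eligible voters who actively backed the winner (a back-of-the-envelope sum using the figures above, nothing more rigorous):

$$0.90 \times 0.54 \approx 0.49$$

Nearly half of all eligible Venezuelans voted for Chavez; compare that to the 29% who voted for Reagan.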

However, another, more interesting (hence why I’m talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, and perhaps one or two others who may be standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you’re trying to avoid the ire of the rabidly aggressive ‘NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS’ brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people, and given the number of different issues they have views on that mean something to you, the chance of any one of them following your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: that when the US Declaration of Independence was first drafted nearly 250 years ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost what it likes (I said ALMOST) and still be confident that a significant proportion of the country will vote for them; not only that, but their unpopular decisions can often be ‘balanced out’ by more popular, mass-appeal ones, rather than their every decision being the direct will of the people.

One solution would be to have a system more akin to Greek democracy, where every issue is answered by a referendum which the government must obey. However, this presents just as many problems as it answers; referendums are very expensive and time-consuming to set up and run, and if they became commonplace they could further enhance the existing issue of voter apathy. Only the most actively political would vote in every one, returning the real power to the hands of a relative few who, unlike previously, haven’t been voted in. Perhaps the most pressing issue with this solution, though, is that it renders the role of MPs, representatives, senators and even Prime Ministers & Presidents rather pointless. What is the point of our society choosing those who really care about the good of their country, who have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce their role to that of administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we’ve got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if it’s simply because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily always what’s best for the country as a whole. Take Greece at the moment; massive protests are (or at least were; I know everyone’s still pissed off about it) underway due to the austerity measures imposed by the government, because of the crippling economic suffering that is sure to result. However, the politicians know that such measures are necessary and are refusing to budge on the issue- desperate times call for difficult decisions (OK, I know there were elections that almost entirely centred on this decision and that sided with austerity, but shush- you’re ruining my argument). To pick another example, President Obama (like several Democrat candidates before him) has met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, this is something he believes very strongly in, and he has finally managed to get it through Congress; if he wins the election later this year, we’ll see how well it gets executed.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protesting in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don’t know- that whole ‘internet petitions get debated in Parliament’ thing the British government recently imported from Switzerland is a nice idea, but, just like more traditional forms of protest, it gives those in power no genuine obligation to change anything. If I had a solution, I’d probably be running for government myself (which is one option that definitely works- just don’t all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

Living for… when, exactly?

When we are young, we get a lot of advice and rules shoved down our throats in a seemingly endless stream of dos and don’ts. “Do eat your greens”, “Don’t spend too much time watching TV”, “Get your fingers away from your nose” and, an old personal favourite, “Keep your elbows off the table”. There are some schools of psychology which claim that this militant enforcement of rules, with no leeway or grey area, may be responsible for some of our more rebellious behaviour in later life and, particularly, the teenage years, but I won’t delve into that now.

But there is one piece of advice, very broadly applied in a variety of contexts, in fact more of a general message than a rule, that is of particular interest to me. Throughout our lives, from the cradle right into adulthood, we are encouraged to take time over our decisions, to make only sensible choices, to plan ahead and think of the consequences, living for long-term satisfaction rather than short-term thrills. This takes the form of a myriad of bits of advice like ‘save, don’t spend’ or ‘don’t eat all that chocolate at once’ (perhaps the most readily disobeyed of all parental instructions), but the general message remains the same: make the sensible, analytical decision.

The reason that this advice is so interesting is because when we hit adult life, many of us will encounter another, entirely contradictory school of thought that runs totally counter to the idea of sensible analysis- the idea of ‘living for the moment’. The basic viewpoint goes along the lines of ‘We only have one short life that could end tomorrow, so enjoy it as much as you can whilst you can. Take risks, make the mad decisions, go for the off-chances, try out as much as you can, and try to live your life in the moment, thinking of yourself and the here & now rather than worrying about what’s going to happen 20 years down the line’.

This is a very compelling viewpoint, particularly to the fun-centric outlook of the early-to-mid-twenties age bracket who most commonly receive and promote this way of life, for a host of reasons. Firstly, it offers a way of living in which very little can ever be considered a mistake, only an attempt at something new that didn’t come off. Secondly, its practice generates immediate and tangible results, rather than the slower, more boring, long-term gains that a ‘sensible life’ may bring you, giving it an immediate association with living the good life. But, most importantly, following this life path is great fun, and leads you to the moments that make life truly special. Someone I know has often quoted their greatest ever regret as, when seriously strapped for cash, taking the sensible fiscal decision and not forking out to go to a Queen concert. Freddie Mercury died shortly afterwards, and this hardcore Queen fan never got to see them live. There is a similar and oft-quoted argument for the huge expense of the space program: ‘Across the galaxy there may be hundreds of dead civilizations, all of whom made the sensible economic choice to not pursue space exploration- who will only be discovered by whichever race made the irrational decision’. In short, sensible decisions may make your life look good to an accountant, but might not make it seem that special or worthwhile.

On the other hand, this does not make ‘living for the moment’ an especially good life choice either- there’s a very good reason why your parents wanted you to be sensible. A ‘live for the future’ lifestyle is far more likely to reap long-term rewards in terms of salary and societal rank, and plans laid with the right degree of patience and care are invariably more successful, whilst a constant, ceaseless focus on satisfying the urges of the moment is only ever going to end in disaster. This was perhaps best demonstrated in the episode of Family Guy entitled “Brian Sings and Swings”, in which, following a near-death experience, Brian is inspired by the ‘live for today’ lifestyle of Frank Sinatra Jr. For him, this takes the form of singing with Sinatra (and Stewie) every night, and drinking heavily both before & during performances, quickly resulting in drunken shows, throwing up into the toilet, losing a baby and, eventually, the gutter. Clearly, simply living for the now with no consideration for future happiness will very quickly leave you broke, out of a job, possibly homeless and with a monumental hangover. Not only that, but such a heavy focus on the short term has been blamed for a whole host of unsavoury side effects, ranging from the ‘plastic’ consumer culture of the modern world and a lack of patience between people to the global economic meltdown, the latter of which could almost certainly have been prevented (and cleared up a bit quicker) had the world’s banks been a little more concerned with their long-term future and a little less with the size of their profit margin.

Clearly then, this is not a clear-cut balance between a right and wrong way of doing things- for one thing everybody’s priorities will be different, but for another neither way of life makes perfect sense without some degree of compromise. Perhaps this is in and of itself a life lesson- that nothing is ever quite fixed, that there are always shades of grey, and that compromise is sure to permeate every facet of our existence. Living for the moment is costly in all regards and potentially catastrophic, whilst living for the distant future is boring and makes life devoid of real value, neither of which is an ideal way to be. Perhaps the best solution is to aim for somewhere in the middle; don’t live for now, don’t live for the indeterminate future, but perhaps live for… this time next week?

I am away on holiday for the next week, so posts should resume on the Monday after next. To tide you over until then, I leave you with a recommendation: YouTube ‘Crapshots’. Find a spare hour or two. Watch the lot. Giggle.

Some things are just unforgettable

What makes an amazing moment? What is it that turns an ordinary or mundane event into something special, something great, something memorable, something that will stick in the mind long after countless other memories have faded, and which will be able to conjure up emotions that, for years and years to come, will send thrills of excitement shivering down your spine? What, precisely, is it that makes something unforgettable?

Is it the event itself? Sometimes, yes, that could be enough. Every so often there are moments so amazing, so surprising, so out of this world and different, that they are burned into one’s soul for evermore. The feat of athletic ability and genius, the trick of skill that just seems completely impossible, the speech or book whose mere words can force themselves through the rigid exterior of the mind and imprint themselves permanently into the soft, pliable core of the soul itself. But… are these moments truly unforgettable? At the time, they may seem so, and for a while afterwards they may become something of a mini-obsession- telling all your mates about it, linking it on Facebook or Twitter- but will these moments continue to inspire and delight however many years from now? On their own… I don’t think so.

Is it the context? To take a favourite example, Jonny Wilkinson’s drop goal to win the 2003 Rugby World Cup for England. The clock was into the final seconds of the second half of extra time, Jonny was the nation’s golden boy, beloved by all, it was against old rivals Australia, in Australia, with the home media having slaughtered England in the previous few weeks. England had been building and building for this moment for four long, hard years, and it all came down to one kick (his speciality), by one man, with the hopes and fears of the entire rugby world on his shoulders… if that context wasn’t special, then I don’t know what was. This is but one example of a moment made by context- there are countless others. The young Chinese man who stood up to the tank in Tiananmen Square is one, the Live Aid concert another. But… is it everything? Is a moment being poignant on its own enough to make it affix itself in your memory? Or, to come at it from another direction, is a moment excluded from being special simply by virtue of not being worth anything major? Just because something is done for its own sake, does that mean it can’t be special? Once again, I don’t think so.

So… what is it then, this magic ingredient, what is needed to make a moment shine? For an answer, I am going to resort to a case study (aka, an anecdote). A few of my mates are in a band (genre-wise somewhere near the heavy end of Muse), and there is one particular gig that they have now done two years in a row. I should know- I was at both of them. Both times, the crowd was small (around 70 people), and the venue was the same. Last year, the event as a whole was a great laugh- a few of the bands were received a bit coolly, but several others had the crowd going mental- joke-moshing, pressing against the barrier, and generally getting really into the music. My mates’ band was one of the well-received ones, and their set would have been one of the highlights of the night, if the headline act hadn’t blown everyone else completely out of the water.

This year, however, things were a little different. I can personally attest that, in the intervening 12 months, they had improved massively as a band- the singing was better and more coherent, the music itself was flawless, and they had even gained in confidence and charisma on stage. The music was infinitely better, but the actual set… lacked something. Through no fault of the band, that moment just wasn’t as special as it had been a year ago, and the evening as a whole was actually pretty forgettable. And the difference between the two events? In a word: atmosphere.

The previous year, the headline act had been a… well, I don’t know enough about music to genre them, but suffice it to say it was on the heavier end of the spectrum, and as such the crowd were fairly wired up generally, and especially for anything involving heavy guitar-playing. This year, however, the headliners were acoustic in nature- while their music was far from bad, it didn’t exactly inspire surges of emotion, especially in such a small crowd, and this was reflected in the mood and preferences of those present. Thus, the whole night just did not have the same atmosphere to it, and just didn’t feel as special (there were other reasons as well, but the point still stands- the lack of atmosphere prevented the moment from being special).

This, to me, is evidence of my point- that, to make a moment special, all that is required is for the atmosphere surrounding it, wherever you are experiencing it, to be special, because it is the atmosphere of a moment that enables it to bypass the mind and hit home straight at the emotional core. There are countless ways of giving a moment the required atmosphere- appropriate music can often do the trick, as can the context of the build-up to it (hence why context itself can have such a big impact), or simply the stakes and tension that the moment inspires. However it is created though, what it means is simple- to make the most out of a moment, go out of your way to make sure the atmosphere you experience it in is the best it possibly can be.