Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much faith people are capable of investing in a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we compare them to what we think of the brand and the label, what reputation the product has and what it is particularly good at, which we think best suits our social standing and how others will judge our use of it. And a good thing too, from many companies’ perspective; otherwise the amount of business they do would be slashed. There are many companies whose success can be put down almost entirely to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, to typecast Apple as a brand-built company is a harsh judgement; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to be a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that; I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores, anyway) that, once the rest of the market gets hold of an idea, Apple’s version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely because iTunes redefined the music industry before everyone else and remains competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products with a few things they do really, really well, which are basically capable of doing everything else. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them: a nice company with a good history, but one that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the biggest private company in the world: bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company, worth more than every single one (excluding state-owned companies such as Saudi Aramco, which is estimated to be worth around 3 trillion dollars by dealing in Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first: the Apple II was the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first modern smartphone (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these rushed products are usually rubbish and make the Apple version shine by comparison. It also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off the idea and moved on to something else. Innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From its earliest infancy, Apple’s fanbase have dubbed themselves the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that thousands of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special and individual, not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and longevity of its delivery. This image has been present in Apple’s advertising from the very beginning, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

An Opera Possessed

My last post left the story of JRR Tolkien immediately after the writing of his first bestseller: the rather charming, lighthearted, almost fairy story of a tale that was The Hobbit. This was a major success, and not just among the ‘children aged between 6 and 12’ demographic identified by young Rayner Unwin; adults lapped up Tolkien’s work too, and his publishers Allen & Unwin were positively rubbing their hands in glee. Naturally, they requested a sequel, a request to which Tolkien’s attitude appears to have been along the lines of ‘challenge accepted’.

Even allowing for the rigours of holding down another job, and even accounting for the phenomenal length of the finished product, the writing of a book is a process that takes a few months for a professional writer (Dame Barbara Cartland once released 25 books in the space of a year, but that’s another story), and perhaps a year or two for an amateur like Tolkien. He started writing the book in December 1937, and it was finally published 18 years later in 1955.

This was partly a reflection of the difficulties Tolkien had in publishing his work (more on that later), but it also reflects the measured, meticulous and very serious approach Tolkien took to his writing. At least three times he started his story from scratch, each time going in a completely different direction with an entirely different plot. His first effort, for instance, was to chronicle another adventure of his protagonist Bilbo from The Hobbit, making it a direct sequel in both a literal and a spiritual sense. However, he then remembered the ring Bilbo found beneath the mountains, won (or stolen, depending on your point of view) from the creature Gollum, and the strange power it held; not just invisibility, which was Bilbo’s main use for it, but the hypnotic hold it had over Gollum (he even subsequently rewrote that scene for The Hobbit’s second edition to emphasise that effect). He decided that the strange power of the ring was a more natural direction to follow, and so he wrote about that instead.

Progress was slow. Tolkien went months at a time without working on the book, making only occasional, sporadic yet highly focused bouts of progress. Huge amounts were cross-referenced with or borrowed from his earlier writings concerning the mythology, history and background of Middle Earth, Tolkien constantly trying to make his mythic world feel and, in a sense, be as real as possible; but it was mainly due to the influence of his son Christopher, to whom Tolkien would send chapters whilst he was away fighting the Second World War in his father’s native South Africa, that the book ever got finished at all. When it eventually was, Tolkien had been working on the story of Bilbo’s heir Frodo and his quest to destroy the Ring of Power for over 12 years. His final work was over 1000 pages long, spread across six ‘books’, as well as being laden with appendices to explain and offer background information, and he called it The Lord of The Rings (in reference to his overarching antagonist, the Dark Lord Sauron).

A similar story had, incidentally, been attempted once before; Der Ring des Nibelungen is an opera (well, four operas) written by German composer Richard Wagner during the 19th century, traditionally performed over the course of four consecutive nights (yeah, you have to be pretty committed to sit through all of that) and also known as ‘The Ring Cycle’; it’s where ‘Ride of The Valkyries’ comes from. The opera follows the story of a ring made from the traditionally evil Rhinegold (gold panned from the Rhine river), and the trail of death, chaos and destruction it leaves in its wake between its forging and destruction. Many commentators have pointed out the close similarities between the two, and as a keen follower of Germanic mythology Tolkien certainly knew the story, but he rubbished any suggestion that he had borrowed from it, saying “Both rings were round, and there the resemblance ceases”. You can probably work out my approximate personal opinion from the title of this post, although I wouldn’t read too much into it.

Even once his epic was finished, the problems weren’t over. He quarrelled with Allen & Unwin over his desire to release LOTR in one volume, along with his still-incomplete Silmarillion (that he wasn’t allowed to may explain all the appendices). He then turned to Collins, but they claimed his book was in urgent need of an editor and a licence to cut (my words, not theirs, I should add). Many other people have voiced this complaint since, but Tolkien refused, and demanded that Collins publish by 1952. This they failed to do, so Tolkien went back to Allen & Unwin and eventually agreed to publish his book in three parts: The Fellowship of The Ring, The Two Towers, and The Return of The King (a title Tolkien, incidentally, detested because it told you how the book ended).

Still, the book was out now, and the critics… weren’t that enthusiastic. Well, some of them were, certainly, but the book has always had its detractors among the world of literature, and that was most certainly the case upon its release. The New York Times criticised Tolkien’s academic approach, saying he had “formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself”, whilst others claimed that it, and its characters in particular, lacked depth. Even Hugo Dyson, one of Tolkien’s close friends and a member of his own literary group, spent public readings of the book lying on a sofa shouting complaints along the lines of “Oh God, not another elf!”. Unlike The Hobbit, which had been in many ways a light-hearted children’s story, The Lord of The Rings was darker and more grown up, dealing with themes of death, power and evil and written in a far more adult style; this could be said to have exposed it to more serious critics and a harder gaze than its predecessor, causing some to be put off by it (a problem that wasn’t helped by the sheer size of the thing).

However, I personally am part of the other crowd: those who have voiced their opinions in nearly 500 five-star reviews on Amazon (although one should never read too much into such figures) and who agree with the likes of CS Lewis, The Sunday Telegraph and The Sunday Times of the time that “Here is a book that will break your heart”, that it is “among the greatest works of imaginative fiction of the twentieth century” and that “the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them”. These are the people who have shown the truth in the review of the New York Herald Tribune: that Tolkien’s masterpiece was and is “destined to outlast our time”.

But… what exactly is it that makes Tolkien’s epic so special, such a fixture; why, years after its publication as the first genuinely great work of fantasy, is it still widely regarded as the finest work the genre has ever produced? I could probably write an entire book just to try and answer that question (and several people probably have done), but to me it is because Tolkien understood, absolutely perfectly and fundamentally, exactly what he was trying to write. Many modern fantasy novels try to be uber-fantastical, or try to base themselves around an idea or a concept, in some way trying to find their own level of reality on which their world can exist, and they often find themselves in a sort of awkward middle ground; but Tolkien never suffered that problem, because he knew that, quite simply, he was writing a myth, and he knew exactly how that was done. Terry Pratchett may have mastered comedic fantasy, and George RR Martin may be the king of political fantasy, but only JRR Tolkien has, in recent times, been able to harness the awesome power of the first source of story: the legend, told around the campfire, of the hero and the villain, of the character defined by their virtues over their flaws, of the purest, rawest adventure in the pursuit of saving what is good and true in this world. These are the stories written to outlast the generations, and Tolkien’s mastery of them is, to me, the secret to his masterpiece.

Drunken Science

In my last post, I talked about the societal impact of alcohol and its place in our everyday culture; today, however, my inner nerd has taken it upon himself to get stuck into the real meat of the question of alcohol, the chemistry and biology of it all, and how all the science fits together.

To a scientist, the word ‘alcohol’ does not refer to a specific substance at all, but rather to a family of chemical compounds containing an oxygen and a hydrogen atom bonded to one another (known as an OH group) on the end of a chain of carbon atoms. Different members of the family (or ‘homologous series’, to give it its proper name) have different numbers of carbon atoms, have slightly different physical properties (such as melting point), and react chemically to form slightly different compounds. The stuff we drink is the one with two carbon atoms in its chain, and is technically known as ethanol.
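
For the fellow nerds, the pattern behind the homologous series is simple enough to put in a few lines of code: a straight-chain alcohol with n carbons has the general formula CnH(2n+1)OH. The snippet below is a toy Python sketch of my own; the post itself only names ethanol, so the other standard chemical names are my additions.

```python
# Toy sketch of the alcohol homologous series: a chain of n carbon
# atoms with an OH group on the end, general formula CnH(2n+1)OH.
def alcohol_formula(n):
    """Molecular formula of the straight-chain alcohol with n carbons."""
    carbons = "C" if n == 1 else f"C{n}"
    return f"{carbons}H{2 * n + 1}OH"

# The first few members of the family, with their standard names
for n, name in enumerate(["methanol", "ethanol", "propanol", "butanol"], start=1):
    print(f"{name}: {alcohol_formula(n)}")
# ethanol comes out as C2H5OH: the stuff we actually drink
```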

There are a few things about ethanol that make it special stuff to us humans, and all of them come down to chemical reactions and biological interactions. The first is how it forms: there are many different types of sugar found in nature (fructose and sucrose are two common examples; the ‘-ose’ ending is what denotes them as sugars), but one of the most common is glucose, with six carbon atoms. This is the substance our body converts starch and other sugars into in order to use for energy or store as glycogen. As such, many biological systems are primed to convert other sugars into glucose, and it just so happens that when glucose breaks down in the presence of the right enzymes, it forms carbon dioxide and an alcohol; ethanol, to be precise, in a process known to everyone as fermentation (a scientist would split it into glycolysis followed by fermentation proper, but everyone else just calls the whole thing fermentation).
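
For anyone who wants to check the bookkeeping, the overall reaction is C6H12O6 → 2 C2H5OH + 2 CO2: one glucose molecule splits into two molecules of ethanol and two of carbon dioxide. Here’s a quick Python sanity check (a sketch of my own, not anything from a chemistry package) that every atom in the glucose ends up in the products:

```python
from collections import Counter

# Atom counts for each molecule in the fermentation reaction
glucose = Counter({"C": 6, "H": 12, "O": 6})   # C6H12O6
ethanol = Counter({"C": 2, "H": 6, "O": 1})    # C2H5OH
co2     = Counter({"C": 1, "O": 2})            # CO2

# Products: 2 ethanol + 2 carbon dioxide
products = Counter()
for molecule, coefficient in [(ethanol, 2), (co2, 2)]:
    for atom, count in molecule.items():
        products[atom] += count * coefficient

# The reaction balances: every atom of the glucose is accounted for
print(products == glucose)  # True
```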

Yeast performs this process in order to respire (ie produce energy) anaerobically (in the absence of oxygen), leading to the two most common cases where this reaction occurs. The first we know as brewing, in which an anaerobic atmosphere is deliberately maintained to make alcohol; the other occurs when baking bread. The yeast we put in the bread causes the sugar (ie glucose) in it to produce carbon dioxide, which is what causes the bread to rise, since it has been filled with gas, whilst the ethanol tends to boil off in the heat of the baking process. For industrial purposes, ethanol is made by hydrating (reacting with water) an oil by-product called ethene, but the product isn’t generally something you’d want to drink.

But anyway, back to the booze itself, and this time what happens upon its entry into the body. Exactly why alcohol acts as a depressant and intoxicant (if that’s a proper word) is down to a very complex interaction with various parts and receptors of the brain that I am not nearly intelligent enough to understand, let alone explain. However, what I can explain is what happens when the body gets round to breaking the alcohol down and getting rid of the stuff. This takes place in the liver, an amazing organ that performs hundreds of jobs within the body and contains a vast repertoire of enzymes. One of these is known as alcohol dehydrogenase, which has the task of oxidising the alcohol (not a simple task, and one impossible without enzymes) into something the body can get rid of. However, the ethanol we drink is what is known as a primary alcohol (meaning the OH group is on the end of the carbon chain), and this causes it to oxidise in two stages, only the first of which can be done using alcohol dehydrogenase. This first stage converts the alcohol into an aldehyde (with an oxygen chemically double-bonded to the carbon where the OH group was), which in the case of ethanol is called acetaldehyde (or ethanal). This molecule cannot be broken down straight away, and instead gets itself lodged in the body’s tissues in such a way (thanks to its shape) as to produce mild toxins, activate our immune system and make us feel generally lousy. This is also known as having a hangover, and it only ends when the body is able to complete the second stage of the oxidation process and convert the acetaldehyde into acetic acid, which the body can get rid of relatively easily. Acetic acid is commonly known as the active ingredient in vinegar, which is why alcoholics smell so bad and are often said to be ‘pickled’.
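
To lay the two stages out explicitly, here’s a toy sketch; note that the second enzyme, aldehyde dehydrogenase, isn’t named in the post, so that detail is my addition.

```python
# The two-stage oxidation chain described above. Stage one is done by
# alcohol dehydrogenase; stage two, which clears the hangover-causing
# acetaldehyde, is done by aldehyde dehydrogenase (my addition; the
# post itself doesn't name the second enzyme).
CHAIN = [
    ("ethanol", "acetaldehyde", "alcohol dehydrogenase"),
    ("acetaldehyde", "acetic acid", "aldehyde dehydrogenase"),
]

def breakdown(compound):
    """Follow the oxidation chain until nothing further can be oxidised."""
    steps = []
    for reactant, product, enzyme in CHAIN:
        if reactant == compound:
            steps.append(f"{reactant} --[{enzyme}]--> {product}")
            compound = product
    return compound, steps

end_product, steps = breakdown("ethanol")
for step in steps:
    print(step)
print("excreted as:", end_product)
```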

This process occurs in the same way when other alcohols enter the body, but ethanol is unique in how harmless (relatively speaking) its aldehyde is. Methanol, for example, can also be oxidised by alcohol dehydrogenase, but the aldehyde it produces (officially called methanal) is commonly known as formaldehyde: a highly toxic substance, used in preservation work and as a disinfectant, that will quickly poison the body. It is for this reason that methanol is present in the fuel commonly known as ‘meths’; ethanol actually produces more energy per gram and makes up 90% of the fuel by volume, but since meths is cheaper than most alcoholic drinks, the toxic methanol is added to prevent it being drunk by severely desperate alcoholics. Not that it stops many of them; methanol poisoning is a leading cause of death among the homeless.

Homeless people were also responsible for a major discovery in the field of alcohol research, concerning the causes of alcoholism. For many years it was thought that alcoholics were addicts purely mentally rather than biologically, and had just ‘let it get to them’, but some years ago a young student (I believe she was Canadian, but certainty of that fact and her name both escape me) was looking for some fresh cadavers for her PhD research. She went to the police and asked if she could use the bodies of the various dead homeless people they found on their morning beats, and when she started dissecting them she noticed signs of a compound that was known to be linked to heroin addiction. She mentioned to a friend that all these people appeared to be on heroin, but her friend pointed out that these people barely had enough to buy drink, let alone something as expensive as heroin. This young doctor-to-be realised she might be onto something, changed the focus of her research to studying how alcohol was broken down by different bodies, and discovered something quite astonishing. Inside serious alcoholics, ethanol was being broken down into this substance previously only linked to heroin addiction, leading her to believe that for some unlucky people, the behaviour of their bodies made alcohol as addictive to them as heroin was to others. Whilst this research has by no means settled the issue, it did demonstrate two important facts: firstly, that whilst alcoholism certainly has some links to mental issues, it is also fundamentally biological and genetic by nature, and cannot be put down solely as the fault of the victim’s brain. Secondly, it ‘sciencified’ (my apologies to grammar nazis everywhere for making that word up) a fact already known by many reformed drinkers: that when a former alcoholic stops drinking, they can never go back. Not even one drink. There can be no ‘just having one’, or drinking socially with friends, because if one more drink hits their body, deprived for so long, there’s a very good chance it could kill them.

Still, that’s not a reason to get totally down about alcohol, for two very good reasons. The first of these comes from some (admittedly rather spurious) research suggesting that ‘addictive personalities’, including alcoholics, are far more likely to do well in life, have good jobs and overall succeed; alcoholics are, by nature, present at the top as well as the bottom of our society. The other concerns the one bit of science I haven’t tried to explain here: your body is remarkably good at dealing with alcohol, and we all know it can make us feel better, so if only for your mental health a little drink now and then isn’t an all-bad thing after all. And anyway, it makes for some killer YouTube videos…

NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist; they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ from spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry, before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, and then throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than the sake of its own existence.

This is not to say that mathematics is not a useful field; far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy), and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something; a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (ie square rooting the square of a number will yield the original number). However, a negative number squared makes a positive one, so no number squared can make a negative, and hence there is no such thing as the square root of a negative number, such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers; but maths decided that it didn’t want to not be able to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
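
Arbitrary or not, i is baked into most programming languages these days; in Python it’s written 1j, and the defining property (squaring it gives -1) is a one-liner to check:

```python
import cmath

# The defining property of i: its square is -1
i = 1j
print(i * i)            # (-1+0j)

# Which makes the square root of a negative number perfectly
# well-defined, as long as you're happy with a complex answer
print(cmath.sqrt(-1))   # 1j
print(cmath.sqrt(-4))   # 2j
```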

Admittedly, i has turned out to be useful. When considering electromagnetic forces, physicists generally assign the electric and magnetic components real and imaginary quantities in order to keep track of them separately, but its main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s arsenal: something for them to play with, to slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which to push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world: Fermat’s last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same was true for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of this fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, around a decade to complete. His proof involved showing that the terms involved in the theorem could be expressed in the form of an incredibly weird equation that doesn’t exist in the real world, and that all equations of this type had a counterpart equation of an equally irrelevant type. However, since the ‘Fermat equation’ was too weird to exist in the other format, it could not logically be true.
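
You can get a feel for why the problem is so tantalising with a brute-force search (a toy sketch of my own, obviously nothing like Wiles’ actual method): for squares, whole-number solutions fall out everywhere, but for cubes an exhaustive hunt up to any modest bound finds nothing at all.

```python
from itertools import combinations_with_replacement

def solutions(power, limit):
    """All whole-number (a, b, c) with a**power + b**power == c**power,
    searching every pair a <= b up to the given limit."""
    found = []
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        total = a**power + b**power
        c = round(total ** (1 / power))  # candidate root, checked exactly below
        if c**power == total:
            found.append((a, b, c))
    return found

print(solutions(2, 20))   # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(solutions(3, 200))  # nothing, just as Fermat claimed: []
```

Of course, no finite search proves anything for all whole numbers; that gap is exactly what took 358 years and Wiles’ decade of work to close.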

To a mathematician, this was the holy grail: not only did it finally lay to rest an age-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, the exercise contributed nothing whatsoever: apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect, then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to claim that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things: among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field whose major prize-winners are younger than in any other branch of STEM, apparently full of those likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that is all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

Bouncing horses

I have, over recent months, built up a rule against posts about YouTube videos, partly on the grounds that it’s bloody hard to make a full post out of them, but also because there are most certainly a hell of a lot of good ones out there that I haven’t heard of, so any discussion of them is sure to be incomplete and biased, which I try to avoid wherever possible. Normally, this blog also rarely delves into what might even vaguely be dubbed ‘current affairs’, but since it regularly does discuss the weird and wonderful world of the internet and its occasional forays into the real world, I thought I might make an exception; today, I’m going to be talking about Gangnam Style.

Now officially the most liked video in the long and multi-faceted history of YouTube (taking over from the previous record holder and a personal favourite, LMFAO’s Party Rock Anthem), this music video by Korean rapper and pop star PSY was released over two and a half months ago, and for the majority of that time it lay in some obscure and foreign corner of the internet. Then, in that strange way that random videos, memes and general bits and pieces are wont to do online, it suddenly shot to prominence thanks to the web collectively pissing itself over the sight of a chubby Korean bloke in sunglasses doing ‘the horse riding dance’. Quite how this was ever discovered by some casual YouTube-surfer is something of a mystery to me, given that said dance doesn’t even start until a good minute and a half in, but the fact remains that it was, and that it is now absolutely bloody everywhere. Only the other day it became the first ever Korean single to reach no. 1 in the UK charts, despite never having been translated from its original language, and it has even prompted a dance-off between rival Thai gangs prior to a gunfight. Seriously.

Not that it has met with universal appeal, though. I’m honestly surprised that more critics didn’t get up in their artistic arms at the sheer ridiculousness of it, and at the apparent lack of reason for it to enjoy the degree of success it has (although quite a few probably got that out of their system after Call Me Maybe), but several did nonetheless. Some have called it ‘generic’ in musical terms, others have found its general ridiculousness more tiresome and annoying than fun, and one Australian journalist commented that the song “makes you wonder if you have accidentally taken someone else’s medication”. That such criticism has been fairly limited can be partly attributed to the fact that the song itself is actually intended as a parody. Gangnam is a classy, fashionable district of the South Korean capital Seoul (PSY has likened it to Beverly Hills in California), and ‘Gangnam style’ is a Korean phrase referring to the kind of lavish, upmarket (if slightly pretentious) lifestyle of those who live there; or, more specifically, to the kind of posers and hipsters who claim to affect ‘the Gangnam style’. The song’s self-parody comes from the contrast between PSY’s lyrics, written from the first-person perspective of such a poser, and his deliberately ridiculous dress and dance style.

Such an act of deliberate self-parody has certainly helped to win plaudits from serious music critics, who have proved surprisingly good-humoured once told that the ridiculousness is deliberate and therefore actually funny- however, it’s almost certainly not the reason for the video’s 300 million-plus YouTube views, most of which surely come from people who’ve never heard of Gangnam, and certainly have no idea of the people PSY is mocking. In fact, several different theories have been proposed as to why its popularity has soared quite so violently.

Most point to PSY’s very internet-friendly position on his video’s copyright. The Guardian claims that PSY has in fact waived his copyright to the video; what is certain is that he has declined to take any legal action against the dozens of parodies and alternate versions of it, allowing others to spread the word in their own unique ways and giving it enormous potential to spread, and spread far. These parodies have been many and varied in content, author and style, ranging from the North Korean government’s version aimed at satirising the South Korean presidential candidate Park Geun-hye (breaking their own world record for most ridiculous entry into a political pissing contest, especially given that it mocks her supposed devotion to an autocratic system of government- one, moreover, that ended over 30 years ago), to the apparently borderline racist “Jewish Style” (neither of which I have watched, so cannot comment on). One parody has even sparked a quite significant legal case, with 14 California lifeguards being fired for filming, dancing in, or even appearing in the background of, their parody video “Lifeguard Style”; an investigation has since been launched by the City Council in response to the thousands of complaints and suggestions, including one from PSY himself, that the local government were taking themselves somewhat too seriously.

However, by far the most plausible reason for the mammoth success of the video is also the simplest: people simply find it funny as hell. Yes, it helps a lot that the joke was entirely intended (let’s be honest, he probably couldn’t have come up with quite such inspired lunacy by accident), and yes, it helps how easily it has been able to spread, but to be honest the internet is almost always able to overcome such petty restrictions when it finds something it likes. Sometimes, giggling ridiculousness is just plain funny, and sometimes I can’t come up with a proper conclusion to these posts.

P.S. I forgot to mention it at the time, but my last post was my 100th ever published on this little bloggy corner of the internet. Weird to think it’s been going for over 9 months already. And to anyone who’s ever stumbled across it: thank you, for making me feel a little less alone.

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that no resident of the web can hope to avoid for any great period of time, and quite often doesn’t want to- the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed into a runaway success and something of a cultural phenomenon. By the end of 2011, before its full version had even been released, Minecraft had registered 4 million purchases and four times that many registered users- not bad for a game that has never advertised itself, has spread semi-virally among nerdy gamers throughout its mere three-year history, and was made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some quite startlingly good music videos (check out the work of Captain Sparklez if you haven’t already), litter YouTube, and many of the game’s features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where hundreds and thousands of games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant, and so special?

Many, upon being asked this question, tend to extol the virtues of the game’s indie nature. Created entirely without funding, as an experiment in gaming rather than a profit-making venture, Minecraft is firmly rooted in the humble sphere of independent gaming, and it shows. One obvious feature is the game’s inherent simplicity- initially offering only the ability to wander around and place and destroy blocks, the controls are mainly (although far from entirely) confined to move and ‘use’, whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, ‘blocky’ nature of the game’s graphics both simplifies production and creates an iconic, retro aesthetic that makes it memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming, partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, allowing indie gamers to enjoy a degree of elitism in being good enough to work it out for themselves. This ties in with the very nature of the game- another criticism used to be (and, to an extent, still is, even with the addition of the Enderdragon as a final win objective) that the game appeared to be largely devoid of point, existing only for its own sake. This is entirely true- whether you view that as a bonus or a detriment is entirely your own opinion- and this idea of an unfamiliar, experimental game structure is another feature common, in one form or another, to a lot of indie games.

However, to me these do not seem entirely worthy of the name ‘answers’ to the question of Minecraft’s phenomenal success, because they do not adequately explain why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a quite disturbing, yet compelling, in-game experience, generating almost as much intrigue and puzzlement from a set of game mechanics simpler even than Minecraft’s. It has also received critical acclaim often far in excess of Minecraft’s (which has drawn a positive, but not wildly amazed, response from critics), and yet is still known only to a relative few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who’s ever played it is likely to face- but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, yet has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin’s Creed mod (the first example that sprang to mind).

So… why has Minecraft been so successful? Well, I’m going to stick my neck out here, but to my mind it’s because it doesn’t play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple & basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game, with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it’s similar to Facebook, in that it takes a simple principle and applies it on the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the USS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.

Fist Pumping

Anyone see the Wimbledon final yesterday? If not, you missed out- great game of tennis, really competitive for the first two sets, and Roger Federer showing just why he is the greatest player of all time towards the end. Tough for Andy Murray after a long, hard tournament, but he did himself proud and as they say: form is temporary, class is permanent. And Federer has some class.

However, the reason I bring this up is not to extol the virtues of a tennis match again (I think my post following Murray’s loss at the Australian Open was enough for that), but because of a feature that, whilst not tennis-specific, appears to be something like home turf for it- the fist pump.

It’s a universally-recognised (from my experience anyway) expression of victory- the clenched fist, raised a little with the bent elbow, used to celebrate each point won, each small victory. It’s an almost laughably recognisable pattern in a tennis match, for whilst the loser of a point will invariably let their hand go limp by their side, or alternatively vent his or her frustration, the winner will almost always change their grip on the racket, and raise one clenched fist in a quiet, individual expression of triumph- or go ape-shit mental in the case of set or match wins.

So then, where does this symbol come from? Why, across the world, is the raised, clenched fist used in arenas ranging from sport to propaganda to warfare as a symbol of victory, be they small or world-changing? What is it that lies behind the fist pump?

Let us first consider the act of a clenched fist itself. Try it now. Go on- clench your fist, hard, maintaining a strong grip. See the knuckles stand out, sense the muscles bulge, feel the forearm stiffen. Now, try to maintain that position. Keep up that strong grip for 30 seconds, a minute, maybe two. After a while, you should feel your grip begin to loosen, almost subconsciously. Try to keep it tight if you can, but soon your forearm will start to ache, grip fading and loosening. It’s OK, you can let go now, but you see the point- maintaining a strong grip is hard old work. Thus, showing a strong grip is symbolic of still having energy, strength to continue, a sign that you are not beaten yet and can still keep on going. This is further accompanied by having the fist in a raised, rather than slack, position, requiring that little bit more effort. Demonstrating this symbol to an opponent after any small victory is almost a way of rubbing their noses in it, a way of saying that whilst they have been humbled, the victor can still keep on going, and is not finished yet.

Then there is the symbolism of the fist as a weapon. Just about every weapon in human history, bar those in Wild Wild West and bad martial arts films, requires the hands to operate it, and our most basic ones (club, sword, mace, axe etc.) all require a strong grip around a handle to use effectively. The fist itself is also, of course, a weapon of sorts in its own right. Although martial artists have taken the concept a stage further, the very origins of human fighting and warfare lie in basic swinging at one another with fists- and it is always the closed fist, using the knuckles as the driving weapon, that is symbolic of true hand-to-hand fighting, despite the fact that the most famous martial arts move, the ‘karate chop’ (or knife-hand strike, to give it its true name), requires an open hand. Either way, the symbolic connection between the fist and weaponry & fighting means that the raised fist represents not only defiance- fighting back, standing tall and being strong against all the other side could throw at you (the form in which it was used extensively in old Soviet propaganda)- but also dominance, representing the victor’s power and control over their defeated foe, further adding to the whole ‘rubbing their noses in it’ symbolism.

And then there is the position of the fist. Whilst the fist can be and is held in a variety of positions, ranging from the full overhead to the low-down clench on an extended arm, it is invariably raised slightly when clenched in victory. The movement may only be of a few centimetres, but its significance should not be underestimated- at the very least it brings the arm into a bent position. A bent arm is the starting point for all punches and strikes, as it is very hard to get any sort of power from an already-extended arm, so the bending of the arm on the fist clench is once again a connection to the idea of the fist as a weapon. This is reinforced by the upwards motion being towards the face and upper body, as this is the principal target, and certainly the principal direction of movement (groin strikes excepted), in traditional fist fighting. Finally, we have the full lift, fists clenched and raised above the head in the moment of triumph. Here the symbolism is purely positional- the raised fists, especially when compared to the bent neck and hunched shoulders of the defeated party, make the victor seem bigger and more imposing, looming over his opponent and becoming overbearing and ‘above’ them.

The actual ‘pumping’ action of the fist pump, rarer than the unaccompanied clench, adds its own effect, although in this case it is less symbolism and more naked emotion on show- not only passion for the moment, but also raw aggression, letting one’s opponent know that you are not merely up for this, but well ready and prepared to front up and challenge them on every level. Such a display could perhaps be considered the preserve of the uncivilised and overemotional, whereas the subtlest, calmest men may content themselves with the tiniest grin and a quick clench, conjuring up centuries of basic symbolism in one tiny, almost insignificant, act of victory.

The Rich and the Failures

Modern culture loves its celebrities. For many a year, our obsessions have been largely focused upon those who spend their lives in the public eye- sportsmen and women, film and music stars, and anyone lucky and vacuous enough to persuade a TV network that they deserve a presenting contract. In recent years, however, the sphere of fame has spread outwards, incorporating some more niche fields- survival experts like Bear Grylls are one group to come under the spotlight, as are the multitude of chefs who have begun to work their way into the media. However, the group I wish to talk about are businessmen. With the success of shows like Dragon’s Den and The Apprentice, as well as the charisma of such business giants as Mark Zuckerberg and Bill Gates, a few people who were once known only to dry financiers are now public figures whom we all recognise.

One of the side-effects of this has, of course, been the publishing of autobiographies. It is almost a rite of passage for the modern celebrity- once you have been approached by a publisher and (usually) a ghostwriter to get your life down on paper, you know you’ve made it. In the case of businessmen, the target market for these books is people in awe of their way of life – the self-made riches, the fame and the standing – who wish to follow in their footsteps, and as such these autobiographies are basically long guides of business advice based around their authors’ own personal case studies. The books now filling this genre do not only come from the big TV megastars, however- many other people smart enough to spot a good bandwagon and rich enough to justify leaping onto it appear to be following the trend of publishing these ‘business manuals’, in an effort to make another quick buck to add to their own already lengthy tallies.

The advice they offer can be fairly predictable- don’t back down, doggedly push on when people give you crap, take risks and break the rules, spot opportunities and try to be the first to exploit them, etc. All of which is, I am sure, what they believe really took them to the top.

I, however, would add one more thing to this list- learn to recognise when you’re onto a loser. For whilst all this advice might work superbly for the handful of millionaires able to put their stories down, it could be said to have worked rather less well for the myriad people who lie broken and failed by the wayside after following exactly the same advice. You see, it is many of those exact same traits – a stubborn, almost arrogant, refusal to back down, a risk-taking, opportunistic personality, unshakeable, almost delusional, self-confidence – that characterise many of our society’s losers. The lonely drunk in the bar banging on about how ‘I could have made it y’know’ is one example, or the bloke who’s worked in the same office for 20 years and has very much his own ideas about why he has been repeatedly passed over for promotion. These people have never been able to let go, never been able to step outside the all-encompassing bubble of their own fantasy and recognise the harsh reality of their situation, and indeed of life itself. They are just as sure of themselves as Duncan Bannatyne, just as pugnacious as Alan Sugar, just as eager to spy an opportunity as Steve Jobs. But it’s the little things that separate them, and keep their salary in the thousands rather than the millions. Not just the business nous, but the ability to recognise a sure-fire winner from a dead horse, the ability to present oneself as driven rather than arrogant, to know who to trust and which side to pick, as well as the little slivers (and in some cases giant chunks) of luck that are behind every major success. And just as it is the drive and single-mindedness that can set a great man on his road to riches, so it can also be what holds back the hundreds of failures who try to follow in his footsteps- chasing dreams they are unable to escape.

I well recognise that I am in a fairly rubbish position from which to offer advice here, as I have always recognised that business, and in some ways success itself, is not my strong suit. Whilst I am not sure it would be entirely beyond me to create a good product, I am quite aware that my ability to market and sell such an item would not do it justice. In this respect I am born to be mediocre- whilst I have some skills, I don’t have the ambition or confidence to go for broke in an effort to hit the top. However, whilst this conservative approach does limit my chances of hitting the big time, it also allows me to stay grounded and satisfied with my position, and minimises the chance of any catastrophic failure in life.

I’m not entirely sure what lessons one can take from this idea. For anyone seeking to go for the stars, then all I can offer is good luck, and a warning to keep your head on your shoulders and a firm grip on reality. For everyone else… well, I suppose that the best way to put it is to say that there are two ways to seek success in your life. One is to work out exactly where you want to be, exactly how you want to be successful, and strive to achieve it. You may have to give up a lot, and it may take you a very, very long time, but if you genuinely have what it takes and are not deluding yourself, then that path is not closed off to you.

The other, some would say harder, yet arguably more rewarding way, is to learn how to be happy with who and what you are right now.