Misnomers

I am going to break two of my cardinal rules at once over the course of this post, for it is the first in the history of this blog that could adequately be described as a whinge. I have something of a personal hatred of these, on the principle that they never improve anybody’s life or the world in general, but I’m hoping that this one is at least well-meaning and not as hideously vitriolic as some ‘opinion pieces’ I have had the misfortune to read over the years.

So…

A little while ago, the BBC published an article concerning the arrest of a man suspected of being part of the hacking group Lulzsec, an organised and select offshoot of the infamous internet hacking advocates and ‘pressure group’ Anonymous. The FBI have accused him of involvement in a series of attacks on Sony last May & June, in which thousands of personal details from competition entries were published online. Lulzsec at the time made a statement to the effect of ‘we got all these details from one easy sting, so why do you trust them?’, which might have made the attack a case of trying to prove a point had the point not been directed at an electronics company, which made it kind of stupid. Had it been aimed at a government I might have understood, but to me this just looks like the internet doing what it does best- doing stuff simply for the fun of it. This is in fact the typical motive behind most Lulzsec activities, doing things ‘for teh lulz’, hence the first half of their name and the fact that their logo is a stick figure in typical meme style.

The BBC made reference to their name too in their coverage of the event, but since the journalist involved had clearly taken his information from a rather poorly-worded sentence of a Wikipedia article, he claimed that ‘lulz’ was a play on ‘lol’, aka laugh out loud. This is not, technically speaking, entirely wrong, but it is a bit like claiming the word ‘gay’ can now be used to mean happy in general conversation- something of an anachronism, albeit a very recent one. Lulz in the modern internet sense is used more to mean ‘laughs’ or ‘entertainment’, and ‘for teh lulz’ could even be translated as simply ‘for the hell of it’. As I say, the argument was not so much expressly wrong as it was revealing that this journalist was either not especially good at getting his point across or was dealing with slightly unfamiliar subject matter.

This is not the only example of the media getting things a little wrong when it comes to the internet. A few months ago, after a man was arrested for viciously abusing a celebrity (I forget who) on Twitter, he was dubbed a ‘troll’, a term that, according to the BBC article I read, denotes somebody who uses the internet to bully and abuse people (sorry for picking on the BBC when a lot of others do it too, but I read them more than most other news sources). However, any reasonably experienced denizen of the internet will be able to tell you that the word ‘troll’ originated from the activity known as ‘trolling’, etymologically thought to derive from fishing (from a similar root as ‘trawling’). The original term was used in the context of ‘trolling for newbies’, ie laying down an obvious feeder line that an old head would recognise as being both obvious and discussed to death, but that a newer face would respond to earnestly. Thus ‘newbies’ were fished for and identified, mostly for the amusement of the more experienced faces. From there, ‘trolling’ has come to mean making jokes or provocative comments for one’s own amusement and at the expense of others, and a ‘troll’ is somebody who trolls others. Whilst it is perhaps not the most noble of human activities, and some repeat offenders could definitely do with a bit more fresh air now and again, it is mostly harmless and definitely not to be taken altogether too seriously. What it is not is a synonym for internet abuse or even (as one source has reported it) ‘defac[ing] Internet tribute sites with the aim of causing grief to families’. That is just plain old despicable bullying, something that has no place on the internet or in the world in general, and dubbing casual humour-seekers as such just gives mostly alright people an unnecessarily bad name.

And here we get to the bone I wish to pick- that the media, as a rule, do not appear to understand the internet or its culture, and instead treat it almost like a child’s plaything, a small distraction whose society is far less important than its ability to spawn companies. There may be an element of fear involved, an intentional mistrust of the web and a desire to hold off embracing it as long as possible, for mainstream media is coming under heavy competition from the web and many have argued that the latter may soon kill the former altogether. That is as may be, but news organisations should be obliged to act with at least a modicum of neutrality and respectability, especially a service such as the BBC that does not depend on commercial funding anyway. It would perhaps not be too much to ask for a couple of organisations to hire an internet correspondent, to go with their food, technology, sports, science, environment, every-country-around-the-world, domestic, travel and weather ones, if only to allow issues concerning the internet to be conveyed accurately by someone who knows what he’s talking about. If it’s good enough for the rest of the world, then it’s surely good enough for the culture that has made mankind’s greatest invention what it is today.

OK, rant over, I’ll do something a little more normal next time out.

The Conquest of Air

Everybody in the USA, and in fact just about everyone across the world, has heard of Orville and Wilbur Wright. Two of the pioneers of aviation, when their experimental biplane Flyer made the first ever manned, powered, heavier-than-air flight on the morning of December 17, 1903, they finally realised one of man’s long-held dreams: control and mastery of air travel.

However, what is often puzzling when considering the Wright brothers’ story is the number of misconceptions surrounding them. Many, for instance, are under the impression that they were the first people to fly at all, inventing all the various technicalities of lift, aerofoil structures and control that are now commonplace in today’s aircraft. In fact, the story of flight, perhaps the oldest and maddest of human ambitions, an idea inspired by every time someone has looked up in wonder at the graceful flight of a bird, is a good deal older than either of them.

Our story begins, as does nearly all technological innovation, in imperial China, around 300 BC (the Greek scholar Archytas had admittedly made a model wooden pigeon ‘fly’ some 100 years previously, but nobody is sure exactly how he managed it). China’s first contribution was the invention of the kite, an innovation that would be insignificant if it wasn’t for whichever nutter decided to build one big enough to fly in. However, being strapped inside a giant kite and sent hurtling skywards not only took some balls, but was heavily dependent on wind conditions, heinously dangerous and of dubious usefulness, so in the end the Chinese gave up on manned flight and turned instead to unmanned ballooning, which they used for both military signalling and ceremonial purposes. It isn’t actually known whether they ever successfully put a man into the air using a kite, but they almost certainly gave it a go. The Chinese did have one further attempt, this time at inventing the rocket engine, some years later, when a young and presumably mental man theorised that if you strapped enough fireworks to a chair they would send the chair and its occupant hurtling into the night sky. His prototype (predictably) exploded, and it wasn’t for the best part of two millennia, after the passage of classical civilisation, the Dark Ages and the Renaissance, that anyone tried flight again.

That is not to say that the idea didn’t stick around. The science was, admittedly, beyond most people, but as early as 1500 Leonardo da Vinci, after close examination of bird wings, had successfully deduced the principle of lift and made several sketches showing designs for a manned glider. The design was never tested, and was not fully rediscovered for many hundreds of years after his death (da Vinci was not only a controversial figure far ahead of his time, but wrote his notebooks in a mirror script that took centuries to decipher), but modern-day experiments have shown that his design would probably have worked. Da Vinci also put forward the popular idea of ornithopters, aircraft powered by a flapping motion as in bird wings, and many subsequent would-be aviators attempted to emulate this method of motion. Needless to say, these attempts all failed (not least because very few of the inventors concerned actually understood aerodynamics).

In fact, it wasn’t until the late 18th century that anyone started to really make headway in the pursuit of flight. In 1783, a Parisian physics professor, Jacques Charles, built on the work of several Englishmen concerning the newly discovered hydrogen gas and the properties and behaviour of gases themselves. Theorising that, since hydrogen was less dense than air, it should follow Archimedes’ principle of buoyancy and rise, thus enabling it to lift a balloon, he launched the world’s first hydrogen balloon from the Champ de Mars on August 27th. The balloon was only small, and there were significant difficulties in building it, but in the design process Charles, aided by his engineers the Robert brothers, invented a method of treating silk to make it airtight, paving the way for future pioneers of aviation. Whilst Charles made significant headway in the launch of ever-larger hydrogen balloons, he was beaten to the next significant milestones by the Montgolfier brothers, Joseph-Michel and Jacques-Etienne. In that same year, their far simpler hot-air balloon designs not only put the first living things (a sheep, a rooster and a duck) into the atmosphere, but, just a month later, a human too- Jacques-Etienne was the first European, and probably the first human, ever to fly.

After that, balloon technology took off rapidly (no pun intended). The French quickly became masters of the air, being the first to cross the English Channel and the creators of the first steerable, powered balloon flights. With Charles’ hydrogen balloons eventually settled on as the preferable method of flight, blimps and airships began, over the next century or so, to become an accepted method of travel, and would remain so right up until the Hindenburg disaster of 1937, which rather put people off the idea. For some scientists and engineers, humankind had made it- we could now fly, and could control where we were going at least partially independently of the elements, and any attempt to do so with a heavier-than-air machine was a waste of both time and money, the preserve of dreamers. Nonetheless, to change the world you sometimes have to dream big, and that is where Sir George Cayley came in.

Cayley was an aristocratic Yorkshireman, a skilled engineer and inventor, and a magnanimous, generous man- he offered all of his inventions for the public good and expected no payment for them. He dabbled in a number of fields, including seatbelts, lifeboats, caterpillar tracks, prosthetics, ballistics and railway signalling. In his development of flight, he even reinvented the wheel- he developed the idea of holding a wheel in place using thin metal spokes under tension rather than solid ones under compression, in an effort to make wheels lighter, and is thus responsible for making all modern bicycles practical to use. However, he is most famous for being the first man ever, in 1853, to put somebody into the air using a heavier-than-air glider (although Cayley may have put a ten-year-old in a biplane four years earlier).

The man in question was Cayley’s chauffeur (or butler- historical sources differ widely), who was (perhaps understandably) so hesitant about going up in his boss’ mental contraption that he handed in his notice upon landing after his flight across Brompton Dale, stating as his reason that ‘I was hired to drive, not fly’. Nonetheless, Cayley had shown that the impossible could be done- man could fly using just wings and wheels. He had also designed the aerofoil from scratch, identified the forces of thrust, lift, weight and drag that control an aircraft’s movements, and paved the way for the true pioneer of ‘heavy’ flight- Otto Lilienthal.

Lilienthal (aka ‘The Glider King’) was another engineer, filing 25 patents in his life, including a revolutionary new engine design. But his fame comes from a world without engines- the world of the sky, with which he was obsessed. He was just a boy when he first strapped wings to his arms in an effort to fly (which obviously failed completely), and he later published works detailing the physics of bird flight. It wasn’t until 1891, aged 43, once his career and financial position were stable and he had finished fighting in the Franco-Prussian War, that he began to fly in earnest, building around 12 gliders over a 5-year period (of which 6 still survive). It might have taken him a while to start, but once he did there was no stopping him: he made over 2000 flights in just 5 years (averaging more than one every day). In all that time he was only able to rack up 5 hours of flight time (meaning his average flight lasted just 9 seconds), but his contribution to the field was enormous. He was the first to be able to control and manoeuvre his machines by varying his position and weight distribution, a factor whose importance he realised was absolutely paramount, and he also recognised that powered flight (a pursuit that had been proceeding largely unsuccessfully for the past 50 years) could not properly be understood without a basis in unpowered glider flight, working in harmony with aerodynamic forces rather than against them. Tragically, one of Lilienthal’s gliders crashed in 1896, and he died after two days in hospital. But his work lived on, and the story of his exploits and his death reached across the world, including to a pair of brothers living in Dayton, Ohio, USA, by the name of Wright.
Together, the Wright brothers made huge innovations- they redesigned the aerofoil to be more efficient, revolutionised aircraft control using wing warping technology (another idea possibly invented by da Vinci), conducted hours of testing in their own wind tunnel, built dozens of test gliders and brought together the work of Cayley, Lilienthal, da Vinci and a host of other, mostly sadly dead, pioneers of the air. The Wright brothers are undoubtedly the conquerors of the air, being the first to show that man need not be constrained by either gravity or wind, but can use the air as a medium of travel unlike any other. But the credit is not theirs alone- it is shared between all those who lived and died in pursuit of the dream of flying like the birds. To quote Lilienthal’s dying words, as he lay crippled by the mortal injuries from his crash: ‘Sacrifices must be made’.

Questionably Moral

We human beings tend to set a lot of store by the idea of morality (well, most of us anyway), and it is generally accepted that having a strong code of morals is a good thing. Even if many of us have never exactly codified what we consider to be right or wrong, the majority of people have at least a basic idea of what they consider morally acceptable, and a significant number are willing to make their moral standpoint on various issues very well known to anyone who doesn’t want to listen (internet, I’m looking at you again). One of the features considered integral to such a moral code is rigidity: having fixed rules. Much like law, morality should ideally be inflexible, passing equal judgement on the same situation regardless of who is involved, how you’re feeling at the time and other outside factors. If only to avoid being accused of hypocrisy, social law dictates that one ‘should’ pass equal moral judgement on one’s worst enemy and one’s spouse alike, and such stringent dedication to ‘justice’ is a prized concept among those with strong moral codes.

However, human beings are nothing if not inconsistent, and even the strongest and most vehemently held ideas have a habit of withering in the face of context. One’s moral code is no exception, and with that in mind, let’s talk about cats.

Consider a person- call them a socialist, if you like that sort of description. Somebody who basically believes that we should be doing our bit to help our fellow man. Someone who buys The Big Issue, donates to charity, and gives their change to the homeless. They take the view that those in a more disadvantaged position should be offered help, and they live and share this view on a daily basis.

Now, consider what happens when, one day, said person is having a barbecue and a stray cat comes into the garden. Such strays are, nowadays, uncommon in suburban Britain, but across Europe (the Mediterranean especially), there may be hundreds of them in a town (maybe the person’s on holiday). Picture one such cat- skinny, with visible ribs, unkempt and patchy fur, perhaps a few open sores. A mangy, quite pathetic creature, clinging onto life through a mixture of tenacity and grubbing for scraps, it enters the garden and makes its way towards the man and his barbecue.

Human beings, especially modern-day ones, lead quite a wasteful and indulgent existence. We certainly do not need the vast majority of the food we produce and consume, and could quite happily do without a fair bit of it. A small cat, by contrast, can survive quite happily for at least a day on just one small bowl of food, or a few scraps of meat. From a neutral, logical standpoint, therefore, the correct and generous thing to do according to this person’s moral standpoint would be to throw the cat a few scraps and sleep comfortably with a satisfied conscience that evening. But all our person sees is a mangy street cat, a dirty horrible stray that they don’t want anywhere near them or their food, so they kick, scream, shout and throw water, doing all they can to drive a starving life form that wants just a few scraps away from a huge pile of pristine meat, much of which is likely to go to waste.

Now, you could argue that if the cat had been given food, it would have kept on coming back, quite insatiably, for more, and could possibly have got bolder and more aggressive. An aggressive, confident cat is more likely to try and steal food, and letting a possibly diseased and flea-ridden animal near food you are due to eat is probably not in the best interests of hygiene. You could argue that offering food is just going to encourage other cats to come to you, until you become a feeding station for all those in the area and are thus promoting the survival and growth of a feline population that nobody really likes to see around and that would be unsustainable to keep. You could argue, if you were particularly harsh and probably not of the same viewpoint as the person in question, that a cat is not ‘worth’ as much as a human, if only because we should stick to looking after our own for starters and, in any case, the world would be better off without stray cats around to cause such freak-outs and moral dilemmas. But none of this changes the fact that this person has, from an objective standpoint, violated their moral code by refusing a creature less fortunate than themselves a mere scrap that could, potentially, represent the difference between its living and dying.

There are other examples of such moral inconsistency in the world around us. Animals are a common connecting factor (pacifists and people who generally don’t like killing will quite happily swat flies and the like ‘because they’re annoying’), but there are other, more human examples (those who say we should be feeding the world’s poor whilst simultaneously eating and wasting vast amounts of food and donating a mere pittance to help those in need). Now, does this mean that all of these moral standpoints are stupid? Of course not; if we all decided not to help or be nice to one another, the world would be an absolute mess. Does it mean that we’re all just bad, hypocritical people, as the violently forceful charity collectors would have you believe? Again, no- this ‘hypocrisy’ is something that all humans do to some extent, so either the entire human race is fundamentally flawed (in which case the point is not worth arguing) or we feel that looking after ourselves first and foremost before helping others is simply more practical. Should we all turn to communist leadership to try and redress some of these imbalances and remove the moral dilemmas? I won’t even go there.

It’s a little hard to identify a clear moral or conclusion to all of this, except to highlight that moral inconsistency is a natural and very human trait. Some might deplore this state of affairs, but we’ve always known humans are imperfect creatures; not that that gives us a right to give up on being the best we can be.

Copyright Quirks

This post follows on from my earlier one on the subject of copyright law and its origins. However, just understanding the existence of copyright law does not necessarily entail understanding the various complications, quirks and intricacies that get people quite so angry about it- so today I want to explore a few of these features that get people so annoyed, and explain why and how they came to be.

For starters, it is not in the public interest for material to stay copyrighted forever, for the simple reason that stuff is always more valuable if it is freely in the public domain, where it is accessible to the majority. If we consider a technological innovation or invention, restricting its production solely to the inventor leaves them free to charge pretty much what they like, since they have no competition to worry about. Not only does this give them an undesirable monopoly, it also prevents that invention from being put to best use on a large scale, particularly if it is something like a drug or medicine. Therefore, whilst a copyright obviously has to exist in order to stimulate the creation of new stuff, allowing it to last forever is just asking for trouble, which is why copyrights generally have expiry times. The length of a copyright’s life varies depending on the product- for authors it generally lasts for their lifetime plus a period of around 70 years or so to allow their family to profit from it (expired copyright is the reason that old books can be bought for next to nothing in digital form these days, as they cost nothing to produce). For physical products and, strangely, music, the grace period is generally both fixed and shorter (and dependent on the country concerned), and for drugs and pharmaceuticals the protection in question is strictly a patent rather than a copyright, lasting around 20 years from filing- of which roughly half may already be gone by the time the drug reaches the market, leaving around ten years of exclusivity (drugs companies are corrupt and profit-obsessed enough without giving them too long to rake in the cash).

Then we encounter the fact that a copyright also represents a valuable commodity, and thus something that can potentially be put up for sale. You might think that allowing this sort of thing to go on is wrong and only going to cause problems, but it is often necessary. Consider somebody who owns the rights to a book and wants someone to make a film out of it, partly because they may be due a cut of the profits and will gain money from the sale of their rights, but also because the film represents a massive advertisement for their product. They therefore want to be able to sell part of the whole ‘right to publish’ idea to a film studio who can do the job for them, and any law prohibiting this would just piss everybody off and prevent a good film from potentially being made. The same thing applies to a struggling company that owns the valuable copyright to a product; the ability to sell it not only offers them the opportunity to make a bit of money to cover their losses, but also means that the product is more likely to stay on the free market and continue being produced by whoever bought the rights. It is for this reason that it is legal for copyright to be traded between different people or groups to varying degrees, although the law (in the US, at least) does allow the original owner to cancel any permanent transfer after 35 years if they want to do something with the property.

And what about the issue of who is responsible for a work in the first place? One might say that it is simply the author or inventor concerned, but things are often not that simple. For one thing, innovations are often the result of work by a team of people, and to restrict the copyright to any one of them would surely be unfair. For another, what if, say, the discovery of a new medical treatment came about because the scientist responsible was paid to make it, and given all the necessary equipment and personnel, by a company? Without corporate support the discovery could never have been made, so surely that company is just as legally entitled to the copyright as the individual responsible? This is legally known as ‘work made for hire’, and the copyright in this scenario is the property of the company rather than the individual, lasting for a fixed period (95 years from publication in the US) since the company involved is unlikely to ‘die’ within anything like the predictable lifespan of a human being, and is unlikely to have any relatives for the copyright to benefit afterwards. It is for this reason also that companies, rather than just people, are allowed to hold copyright.

All of these quirks of law are undoubtedly necessary to try and be at least relatively fair to all concerned, but they are responsible for most of the arguments currently put about pertaining to ‘why copyright law is %&*$ed up’. The correct length of a copyright for various different stuff is always up for debate, whether the debaters be musicians who want terms to get longer (Paul McCartney made some complaints about this a few years ago), critics who want corporate ones to get shorter, or morons who want to get rid of them altogether (they generally mean well, but anarchistic principles today neither a) work very well nor b) attract support likely to get them taken seriously). The sale of copyright angers a lot of people, particularly film critics- sales of the film rights to stuff like comic book characters generally include a clause requiring the studio to give them back if they don’t do anything with them for a few years. This has resulted in a lot of very badly-made films over the years, which continue to be made solely because the relevant studio doesn’t want to give back for free a valuable commodity that might still have a few thousand dollars to be squeezed out of it (basically, blame copyright law for the new Spiderman film). The fact that corporations and individuals can both have a right to the ownership of a product (and even the idea that a company can claim responsibility for the creation of something) has resulted in countless massive lawsuits over the years, almost invariably won by the biggest publishing company, and has created an image of game developers/musicians/artists being downtrodden by big business that is often used as justification by internet pirates.
Not that the image is inaccurate or anything, but very few companies appear to realise that this is why there is such an undercurrent of sympathy for piracy on the internet, and why their attempts to attack it through law (as well as being poorly worded and not properly thought out) have met with quite such a vitriolic response.

So… yeah, that’s pretty much copyright, or at least why it exists and why people get annoyed about it. There are a lot of features of copyright law that people don’t like, and I’d be the last to say that it couldn’t do with a bit of bringing up to date- but it’s all there for a reason, and not just because suit-clad stereotypes are lighting hundred-dollar cigars off the arse of the rest of us. So please, when arguing about it, don’t suggest anything should just be scrapped without thinking about why it’s there in the first place.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four-Letter Legal Acronyms), from SOPA to ACTA, has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, these acts also provoked a lot of debate, ranging in quality from the intellectual to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chances of survival and of securing the next meal. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, the design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and so on, and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on ever larger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for the country no longer necessarily benefits the individual as directly. As a tribe gets larger, the individual becomes more independent of the actions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation ‘for the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared amongst themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press in the mid-15th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social change of the following few hundred years, as a less feudal, more merit-based social system became established, with proper professions springing up as opposed to general peasantry, so that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren't about to give them up to their ruler 'for the good of the tribe'. They wanted payment, compensation for their work, a financial acknowledgement of the hours they'd put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could run off copies of your book on their machine and profit from the work you put in. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material which could not only increase the nation's happiness but also have great practical use are a valuable resource, and that it should do what it can to promote the production of that material, if only to save having to put in the large investment of time and resources itself. So, it makes sense to encourage the production of this material by ensuring that people have a financial incentive to create it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…

The Land of the Red

Nowadays, the country to talk about if you want to be seen as being politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls), with an economy and defence budget second only to the USA's in size, it also features a gigantic manufacturing and raw materials extraction industry, the world's largest standing army and one of only five remaining communist governments. In many ways, this is China's second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world's largest economy for most of the intervening time. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time the 2000-year-old Imperial China was replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second-biggest loss suffered by any country after the Soviet Union, as a Japanese army, itself fresh from a transformation from an Imperial to a modern system, went on a rampage of rape, murder and destruction throughout underdeveloped northern China, where some war leaders still fought with swords.
The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People's Republic, then led by former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao's reign, the Chinese population near-doubled in an effort to increase the available workforce, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao's "Great Leap Forward", in which he tried to move his country away from its previously agricultural economy and towards a more manufacturing-centric system. However, whilst the Chinese government insists to this day that the three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and killed only 15 million people, most external commentators agree that the sudden disruption in the availability of food caused by the Great Leap certainly contributed to a death toll actually estimated to be in the region of 20-40 million. Oh, and the whole business was an economic failure, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food with useless, low-quality pig iron.

This event in many ways typifies the Chinese way- that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it, partially reflecting a disregard for the value of life historically also common in Japan. China is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won; a country whose main justification for the "Great Leap Forward" was to try and bring about a state of social structure & culture upon which the government could effectively impose socialism, as it also tried to do during its "Cultural Revolution" of the mid-sixties. All that served to do was get a lot of people killed, cause a decade of absolute chaos and literally destroy China's education system. And, despite reaffirming Mao's godlike status (partially thanks to an intensification of his personality cult), some of his actions rather shamed the governmental high-ups, forcing the party to take the line that, whilst his guiding thought was of course still the foundation of the People's Republic and entirely correct in every regard, his actions were somehow separate from it and got rather brushed under the carpet. It helped that, by this point, Mao was dead and unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, all the destruction and all the political upheaval (nowadays the government is still liable to arrest anyone who suggests that the Cultural Revolution was a good idea), these things shaped China into the powerhouse it is today. It may have slaughtered millions of people and resolutely not worked for 20 years, but Mao's focus on a manufacturing economy has now started to bear fruit and give the Chinese economy a stable footing that many countries would dearly love in these days of economic instability. It may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but Chinese communism has allowed the government to control its labour force and industry effectively, letting it escape the worst ravages of the last few economic downturns and preventing internal instability. And the extent to which it has imposed itself upon the people of China for decades, enforcing the party line with an iron fist, has allowed its controls to be gently relaxed in the modern era whilst ensuring the government's position remains secure, to an extent satisfying the criticisms of western commentators. Now, China is rich enough and positioned solidly enough to placate its people, to keep up its education system and build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas are not making their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that any resident of the web cannot hope to avoid for any great period of time, and quite often doesn't want to- the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed to become a runaway success and something of a cultural phenomenon. By the end of 2011, before it had even been released in its final format, Minecraft had registered 4 million purchases and 4 times that many registered users, which isn't bad for a game that has never advertised itself, has spread semi-virally among nerdy gamers over its mere three-year history, and was made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some quite startlingly good music videos (check out the work of Captain Sparklez if you haven't already), litter YouTube, and many of the game's features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where hundreds and thousands of games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant, and so special?

Many, upon being asked this question, tend to revert to extolling the virtues of the game's indie nature. Created entirely without funding, as an experiment in gaming rather than a profit-making exercise, Minecraft's roots lie firmly in the humble sphere of independent gaming, and it shows. One obvious feature is the game's inherent simplicity- initially solely featuring the ability to wander around and place and destroy blocks, the controls are mainly (although far from entirely) confined to 'move' and 'use', whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, 'blocky' nature of the game's graphics allows for simplicity of production whilst creating an iconic, retro aesthetic that makes it memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming, partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, allowing indie gamers to feel some degree of elitism in being good enough to work it out for themselves. This also ties in with the very nature of the game- another criticism used to be (and, to an extent, still is, even with the addition of the Enderdragon as a final win objective) that the game appeared to be largely devoid of point, existing only for its own purpose. This is entirely true- whether you view that as a bonus or a detriment is entirely your own opinion- and this idea of an unfamiliar, experimental game structure is another feature common in one form or another to a lot of indie games.

However, to me these do not seem entirely worthy of the name 'answers' to the question of Minecraft's phenomenal success. The reason I think this way is that they do not adequately explain why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a disturbing yet compelling in-game experience, with almost as much intrigue and puzzle derived from a set of game mechanics even simpler than those of Minecraft. It has also received critical acclaim often far in excess of Minecraft's (which has received a positive, but not wildly amazed, response from critics), and yet is still known only to a relative few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who's ever played it is likely to face- but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, but has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin's Creed mod (the first example that sprang to mind).

So… why has Minecraft been so successful? Well, I'm going to be sticking my neck out here, but to my mind it's because it doesn't play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple & basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game, with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it's similar to Facebook, in that it takes a simple principle and applies it to the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the SS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.