Misnomers

I am going to break two of my cardinal rules at once over the course of this post, for it is the first in the history of this blog that could adequately be described as a whinge. I have something of a personal hatred of these, on the principle that they never improve anybody's life or the world in general, but I'm hoping that this one is at least well-meaning and not as hideously vitriolic as some 'opinion pieces' I have had the misfortune to read over the years.

So…

A little while ago, the BBC published an article concerning the arrest of a man suspected of being part of the hacking group Lulzsec, an organised and select offshoot of the infamous internet hacking advocates and ‘pressure group’ Anonymous. The FBI have accused him of taking part in a series of attacks on Sony last May & June, in which thousands of personal details from competition entries were published online. Lulzsec at the time made a statement to the effect of ‘we got all these details from one easy sting, so why do you trust them?’, which might have made the attack a case of trying to prove a point, had the point not been directed at an electronics company, which made it kind of stupid. Had it been aimed at a government I might have understood, but to me this just looks like the internet doing what it does best- doing stuff simply for the fun of it. This is in fact the typical motive behind most Lulzsec activities, doing things ‘for teh lulz’, hence the first half of their name and the fact that their logo is a stick figure in typical meme style.

The BBC made reference to their name too in their coverage of the event, but since the journalist involved had clearly taken his information from a rather poorly-worded sentence of a Wikipedia article, he claimed that ‘lulz’ was a play on ‘lol’, aka laugh out loud. This is not, technically speaking, entirely wrong, but it is a bit like claiming the word ‘gay’ can now be used to mean happy in general conversation- something of an anachronism, albeit a very recent one. Lulz in the modern internet sense is used more to mean ‘laughs’ or ‘entertainment’, and ‘for teh lulz’ could even be translated as simply ‘for the hell of it’. As I say, the claim was not so much expressly wrong as it was revealing that this journalist was either not especially good at getting his point across or was dealing with slightly unfamiliar subject matter.

This is not the only example of the media getting things a little wrong when it comes to the internet. A few months ago, after a man was arrested for viciously abusing a celebrity (I forget who) on Twitter, he was dubbed a ‘troll’, a term that, according to the BBC article I read, denotes somebody who uses the internet to bully and abuse people (sorry for picking on the BBC, because a lot of others do it too, but I read them more than most other news sources). However, any reasonably experienced denizen of the internet will be able to tell you that the word ‘troll’ originated from the activity known as ‘trolling’, etymologically thought to derive from fishing (from a similar root as ‘trawling’). The idea behind this is that the original term was used in the context of ‘trolling for newbies’, ie laying down an obvious feeder line that an old hand would recognise as being both obvious and discussed to death, but that a newer face would respond to earnestly. Thus ‘newbies’ were fished for and identified, mostly for the amusement of the more experienced faces. From there, ‘trolling’ has come to mean making jokes or provocative comments for one’s own amusement and at the expense of others, and ‘troll’ has become descriptive of somebody who trolls others. Whilst it is perhaps not the most noble of human activities, and some repeat offenders could definitely do with a bit more fresh air now and again, it is mostly harmless and definitely not to be taken altogether too seriously. What it is not is a synonym for internet abuse or even (as one source has reported it) ‘defac[ing] Internet tribute sites with the aim of causing grief to families’. That is just plain old despicable bullying, something that has no place on the internet or in the world in general, and applying the label to casual humour-seekers just gives mostly alright people an unnecessarily bad name.

And here we get onto the bone I wish to pick- that the media, as a rule, do not appear to understand the internet or its culture, and instead treat it almost like a child’s plaything, a small distraction whose society is far less important than its ability to spawn companies. There may be an element of fear involved, an intentional mistrust of the web and a desire to hold off embracing it for as long as possible, for mainstream media is coming under heavy competition from the web, and many have argued that the latter may soon kill the former altogether. Be that as it may, news organisations should be obliged to act with at least a modicum of neutrality and respectability, especially a service such as the BBC that does not depend on commercial funding anyway. It would perhaps not be too much to ask for a couple of organisations to hire an internet correspondent, to go with their food, technology, sports, science, environment, every-country-around-the-world, domestic, travel and weather ones, if only to allow issues concerning it to be conveyed accurately by someone who knows what he’s talking about. If that treatment is good enough for the rest of the world, then it’s surely good enough for the culture that has made mankind’s greatest invention what it is today.

OK, rant over, I’ll do something a little more normal next time out.

The Conquest of Air

Everybody in the USA, and in fact just about everyone across the world, has heard of Orville and Wilbur Wright. Two of the pioneers of aviation, they earned their place in history when their experimental biplane Flyer achieved the first ever manned, powered, heavier-than-air flight on the morning of December 17, 1903, finally realising one of man’s long-held dreams: control and mastery of air travel.

However, what is often puzzling when considering the Wright brothers’ story is the number of misconceptions surrounding them. Many, for instance, are under the impression that they were the first people to fly at all, inventing all the various technicalities of lift, aerofoil structure and control that are now commonplace in today’s aircraft. In fact, the story of flight, perhaps the oldest and maddest of human ambitions, an idea rekindled every time someone has looked up in wonder at the graceful flight of a bird, is a good deal older than either of them.

Our story begins, as does nearly all technological innovation, in imperial China, around 300 BC (the Greek scholar Archytas had admittedly made a model wooden pigeon ‘fly’ some 100 years previously, but nobody is sure exactly how he managed it). China’s first contribution was the invention of the kite, an innovation that would have been insignificant had it not been for whichever nutter decided to build one big enough to fly in. However, being strapped inside a giant kite and sent hurtling skywards not only took some balls, but was also heavily dependent on wind conditions, heinously dangerous and of dubious usefulness, so in the end the Chinese gave up on manned flight and turned instead to unmanned ballooning, which they used for both military signalling and ceremonial purposes. It isn’t actually known whether they ever successfully put a man into the air using a kite, but they almost certainly gave it a go. The Chinese did have one further attempt, this time at inventing the rocket engine, some years later, when a young and presumably mental man theorised that if you strapped enough fireworks to a chair, they would send the chair and its occupant hurtling into the night sky. His prototype (predictably) exploded, and it wasn’t for some two millennia, after the passage of classical civilisation, the Dark Ages and the Renaissance, that anyone tried flight again.

That is not to say that the idea didn’t stick around. The science was, admittedly, beyond most people, but as early as 1500 Leonardo da Vinci, after close examination of bird wings, had successfully deduced the principle of lift and made several sketches showing designs for a manned glider. The design was never tested, and was not fully rediscovered until many hundreds of years after his death (da Vinci was not only a controversial figure and far ahead of his time, but wrote his notebooks in a code that took centuries to decipher), but modern-day experiments have shown that his design would probably have worked. Da Vinci also put forward the popular idea of ornithopters, aircraft powered by a flapping motion as in bird wings, and many subsequent would-be aviators attempted to emulate this method of propulsion. Needless to say, these attempts all failed (not least because very few of the inventors concerned actually understood aerodynamics).

In fact, it wasn’t until the late 18th century that anyone started to make any real headway in the pursuit of flight. In 1783, a Parisian physics professor, Jacques Charles, built on the work of several Englishmen concerning the newly discovered hydrogen gas and the properties and behaviour of gases themselves. Theorising that, since hydrogen was less dense than air, it should follow Archimedes’ principle of buoyancy and rise, thus enabling it to lift a balloon, he launched the world’s first hydrogen balloon from the Champ de Mars on August 27th. The balloon was only small, and there were significant difficulties encountered in building it, but in the design process Charles, aided by his engineers the Robert brothers, invented a method of treating silk to make it airtight, paving the way for future pioneers of aviation. Whilst Charles made significant headway with the launch of ever-larger hydrogen balloons, he was beaten to the next significant milestones by the Montgolfier brothers, Joseph-Michel and Jacques-Etienne. In that same year, their far simpler hot-air balloon designs not only put the first living things (a sheep, a rooster and a duck) into the atmosphere but, just a month later, a human too- Jacques-Etienne was the first European, and probably the first human, ever to fly.
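
As an aside, a rough back-of-envelope calculation (using modern density figures Charles could only have estimated, and an illustrative balloon size of my own choosing) shows why his reasoning works. By Archimedes’ principle, the net lift on a gas-filled envelope of volume V is

\[
F_{\text{lift}} = (\rho_{\text{air}} - \rho_{\text{H}_2})\,V\,g \approx (1.2 - 0.09)\ \text{kg/m}^3 \times V \times g,
\]

so every cubic metre of hydrogen supports roughly 1.1 kg of envelope and payload- a modest 30 m³ balloon could, in principle, lift over 30 kg.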

After that, balloon technology took off rapidly (no pun intended). The French quickly became masters of the air, being the first to cross the English Channel and the creators of the first steerable and powered balloon flights. Eventually settling on Charles’ hydrogen balloons as the preferable method of flight, blimps and airships began, over the next century or so, to become an accepted method of travel, and would remain so right up until the Hindenburg disaster of 1937, which rather put people off the idea. For some scientists and engineers, humankind had made it- we could now fly, could control where we were going at least partially independently of the elements, and any attempt to do so with a heavier-than-air machine was a waste of both time and money, the preserve of dreamers. Nonetheless, to change the world you sometimes have to dream big, and that was where Sir George Cayley came in.

Cayley was an aristocratic Yorkshireman, a skilled engineer and inventor, and a magnanimous, generous man- he offered all of his inventions for the public good and expected no payment for them. He dabbled in a number of fields, including seatbelts, lifeboats, caterpillar tracks, prosthetics, ballistics and railway signalling. In his development of flight he even reinvented the wheel- he developed the idea of holding a wheel’s rim in place using thin metal spokes under tension rather than solid ones under compression, in an effort to make the wheels lighter, and is thus responsible for making all modern bicycles practical to use. However, he is most famous for being the first man ever, in 1853, to put somebody into the air using a heavier-than-air glider (although Cayley may have put a ten-year-old in a biplane four years earlier).

The man in question was Cayley’s chauffeur (or butler- historical sources differ widely), who was (perhaps understandably) so hesitant about going up in his boss’ mental contraption that he handed in his notice upon landing after his flight across Brompton Dale, giving as his reason that ‘I was hired to drive, not fly’. Nonetheless, Cayley had shown that the impossible could be done- man could fly using just wings and wheels. He had also designed the aerofoil from scratch, identified the forces of thrust, lift, weight and drag that govern an aircraft’s movements, and paved the way for the true pioneer of ‘heavy’ flight- Otto Lilienthal.

Lilienthal (aka ‘The Glider King’) was another engineer, filing 25 patents in his lifetime, including a revolutionary new engine design. But his fame comes from a world without engines- the world of the sky, with which he was obsessed. He was just a boy when he first strapped wings to his arms in an effort to fly (which obviously failed completely), and he later published works detailing the physics of bird flight. It wasn’t until 1891, aged 43, once his career and financial position were stable and he had finished fighting in the Franco-Prussian War, that he began to fly in earnest, building around 12 gliders over a 5-year period (of which 6 still survive). It might have taken him a while to get started, but once he did there was no stopping him: he made over 2,000 flights in just 5 years (averaging more than one every day). In all that time he was only able to rack up 5 hours of flight time (meaning his average flight lasted just 9 seconds), but his contribution to his field was enormous. He was the first to be able to control and manoeuvre his machines by varying his position and weight distribution, a factor whose importance he realised was absolutely paramount, and he also recognised that a proper understanding of powered flight (a pursuit that had been proceeding largely unsuccessfully for the previous 50 years) could not be achieved without a grounding in unpowered glider flight- in other words, that one must learn to work in harmony with aerodynamic forces.

Tragically, one of Lilienthal’s gliders crashed in 1896, and he died after two days in hospital. But his work lived on, and the story of his exploits and his death spread across the world, reaching, among others, a pair of brothers living in Dayton, Ohio, USA, by the name of Wright. The Wright brothers made huge innovations of their own- they redesigned the aerofoil to be more efficient, revolutionised aircraft control using wing warping (another idea possibly invented by da Vinci), conducted hours of testing in their own wind tunnel, built dozens of test gliders and brought together the work of Cayley, Lilienthal, da Vinci and a host of other, mostly sadly dead, pioneers of the air. The Wright brothers are undoubtedly the conquerors of the air, being the first to show that man need not be constrained by either gravity or wind, but can use the air as a medium of travel unlike any other. But the credit is not theirs alone- it is shared between all those who lived and died in pursuit of the dream of flying like the birds. To quote Lilienthal’s dying words, as he lay crippled by the mortal injuries from his crash: ‘Sacrifices must be made’.
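
For the sceptical, the averages quoted above for Lilienthal check out:

\[
\frac{2000\ \text{flights}}{5 \times 365\ \text{days}} \approx 1.1\ \text{flights per day}, \qquad \frac{5 \times 3600\ \text{s}}{2000\ \text{flights}} = 9\ \text{seconds per flight}.
\]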

Questionably Moral

We human beings tend to set a lot of store by the idea of morality (well, most of us anyway), and it is generally accepted that having a strong code of morals is a good thing. Even if many of us have never exactly qualified what we consider to be right or wrong, the majority of people have at least a basic idea of what they consider morally acceptable, and a significant number are willing to make their moral standpoint on various issues very well known to anyone who doesn’t want to listen (internet, I’m looking at you again). One of the features considered integral to such a moral code is rigidity, the idea of having fixed rules. Much like law, morality should ideally be inflexible, passing equal judgement on the same situation regardless of who is involved, how you’re feeling at the time and other outside factors. If only to avoid being accused of hypocrisy, social convention dictates that one ‘should’ pass equal moral judgement on one’s worst enemy and one’s spouse alike, and such stringent dedication to ‘justice’ is a prized quality among those with strong moral codes.

However, human beings are nothing if not inconsistent, and even the strongest and most vehemently held ideas have a habit of withering in the face of context. One’s moral code is no exception, and with that in mind, let’s talk about cats.

Consider a person- call them a socialist, if you like that sort of description. Somebody who basically believes that we should be doing our bit to help our fellow man. Someone who buys The Big Issue, donates to charity, and gives their change to the homeless. They take the view that those in a more disadvantaged position should be offered help, and they live and share this view on a daily basis.

Now, consider what happens when, one day, said person is having a barbecue and a stray cat comes into the garden. Such strays are, nowadays, uncommon in suburban Britain, but across Europe (the Mediterranean especially), there may be hundreds of them in a town (maybe the person’s on holiday). Picture one such cat- skinny, with visible ribs, unkempt and patchy fur, perhaps a few open sores. A mangy, quite pathetic creature, clinging onto life through a mixture of tenacity and grubbing for scraps, it enters the garden and makes its way towards the man and his barbecue.

Human beings, especially modern-day ones, lead quite a wasteful and indulgent existence. We certainly do not need the vast majority of the food we produce and consume, and could quite happily do without a fair bit of it. A small cat, by contrast, can survive quite happily for at least a day on just one small bowl of food, or a few scraps of meat. From a neutral, logical standpoint, therefore, the correct and generous thing to do according to this person’s moral code would be to throw the cat a few scraps and sleep comfortably with a satisfied conscience that evening. But all our person sees is a mangy street cat, a dirty horrible stray that they don’t want anywhere near them or their food, so they kick, scream, shout and throw water- doing all they can to drive a starving life form, after just a few scraps, away from a huge pile of pristine meat, much of which is likely to go to waste.

Now, you could argue that if the cat had been given food it would have kept coming back, quite insatiably, for more, and could possibly have got bolder and more aggressive. An aggressive, confident cat is more likely to try and steal food, and letting a possibly diseased and flea-ridden animal near food you are due to eat is probably not in the best interests of hygiene. You could argue that offering food is just going to encourage other cats to come to you, until you become a feeding station for all those in the area and are thus promoting the survival and growth of a feline population that nobody really likes to see around and that would be unsustainable to keep. You could argue, if you were particularly harsh and probably not of the same viewpoint as the person in question, that a cat is not ‘worth’ as much as a human, if only because we should stick to looking after our own for starters and, in any case, it would be better for the world anyway if there weren’t stray cats around to cause such consternation and moral dilemmas. But none of this changes the fact that this person has, from an objective standpoint, violated their moral code by refusing a creature less fortunate than themselves a mere scrap that could, potentially, represent the difference between its living and dying.

There are other examples of such moral inconsistency in the world around us. Animals are a common connecting factor (pacifists and people who generally don’t like murder will quite happily swat flies and the like ‘because they’re annoying’), but there are other, more human examples (those who say we should be feeding the world’s poor whilst simultaneously eating and wasting vast amounts of food and donating a mere pittance to help those in need). Now, does this mean that all of these moral standpoints are stupid? Of course not; if we all decided not to help and be nice to one another, the world would be an absolute mess. Does it mean that we’re all just bad, hypocritical people, as the violently forceful charity collectors would have you believe? Again, no- this ‘hypocrisy’ is something that all humans do to some extent, so either the entire human race is fundamentally flawed (in which case the point is not worth arguing) or we feel that looking after ourselves first and foremost before helping others is simply more practical. Should we all turn to communist leadership to try and redress some of these imbalances and remove the moral dilemmas? I won’t even go there.

It’s a little hard to identify a clear moral or conclusion to all of this, except to highlight that moral inconsistency is a natural and very human trait. Some might deplore this state of affairs, but we’ve always known humans are imperfect creatures; not that that gives us a right to give up on being the best we can be.

Copyright Quirks

This post is set to follow on from my earlier one on the subject of copyright law and its origins. However, just understanding why copyright law exists does not necessarily bring with it an understanding of the various complications, quirks and intricacies that make people quite so angry about it- so today I want to explore a few of those features, and explain why and how they came to be.

For starters, it is not in the public interest for material to stay copyrighted forever, for the simple reason that stuff is always more valuable if it is freely available in the public domain, where it is accessible to the majority. If we consider a technological innovation or invention, restricting its production solely to the inventor leaves them free to charge pretty much what they like, since they have no competitors to undercut them. Not only does this give them an undesirable monopoly, it also prevents that invention from being put to best use on a large scale, particularly if it is something like a drug or medicine. Therefore, whilst a copyright obviously has to exist in order to stimulate the creation of new stuff, allowing it to last forever is just asking for trouble, which is why copyrights generally have expiry times. The length of a copyright’s life varies depending on the product- for authors it generally lasts for their lifetime plus a period of around 70 years or so to allow their family to profit from it (expired copyright is the reason that old books can be bought for next to nothing in digital form these days, as they cost nothing to produce). For physical products and, strangely, music, the grace period is generally both fixed and shorter (and dependent on the country concerned), and for drugs and pharmaceuticals it is just ten years (drugs companies are corrupt and profit-obsessed enough without giving them too long to rake in the cash).

Then we encounter the fact that a copyright also represents a valuable commodity, and thus something that can potentially be put up for sale. You might think that allowing this sort of thing to go on is wrong and only going to cause problems, but it is often necessary. Consider somebody who owns the rights to a book and wants someone to make a film out of it, partly because they may be up for a cut of the profits and will gain money from the sale of their rights, but also because it represents a massive advertisement for their product. They therefore want to be able to sell part of the whole ‘right to publish’ to a film studio who can do the job for them, and any law prohibiting this just pisses everybody off and prevents a good film from potentially being made. The same thing could apply to a struggling company that owns some valuable copyright to a product; the ability to sell it not only offers them the opportunity to make a bit of money to cover their losses, but also means that the product is more likely to stay on the free market and continue being produced by whoever bought the rights. It is for this reason that copyright may legally be traded between different people or groups to varying degrees, although the law does allow the original owner to cancel any permanent trade after 35 years if they want to do something with the property.

And what about the issue of who is responsible for a work at all? One might say that it is simply the author or inventor concerned, but things are often not that simple. For one thing, innovations are often the result of work by a team of people, and to restrict the copyright to any one of them would surely be unfair. For another, what if, say, the discovery of a new medical treatment came about because the scientist responsible was paid to do so, and given all the necessary equipment and personnel, by a company? Without corporate support the discovery could never have been made, so surely that company is just as much legally entitled to the copyright as the individual responsible? This is legally known as ‘work made for hire’, and the copyright in this scenario is the property of the company rather than the individual, lasting for a fixed period (95 years from publication in the US), since the company involved is unlikely to ‘die’ with quite the same predictable lifespan as a human being, and is unlikely to have any relatives for the copyright to benefit afterwards. It is for this reason also that companies, rather than just people, are allowed to hold copyright.

All of these quirks of law are undoubtedly necessary to try and be at least relatively fair to all concerned, but they are responsible for most of the arguments currently put about pertaining to ‘why copyright law is %&*$ed up’. The correct length of a copyright for various different stuff is always up for debate, whether from musicians who want terms to get longer (Paul McCartney made some complaints about this a few years ago), critics who want corporate ones to get shorter, or morons who want to get rid of them altogether (they generally mean well, but anarchistic principles today don’t either a) work very well or b) attract support likely to get them taken seriously). The sale of copyright angers a lot of people, particularly film critics- sales of the film rights for stuff like comic book characters generally include a clause requiring the studio to give them back if they don’t do anything with them for a few years. This has resulted in a lot of very badly-made films over the years, which continue to be published solely because the relevant studio doesn’t want to hand back for free a valuable commodity that might still have a few thousand dollars to be squeezed out of it (basically, blame copyright law for the new Spiderman film). The fact that corporations and individuals can both have a right to the ownership of a product (and even the idea that a company can claim responsibility for the creation of something) has resulted in countless massive lawsuits over the years, almost invariably won by the biggest publishing company, and has created an image of game developers/musicians/artists being downtrodden by big business that is often used as justification by internet pirates. Not that the image is inaccurate or anything, but very few companies appear to realise that this is why there is such an undercurrent of sympathy for piracy on the internet, and why their attempts to attack it through law have met with quite such a vitriolic response (as well as being poorly worded and not properly thought out).

So… yeah, that’s pretty much copyright, or at least why it exists and why people get annoyed about it. There are a lot of features of copyright law that people don’t like, and I’d be the last to say that it couldn’t do with a bit of bringing up to date- but it’s all there for a reason, and it’s not just there because suit-clad stereotypes are lighting hundred-dollar cigars off the arse of the rest of us. So please, when arguing about it, don’t suggest any part of it should simply go without thinking about why it’s there in the first place.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again- I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world, if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four-Letter Legal Acronyms), from SOPA to ACTA, has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they have also provoked a lot of debate, ranging in intelligence from the intellectual to that of the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone-age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival and of the next meal coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield, that design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and the like, and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on bigger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem with this growth is that what’s good for the country no longer necessarily translates into what’s good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the middle of the 15th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book had required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social changes of the following few hundred years: the establishment of a less feudal and more merit-based social system, with proper professions springing up in place of general peasantry, meant that more people had the money to afford such publishing, preventing use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in, by running off copies of your book on their machine and profiting from your labour. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material which could not only add to the nation’s happiness but also potentially have great material use are a valuable resource, and that the state should be doing what it can to promote the production of that material, if only to save having to put in the large investment of time and resources itself. So, it makes sense to encourage the production of this material by ensuring that people have a financial incentive to produce it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or who is responsible for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…

The Land of the Red

Nowadays, the country to talk about if you want to be seen as politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls), with an economy and defence budget second only to the USA’s in size, it also features a gigantic manufacturing and raw-materials extraction industry, the world’s largest standing army and one of only five remaining communist governments. In many ways this is China’s second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world’s largest economy for most of the intervening period. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time the 2,000-year-old Imperial China was replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second-biggest loss of any country after the Soviet Union, as a Japanese army fresh from its own recent transformation from an imperial to a modern system went on a rampage of rape, murder and destruction throughout underdeveloped northern China, where some warlords still fought with swords. The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People’s Republic, then led by former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao’s reign, the Chinese population near-doubled in an effort to increase the size of the available workforce, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao’s “Great Leap Forward”, in which he tried to move his country away from its previously agricultural economy and towards a more manufacturing-centric system. However, whilst the Chinese government insists to this day that the three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and killed only 15 million people, most external commentators agree that the sudden change in the availability of food brought about by the Great Leap certainly contributed to a death toll actually estimated to be in the region of 20-40 million. Oh, and the whole business was an economic failure too, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food with useless, low-quality pig iron.

This event in many ways typifies the Chinese way- that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it, partially reflecting the disregard for the value of life historically also common in Japan. China is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won, and a country whose main justification for the “Great Leap Forward” was to try and bring about a social structure & culture upon which the government could effectively impose socialism, as it again tried to do during its “Cultural Revolution” in the mid-sixties. All that served to do was get a lot of people killed, plunge the country into a decade of absolute chaos and all but destroy China’s education system; and, despite reaffirming Mao’s godlike status (partially thanks to an intensification of his personality cult), some of his actions rather shamed the governmental high-ups, forcing the party to take the line that, whilst his guiding thought was of course still the foundation of the People’s Republic and entirely correct in every regard, his actions were somehow separate from that, and so they got rather brushed under the carpet. It did help that, by this point, Mao was dead and thus unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, all the destruction and all the political upheaval (nowadays the government is still liable to arrest anyone who suggests that the Cultural Revolution was a good idea), these things shaped China into the powerhouse it is today. It may have slaughtered millions of people and resolutely failed to work for 20 years, but Mao’s focus on a manufacturing economy has now started to bear fruit and has given the Chinese economy a stable footing that many countries would dearly love in these days of economic instability. It may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but Chinese communism has allowed the government to control its labour force and industry effectively, allowing it to escape the worst ravages of the last few economic downturns and to prevent internal instability. And the iron fist with which the party forced itself upon the people of China for decades has allowed its controls to be gently relaxed in the modern era whilst ensuring the government’s position remains secure, to an extent satisfying the criticisms of western commentators. Now, China is rich enough and positioned solidly enough to placate its people, to keep up its education system and to build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas are not making their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that no resident of the web can hope to avoid for any great period of time, and quite often doesn’t want to- the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed into a runaway success and something of a cultural phenomenon. By the end of 2011, before it had even been released in its final format, Minecraft had racked up 4 million purchases and 4 times that many registered users, which isn’t bad for a game that has never advertised itself, has spread semi-virally among nerdy gamers for its mere three-year history and was made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some quite startlingly good music videos (check out the work of Captain Sparklez if you haven’t already), litter YouTube, and many of the game’s features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where hundreds and thousands of games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant and so special?

Many, upon being asked this question, tend to revert to extolling the virtues of the game’s indie nature. Created entirely without funding, as an experiment in gaming rather than in profit-making, Minecraft is firmly rooted in the humble sphere of independent gaming, and it shows. One obvious feature is the game’s inherent simplicity- initially solely featuring the ability to wander around and place and destroy blocks, the controls are mainly (although far from entirely) confined to ‘move’ and ‘use’, whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, ‘blocky’ nature of the game’s graphics allows for both simplicity of production and an iconic, retro aesthetic that makes it memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming, partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, thus allowing indie gamers to enjoy some degree of elitism in being good enough to work it out by themselves. This also ties in with the very nature of the game- another criticism used to be (and, to an extent, still is, even with the addition of the Enderdragon as a final win objective) that the game appears largely devoid of point, existing only for its own sake. This is entirely true, whether you view it as a bonus or a detriment being entirely your own opinion, and this idea of an unfamiliar, experimental game structure is another feature common, in one form or another, to a lot of indie games.

However, to me these do not seem entirely worthy of the name ‘answers’ to the question of Minecraft’s phenomenal success. The reason I think this is that they do not adequately explain why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a quite disturbing, yet compelling, in-game experience, with almost as much intrigue and puzzle arising from a set of game mechanics simpler even than those of Minecraft. It has also received critical acclaim often far in excess of Minecraft’s (which has received a positive, but not wildly amazed, response from critics), and yet is still known only to a relative few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, as well as incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who’s ever played it is likely to face- but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, but has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin’s Creed mod (first example that sprang to mind).

So… why has Minecraft been so successful? Well, I’m going to stick my neck out here, but to my mind it’s because it doesn’t play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple & basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game, with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it’s similar to Facebook, in that it takes a simple principle and applies it on the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the SS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.

Where do we come from?

In the sport of rugby at the moment (don’t worry, I won’t stay on this topic for too long, I promise), there is rather a large debate going on- one that has been echoing around the game for at least a decade now, but that seems to be coming ever closer to the fore. This is the issue of player nationality, namely the modern trend for foreign players to turn out for sides other than those of their birth. The IRB’s rules currently state that a player is eligible to represent a country if he has lived there for the previous three years, or if he, either of his parents or any of his grandparents was born there (and so long as he hasn’t already played for another international side). This state of affairs has allowed a myriad of foreigners, mainly South Africans (Mouritz Botha, Matt Stevens, Brad Barritt) and New Zealanders (Dylan Hartley, Thomas Waldrom, Riki Flutey), as well as a player all of whose family have played for Samoa (Manu Tuilagi), to play for England in recent years. In fact, Scotland recently played host to an almost comic state of affairs as both the SRU and the media counted down the days until electric Dutch wing Tim Visser, long hailed as the solution to the Scots’ try-scoring problems, became eligible to play for Scotland on residency grounds.

These rules were put in place after the ‘Grannygate’ scandal of the early noughties. Kiwi coach Graham Henry, hailed as ‘The Great Redeemer’ by Welsh fans after turning their national side around and leading them to eleven successive victories, had ‘found’ a couple of New Zealanders (Shane Howarth and Brett Sinkinson) with Welsh grandparents to help bolster his side. However, it wasn’t long before a bit of investigative journalism found that there was no Welsh connection whatsoever, and that the whole thing had been a fabrication by Henry and his team. Both players were stopped from playing for Wales, and amidst the furore the IRB brought in their new rules. Sinkinson later qualified on residency and won six further caps for the Welsh; Howarth, having previously played for New Zealand, never played international rugby again.

It might seem odd, then, that this issue is still considered a scandal, despite the IRB having supposedly ‘sorted it out’. But it remains hugely contentious, dividing those who think that Mouritz Botha’s thick South African accent should not be allowed in a white shirt from those who point out that he apparently considers himself English and has as much right as anyone to compete for the shirt. This is not just an issue in rugby either- during the Olympics there was a decent amount of criticism of the presence of ‘plastic Brits’ in the Great Britain squad (many of them sporting strong American accents), something that has been around since the days of the hastily anglicised South African Zola Budd. In some ways athletics is even more dodgy, as athletes are permitted to change the country they represent (take Bernard Lagat, who originally represented his native Kenya before switching to the USA).

The problem is that nationality is not a simple black & white dividing line, especially in today’s multicultural, well-travelled world. Many people across the globe now hold dual nationality and a pair of legal passports, and it would be churlish to suggest that they ‘belong’ any more to one country than to the other. Take Mo Farah, for example, one of Britain’s heroes after the Games, and a British citizen- despite being born in, and having all his family come from, Somaliland (technically speaking an independent, semi-autonomous state, but internationally recognised only as part of Somalia). And just as we Britons exalt the performance of ‘our man’, in his home country the locals are equally ecstatic about the performance of a man they consider Somali, whatever country’s colours he runs in.

The thing is, Mo Farah, to the British public at least, seems British. We are all used to our modern, multicultural society, especially in London, so his ethnic origin barely registers as ‘foreign’ any more, and he has developed a strong English accent since he first moved here aged 9. On the other hand, both of Shana Cox’s parents were born in Britain, but she was raised on Long Island and has a notable American accent, leading many to dub her a ‘plastic Brit’ after she led off the 4 x 400m women’s relay for Great Britain. In fact, you would be surprised how important accent is to our perception of someone’s nationality, as it is the most obvious indicator of where a person’s development, as a speaker and as a person, occurred.

A simultaneously interesting and quite sad demonstration of this involves a pair of Scottish rappers I read about in the paper a few years ago (and whose names I have forgotten). When they first auditioned as rappers, they did so in their normal Scots accents- and were soundly laughed out of the room. Seriously, their interviewers could barely keep a straight face as they rejected them out of hand purely on the sound of their voices. Their solution? To adopt American accents, not just for their music but for their entire lives. They rapped in American, spoke in American, swore, drank, partied & had sex all in these fake accents. People they met were often amazed by the perfect Scottish accents these ‘all-American’ music stars were able to put on. And it worked, allowing them to break onto the music scene and pursue their dreams as musicians, although it exacted quite a cost. At home in Scotland, one of them asked someone at the train station about the timetable, and was initially unable to understand the slight hint of distaste he could hear in the reply’s homely Scots lilt; it was about a minute before he realised he had asked the question entirely in his fake accent.

(Interestingly, Scottish music stars The Proclaimers, to whom the rappers were unfavourably compared in their initial interview, were once asked about the use of their home accents in their music, as opposed to the more traditional American of the music industry, and were so annoyed at the assumption that they ‘should’ be singing in an accent that wasn’t theirs that they even made a song (‘Flatten all the Vowels’) about the incident.)

This story highlights perhaps the key issue in the debate over nationality- that what we perceive someone to be will often not tell us the whole story. It is not as simple as ‘oh, so-and-so is clearly an American, why are they running for Britain?’, because what someone ‘clearly is’ and what they actually are can often be very different. At the very first football international, England v Scotland, most of the Scottish team were selected on the basis of having Scottish-sounding names. We can’t just judge people on what first meets the eye.

The Price of Sex

This is (probably- I might come back to it if I have trouble thinking of material) the last post I will be doing in this mini-series on the subject of sex. Today’s title is probably the bluntest of the series as a whole, yet also the most descriptive of its post’s content, as today I am going to be dealing with the rather edgy subject of prostitution.

Prostitution is famously quoted as being the world’s oldest profession, and it’s not hard to see why. Since men tend to have physical superiority over women, they have tended to adopt overlord roles ever since the ‘hitting other people with clubs and shouting “Ug”‘ stage, and women have, as previously stated, tended to be relatively undervalued and underskilled (in regards to stuff other than, oh I don’t know, raising kids and foraging for food with a degree of success often exceeding that of hunting parties, although that is partly to do with methodology and I could spend all day arguing this point). In fact it can be argued that the only reason some (presumably rather arrogant) male-dominated tribes didn’t just do away with women as a gender is purely down to sex- partly because it allowed the men to father children but mostly, obviously, because they really enjoyed it. Thus the availability of sex became, historically, a woman’s most valuable asset in the eyes of her male peers; since it was something that men couldn’t, or would rather not, sort out between themselves, it took on a great degree of value. It could even be argued that women have been ‘selling’ sex in exchange for being allowed to exist since the earliest origins of a male-dominated tribal structure, although you’d have to check with an actual anthropologist to clarify that point.

Since those early days of human history, prostitution has remained one of those things that was always there, tucked into the background, and that never made most history books. That’s not to say it has not affected history, however- the availability of pleasures of the flesh has kept more than one king away from his duties and sent his country into some degree of turmoil, and even Pope Alexander VI (known nowadays from, among other things, Assassin’s Creed II) once famously hired 50 prostitutes for a party known as the Ballet of the Chestnuts, where their clothes were auctioned off before both courtesans and guests (including several clergymen) crawled naked over the floor, first to pick up chestnuts, and later to compete to see who could have the most sex. In fact, for large swathes of history prostitution was considered a relatively popular profession among lowborn women, whose only other choices were generally the church (if you could afford to get in), agriculture (which involved backbreaking toil, malnourishment and a generally poor quality of life), or domestic service if you were lucky. It was relatively well paid, required no real skill, was more exciting than most other walks of life and far less risky than a life of crime. Even nowadays sex workers are held in a degree of respect in many countries (such as the Netherlands and New Zealand) as people stuck in a difficult situation who really don’t need the law trying to screw over (if you’ll pardon the pun) what little they have.

However, that doesn't mean, and never has meant, that prostitution is just some harmless little sideshow we can simply ignore. The annual death rate among female prostitutes in the USA is around 200 per 100,000, meaning that over a (say) ten-year career one in fifty is likely to be killed; compare that to a rate of 118 per 100,000 for America's supposedly most dangerous profession, lumberjacking. Added to this is the fact that prostitutes, many of whom are illegal immigrants, runaways or imported slaves, are rarely missed or even noticed by society, making them easy victims for predators and serial killers. Prostitution is often seen as a major contributory factor in the continued spread of STDs such as HIV/AIDS, and is often targeted by women's rights groups as degrading both to the women directly involved and to women in general, as well as slowing the decline of chauvinist attitudes. Then there is sex tourism (aka travelling to somewhere like Thailand to hire prostitutes because at home people might see you coming out), which is rapidly becoming one of the most distasteful, as well as dangerous & counter-productive, aspects of 21st century tourism. And then, of course, there is sex trafficking, perhaps the lowest of the low as far as all human activities go: the practice of abducting young women to sell into slavery as prostitutes, both within a country and across international borders, which would be morally repugnant enough were it not for the fact that a significant proportion of those trafficked are children, sometimes sold by their own families. Human trafficking today is the largest slavery operation in the history of the world; around three-quarters of it feeds the global sex trade, which is also the fastest-growing criminal activity on the planet. Much of it is connected to other aspects of organised crime, such as the drug wars in Mexico, and can therefore be directly linked to large-scale theft, murder and smuggling, amongst other crimes. In India & Bangladesh, some 40% of prostitutes are thought to be children, many of whom are given a highly addictive drug linked to diabetes and high blood pressure to make them seem older & fatter (research suggests that men find fuller figures more attractive in times of stress or hardship). Looking through some of these figures & the stories surrounding them, it's hard not to be struck by how low humanity can stoop when it ceases to think or care.
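(For anyone checking the arithmetic behind that 'one in fifty' figure: assuming, crudely, that the annual rate simply accumulates over a ten-year career,

$$\frac{200}{100{,}000} \times 10 = \frac{2{,}000}{100{,}000} = \frac{1}{50} = 2\%$$

This ignores deaths from other causes and people leaving the profession partway through, so treat it as a rough illustration rather than a precise statistic.)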

Over the last 100 or so years, as life has got less hard for the average woman and job opportunities have expanded, both tolerance of prostitution and its prevalence have declined heavily, and it is now frequently seen as a rather distasteful sideshow to modern living that most would rather avoid. In contrast to this, however, the industry is not merely alive across the world but could even be said to be thriving- the 'labour' of slave prostitutes alone is worth tens of billions of dollars worldwide. The trouble is that, being an inherently seedy sideshow, it is impossible to get rid of: legislation usually just drives it underground, further degrading the living conditions and welfare of sex workers, and regulating it is similarly tricky. Thus it's very hard for governments to know what to do about an industry they recognise will always be there, but which is immensely prone to crime, human rights abuses and health problems. Unless the world, in a rather unlikely twist, learns to live largely without prostitutes, a black stain is unfortunately likely to remain on our pride and dignity as a race. Exactly how it should be dealt with remains a little unclear.

Think of the CHILDREN!

My last post dealt with the way that sex in our society is kept very much under wraps, brushed under the carpet and kept out of everyday conversation as much as possible. This post, however, could be said to contradict every point I made in the last one, for today I will be considering the increasing use and prevalence of sex, sexuality and sexual connotations in society.

The main people voicing a strong opinion against this trend are, of course, the kind of militant parents who started a war in the South Park movie (good film, see it if you can). They argue that modern media and marketing strategies place a lot of emphasis on sex symbols and sexual connotations, and, more worryingly, that these strategies are being aimed at a steadily younger audience. Young girls in particular are often cited as being aggressively targeted by clothing companies from as young as eight, with companies trying to sell them the whole 'looks and clothes are the most important thing ever' mentality in order to turn them into fashion-obsessed consumers as early as possible.

There's certainly a lot of evidence to support their claims about the increased prevalence of sexual symbolism in today's culture. Sport is a good place to look for examples: modern female sports stars are nowadays judged mainly by the way they look, and in many sports where men and women have roughly equal exposure (such as tennis) female competitors often have larger sponsorship deals. Is this because they are better at persuading people that sports equipment is awesome? No, it's because they are capable of advertising perfume by wearing hardly any clothes and exploiting their sex appeal (think Maria Sharapova, whose game suffered heavily in the few years after she won Wimbledon as she turned into more of a model than a tennis player). And then what about tabloid newspapers and their page 3 hooks for readers, 'lads' mags' that now have enough status to be invited as judges for the nomination of Sports Personality of the Year (not the BBC's proudest moment), and clothes companies that now market 'sexy high heels' at under-10s?

So… where can this be traced back to? Well, if we, as the pressure groups tend to, blame everything on businesses and clothing companies, then their reasoning is actually very simple. Firstly, on the issue of children being targeted, it's a well-recognised fact that kids love to appear grown-up. They get fussy about their ages ("I'm not 10, I'm 10 and a half!"), copy their parents' habits and what they see on TV, hate not being able to do stuff on account of age or size, and might even try on Mummy and Daddy's clothes when they're a bit younger. A child's ultimate fantasy (and probably one shared by a few adults as well) is to live with all the opportunity and ability of an adult, but without any of the responsibility. For them, therefore, all this sexually-related material that permeates their lives is not about sex (which they probably don't understand properly, if at all) but about adulthood, and that just screams 'awesome' directly at them. We must also remember that it's not just the kids who're at it: parents love it when their children appear grown-up and mature because it makes the children seem special, a cut above their peers, subtly suggesting not only that their kids are better than everyone else's, but that they themselves are better parents. Therefore, whilst some parents might be appalled at the sight of a 9-year-old in heels and a miniskirt, others might think of her as quite the young woman, and perhaps even be jealous of the maturity that child seems to have compared to their own.

And then we must consider a fact that countless bits of market research have shown: sex appeal sells stuff. Even if children don't get the symbolism, their parents do, and whether the stuff they're buying is for themselves or their kids, a bright, smiling, good-looking woman is more likely to encourage them to buy something than an advert featuring a dour-looking bloke showing no interest whatsoever. This is especially true in fields such as scent, beauty products and fashionable clothing, all of which sell products actively designed to make you seem more attractive and, according to Freud at least, get you more sex. Even if you don't make that connection consciously, there's no doubt that your subconscious mind picks up on it- and that's before we even consider how the totally blatant use of sex, such as in tabloid page 3 columns, acts as a straight marketing hook. Put simply, sex appeal is an undeniably successful marketing strategy that makes perfect sense, from a purely fiscal point of view, to use.

To finish off, I would like to offer just a snippet of a history lesson. The 1920s were a great time for the USA, producing an economic boom thanks to the likes of Henry Ford, massive growth in cultural areas such as major league sport, and a reinvention of social mobility. For the first time, women had a degree of social freedom, particularly among the 'flappers', who would cut their hair short and drink and smoke in direct and deliberate contravention of the classical female norm. The invention of the car gave young people freedom from their parents and effectively invented the date, while in jazz the young of the Roaring Twenties had their own music and social scene too. This led, among other things, to a huge increase in sexual freedom among the young, and the media of the time reflected it. This was especially true of the cinema, a relatively new phenomenon, which quickly produced the first sex symbols in the likes of Rudolph Valentino and Clara Bow, prompting the advertising and marketing of the time to begin exploiting sex appeal as a means of selling products. Understandably, the older generation went into uproar over this cultural revolution, but it didn't make a scrap of difference, and a fresh wave of American culture swept across the world.

Sound familiar? It should do- it's the same thing people are complaining about now, and people have complained in the same way about the changes in every successive generation, be it teenagers in the 50s, hippies in the 60s or metal in the 70s. Culture changes, and that's just a fact of life. There's nothing wrong with being angry about it, but we must remember that society has survived each new wave of culture and come through none the worse for wear. If you want to uphold society, then forming a pressure group against each successive thing that offends you probably isn't the best way to weather the storm. You'll have far better results just sticking to what you do like, upholding the values you think are important, and trying to pass those on to your children. It'll be a lot less painful.