…but some are more equal than others

Seemingly the key default belief of any modern, respectable government and, indeed, any well brought-up child of the modern age, is that of egalitarianism- that all men are born equal. Numerous documents, from the US Declaration of Independence to the UN’s Universal Declaration of Human Rights, have proclaimed this as a ‘self-evident truth’, and anyone who still blatantly clings to the idea that some people are born ‘better’ than others by virtue of their family having more money is dubbed out of touch at best, and (bizarrely) a Nazi at worst. And this might be considered surprising given the extent to which we still set store by a person’s rank or status.

I mean, think about it. A child from a well-respected, middle-class family with two professional parents will invariably get more opportunities in life, and will frequently be considered more ‘trustworthy’, than a kid born into a broken home with a mother on benefits and a father in jail, particularly if his accent (especially) or skin colour (possibly to a slightly lesser extent in Europe than in the US) gives this away. Someone in an expensive, tailored suit stands a better chance at a job interview than a candidate in an old, fading jacket with worn knees on trousers he has never been rich enough to replace, and I haven’t even started on the wage and job availability gap between men and women, despite the fact that there are nowadays more female university graduates than male ones. You get the general idea. We might think that all are born equal, but that doesn’t mean we treat them like that.

Some have said that this, particularly in the world of work, is to do with the background and age of the people concerned. Particularly in large, old and incredibly valuable corporate enterprises such as banks, the senior staff and shareholders tend to be on the grey end of things; the majority of them are male, and many will have had the top-quality private education that allowed them to get there. The argument put forward is that these men were brought up surrounded by a ‘public schoolers are fantastic and everyone else is a pleb’ mentality. And it is without doubt true that very few companies have an average board-member age below 50, and many have one above 65; in fact the average age of a CEO in the UK has recently gone up from a decade-long value of 51 to nearly 53. However, the evidence suggests that the inclusion of younger board members and CEOs generally benefits a company by providing a fresher understanding of the modern world- data that could only be gathered because there are now a large number of young, high-ranking businesspeople to evaluate. And anyway, in most job interviews it’s less likely to be the board asking the questions than a recruiting officer of medium business experience- this may be an issue, but I don’t think it’s the key thing here.

It could well be that the true answer is that there is no cause at all, and the whole business is nothing more than a statistical blip. In Freakonomics, an analysis was done to find the twenty ‘blackest’ and ‘whitest’ boys’ names in the US (I seem to remember DeShawn was the ‘blackest’ and Jake the ‘whitest’), and the job prospects of people with names on either of those two lists were then compared. The results suggested that people with one of the ‘white’ names did better in the job market than those with ‘black’ names, perhaps suggesting that interviewers were being, subconsciously or not, racist. But a statistical analysis revealed this not, in fact, to be the case; we must remember that black Americans are, on average, less well off than their white countrymen, meaning they are more likely to go to a dodgy school, have problems at home or hang around with the wrong friends. Therefore, black people do worse, on average, in the job market because they are more likely to be less well qualified than their white equivalents, making them, from a purely analytical standpoint, often worse candidates. This meant that Jake was more likely to get a job than DeShawn simply because Jake was more likely to be a better-educated guy, so any racism on the part of job interviewers is not prevalent enough to be statistically significant. To some extent, we may be looking at the same thing here- people who turn up to an interview in cheap or hand-me-down clothes are likely to have come from a poorer background than someone in a tailored Armani suit, and are therefore likely to have had a lower standard of education, making them less attractive candidates to an interviewing panel. Similarly, women tend to drop their careers earlier in life if they want to start a family, since the traditional family model puts the man as chief breadwinner, meaning they are less likely to advance up the ladder and earn the high wages that could even out the difference in male/female pay.

But statistics cannot quite cover everything- to use another slightly tangential bit of research, a study done some years ago found that teachers gave higher marks to essays written in neat handwriting than they did to identical essays written in a messier hand. The neat handwriting suggested a diligent approach to learning and a good education in the child’s formative years, making the teacher think the child was cleverer, and thus deserving of more marks, than one with a scruffier, less orderly hand. Once again, we can draw parallels to our two guys in their different suits. Mr Faded may have good qualifications and present himself well, but his attire suggests to his interviewers that he is from a poorer background. We have a subconscious understanding of the link between poorer backgrounds and the increased risk of poor education and other compromising factors, and so the interviewers unconsciously conclude that our man has been less well educated than Mr Armani, even if the evidence presented before them suggests otherwise. They are not trying to be prejudiced; they just think the other guy looks more likely to be as good as his paperwork suggests. Some of it isn’t even linked to such logical connections; research suggests that interviewers, just like people in everyday life, are drawn to those they feel are similar to them, and they might also make the subconscious link that ‘my wife stays at home and looks after the kids, there aren’t that many women in the office, so what’s this one doing here?’- again, not deliberate discrimination, but it happens.

In many ways this is an unfortunate state of affairs, and one that we should attempt to remedy in everyday life whenever and wherever we can. But a lot of the stuff that to a casual observer might look prejudiced, might appear to violate our egalitarian creed, we do without thinking, letting our brain make connections that logic says it should not. The trick is not to ‘not judge a book by its cover’, but not to let your brain register that there’s a cover at all.

Other Politicky Stuff

OK, I know I talked about politics last time, and no I don’t want to start another series on this, but I actually found when writing my last post that I got very rapidly sidetracked when I tried to use voter turnout as a way of demonstrating the fact that everyone hates their politicians, and I thought I might dedicate a post to this particular train of thought as well.

You see, across the world, but predominantly in the developed west where the right to choose our leaders has been around for ages, fewer and fewer people are turning out each time to vote.  By way of an example, Ronald Reagan famously won a ‘landslide’ victory when coming to power in 1980- but only actually attracted the votes of 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but thoughts of introducing such a system elsewhere have frequently been met with opposition and claims that it goes against people’s democratic right to abstain (this argument is largely rubbish, but no time for that now).

A lot of reasons have been suggested for this trend, among them a sense of political apathy, laziness, and the idea that having had the right to choose our leaders for so long means we no longer find such an idea special or worth exercising. By way of contrast, the presidential election a little while ago in Venezuela – a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos – saw a voter turnout of nearly 90% (incumbent president Hugo Chavez winning with 54% of the vote to take his fourth term of office, in case you were interested), making Reagan look boring by comparison.

However, another, more interesting (hence why I’m talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, plus perhaps one or two others who may be standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you’re trying to avoid the ire of the rabidly aggressive ‘NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS’ brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people, and given the number of different issues they have views on that mean something to you, the chance of any one of them following your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: that when the US Declaration of Independence was first drafted over two centuries ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost what it likes (I said ALMOST) and still be confident of getting a significant proportion of the country voting for it; not only that, but its unpopular decisions can often be ‘balanced out’ by more popular, mass-appeal ones, rather than its every decision being the direct will of the people.

One solution would be to have a system more akin to Greek democracy, where every issue is answered by a referendum whose result the government must obey. However, this presents just as many problems as it answers; referendums are very expensive and time-consuming to set up and run, and if they became commonplace they could further enhance the existing issue of voter apathy. Only the most actively political would vote in every one, returning the real power to the hands of a relative few who, unlike previously, haven’t been voted in. Perhaps the most pressing issue with this solution, however, is that it renders the roles of MPs, representatives, senators and even Prime Ministers & Presidents rather pointless. What is the point of our society choosing those who really care about the good of their country and have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce them to administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we’ve got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if it’s simply because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily always what’s best for the country as a whole. Take Greece at the moment; massive protests are (or at least were; I know everyone’s still pissed off about it) underway over the austerity measures imposed by the government, because of the crippling economic suffering that is sure to result. However, the politicians know that such measures are necessary and are refusing to budge on the issue- desperate times call for difficult decisions (OK, I know there were elections that almost entirely centred on this decision and that sided with austerity, but shush- you’re ruining my argument). To pick another example, President Obama (like several Democratic candidates before him) has met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, this is something he believes in very strongly, and he has finally managed to get it through Congress; if he wins the election later this year, we’ll see how well he executes it.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protesting in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don’t know- that whole ‘internet petitions get debated in parliament’ thing the British government recently imported from Switzerland is a nice idea, but, just like more traditional forms of protest, it gives those in power no genuine categorical imperative to change anything. If I had a solution, I’d probably be running for government myself (which is one option that definitely works- just don’t all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

Bouncing horses

I have, over recent months, built up a rule against posts about YouTube videos, partly on the grounds that it’s bloody hard to make a full post out of them, but also because there are most certainly a hell of a lot of good ones out there that I haven’t heard of, so any discussion of them is sure to be incomplete and biased, which I try to avoid wherever possible. Normally, this blog also rarely delves into what might be even vaguely dubbed ‘current affairs’, but since it regularly does discuss the weird and wonderful world of the internet and its occasional forays into the real world, I thought that I might make an exception; today, I’m going to be talking about Gangnam Style.

Now officially the most liked video in the long and multi-faceted history of YouTube (taking over from the previous record holder and a personal favourite, LMFAO’s Party Rock Anthem), this music video by Korean rapper & pop star PSY was released over two and a half months ago, and for the majority of that time it lay in some obscure and foreign corner of the internet. Then, in that strange way that videos, memes and general random bits and pieces are wont to do online, it suddenly shot to prominence thanks to the web collectively pissing itself over the sight of a chubby Korean bloke in sunglasses doing ‘the horse riding dance’. Quite how this was even discovered by some casual YouTube-surfer is something of a mystery to me, given that said dance doesn’t even start until a good minute and a half in, but the fact remains that it was, and that it is now absolutely bloody everywhere. Only the other day it became the first ever Korean single to reach no.1 in the UK charts, despite never having been translated from its original language, and it has even prompted a dance-off between rival Thai gangs prior to a gunfight. Seriously.

Not that it has met with universal appeal, though. I’m honestly surprised that more critics didn’t get up in their artistic arms at the sheer ridiculousness of it, and at the apparent lack of reason for it to enjoy the degree of success that it has (although quite a few probably got that out of their system after Call Me Maybe), but several did nonetheless. Some have called it ‘generic’ in musical terms, others have found its general ridiculousness more tiresome and annoying than fun, and one Australian journalist commented that the song “makes you wonder if you have accidentally taken someone else’s medication”. That such criticism has been fairly limited can be partly attributed to the fact that the song itself is actually intended as a parody. Gangnam is a classy, fashionable district of the South Korean capital Seoul (PSY has likened it to Beverly Hills in California), and ‘Gangnam style’ is a Korean phrase referring to the kind of lavish & upmarket (if slightly pretentious) lifestyle of those who live there; or, more specifically, of the kind of posers & hipsters who claim to affect it. The song’s self-parody comes from the contrast between PSY’s lyrics, written from the first-person perspective of such a poser, and his deliberately ridiculous dress and dance style.

Such an act of deliberate self-parody has certainly helped to win plaudits from serious music critics, who have found themselves to be surprisingly good-humoured once told that the ridiculousness is deliberate and therefore actually funny- however, it’s almost certainly not the reason for the video’s over 300 million YouTube views, most of which surely come from people who’ve never heard of Gangnam and certainly have no idea of the people PSY is mocking. In fact, several different theories have been proposed as to why its popularity has soared quite so violently.

Most point to PSY’s very internet-friendly position on his video’s copyright. The Guardian claim that PSY has in fact waived his copyright to the video; what is certain is that he has declined to take any legal action over the dozens of parodies and alternate versions of it, allowing others to spread the word in their own, unique ways and giving it enormous potential to spread, and spread far. These parodies have been many and varied in content, author and style, ranging from the North Korean government’s version aimed at satirising the South Korean presidential candidate Park Geun-hye (breaking their own world record for most ridiculous entry into a political pissing contest, especially given that it mocks her supposed devotion to an autocratic system of government, and one moreover that ended over 30 years ago), to the apparently borderline racist “Jewish Style” (neither of which I have watched, so cannot comment on). One parody has even sparked a quite significant legal case, with 14 California lifeguards being fired for filming, dancing in, or even appearing in the background of, their parody video “Lifeguard Style”; an investigation has since been launched by the City Council in response to the thousands of complaints and suggestions, one even from PSY himself, that the local government were taking themselves somewhat too seriously.

However, by far the most plausible reason for the mammoth success of the video is also the simplest: that people simply find it funny as hell. Yes, it helps a lot that the joke was entirely intended (let’s be honest, he probably couldn’t have come up with quite such inspired lunacy by accident), and yes it helps how easily it has been able to spread, but to be honest the internet is almost always able to overcome such petty restrictions when it finds something it likes. Sometimes, giggling ridiculousness is just plain funny, and sometimes I can’t come up with a proper conclusion to these posts.

P.S. I forgot to mention it at the time, but last post was my 100th ever published on this little bloggy corner of the internet. Weird to think it’s been going for over 9 months already. And to anyone who’s ever stumbled across it, thank you; for making me feel a little less alone.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there’s a lot of things that only a few nerds understand properly, an awful lot of other stuff in our life to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the sense of ‘understand’ that applies in computer-based situations. Computers are a rare example of a complex system that an expert is genuinely capable of understanding in minute detail: every single aspect of the system’s working, what it does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with relatively few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some reading into the hows and whys of its installation, along with which came quite a lot of info about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform a range of jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output was not so reliable. The human brain is not built for infallibility and, not infrequently, would make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst the process was still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and convert between units, tens, hundreds and so on (ie a turn of 4 spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances too. However, it could also subtract, multiply and divide (with some difficulty), and moreover it proved an important point: that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
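
To make the carrying mechanism a little more concrete, here is a minimal sketch in code- an illustration of the principle described above, not a reconstruction of Pascal’s actual gearing. Each ‘cog’ holds a digit from 0 to 9, and pushing one past 9 nudges its neighbour on by one notch, which is how a seven plus a turn of four spaces brings up eleven.

```python
def turn(cogs, position, spaces):
    """Advance the cog at `position` by `spaces` notches, propagating
    any carry to the higher-order cogs (units -> tens -> hundreds)."""
    cogs[position] += spaces
    while position < len(cogs) and cogs[position] > 9:
        carry, cogs[position] = divmod(cogs[position], 10)
        position += 1
        if position < len(cogs):
            cogs[position] += carry
    return cogs

# cogs[0] is the units wheel, cogs[1] the tens, and so on
cogs = [7, 0, 0]          # a seven already counted on the units cog
print(turn(cogs, 0, 4))   # -> [1, 1, 0], ie eleven
```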

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several more, of a range of designs, were built during the 18th century as showpieces, but in the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates: a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine, his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to understand, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
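
The trick that let a pile of adding columns tabulate whole functions is the method of finite differences, and while I won’t pretend to follow Newton polynomials either, the basic idea is easy to mimic in a few lines of code. What follows is a sketch of the principle rather than anything resembling the engine’s actual mechanics: once the starting value of a polynomial and its differences are set up in the columns, every subsequent value falls out of pure, repeated addition- the one operation a stack of counting cogs can do.

```python
def difference_table(f, start, step, order):
    """Seed the columns: f(start) and its forward differences, which
    become constant at the `order`-th difference for a polynomial."""
    values = [f(start + i * step) for i in range(order + 1)]
    columns = []
    while values:
        columns.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return columns

def crank(columns, turns):
    """Each turn adds every column into the one above, yielding
    successive polynomial values using nothing but addition."""
    cols = list(columns)
    for _ in range(turns):
        yield cols[0]
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]

# tabulate f(x) = x^2 + x + 41 at x = 0, 1, 2...
f = lambda x: x * x + x + 41
print(list(crank(difference_table(f, 0, 1, 2), 5)))
# -> [41, 43, 47, 53, 61]
```

Crank the handle often enough and value after value churns out without a single multiplication ever being performed, which is precisely what made the design mechanisable.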

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but Babbage was often brash (once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him) and prized academia above fiscal matters & practicality. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design, known as the analytical engine; they pulled the plug and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with two separate inputs for data and the required program (which could be a lot more complicated than just adding or subtracting) and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure that the steps of a program were performed in the correct order. We may never know whether Babbage’s analytical engine would have worked, since it has never been built, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

Misnomers

I am going to break two of my cardinal rules at once over the course of this post, for it is the first in the history of this blog that could be adequately described as a whinge. I have something of a personal hatred of these, on the principle that they never improve anybody’s life or even the world in general, but I’m hoping that this one is at least well-meaning and not as hideously vitriolic as some ‘opinion pieces’ I have had the misfortune to read over the years.

So…

A little while ago, the BBC published an article concerning the arrest of a man suspected of being a part of hacking group LulzSec, an organised and select offshoot of the infamous internet hacking advocates and ‘pressure group’ Anonymous. The FBI have accused him of being part of a series of attacks on Sony last May & June, in which thousands of personal details from competition entries were published online. LulzSec at the time made a statement to the effect of ‘we got all these details from one easy sting, so why do you trust them?’, which might have made the attack a case of trying to prove a point, had the point not been directed at an electronics company and thus been kind of stupid. Had it been aimed at a government I might have understood, but to me this just looks like the internet doing what it does best- doing stuff simply for the fun of it. This is in fact the typical motive behind most LulzSec activities, doing things ‘for teh lulz’, hence the first half of their name and the fact that their logo is a stick figure in typical meme style.

The BBC made reference to their name too in their coverage of the event, but since the journalist involved had clearly taken his information from a rather poorly-worded sentence of a Wikipedia article, he claimed that ‘lulz’ was a play on ‘lol’, aka laugh out loud. This is not, technically speaking, entirely wrong, but it is a bit like claiming the word ‘gay’ can now be used to mean happy in general conversation- something of an anachronism, albeit a very recent one. Lulz in the modern internet sense is used more to mean ‘laughs’ or ‘entertainment’, and ‘for teh lulz’ could even be translated as simply ‘for the hell of it’. As I say, the claim was not so much expressly wrong as it was revealing that this journalist was either not especially good at getting his point across or was dealing with slightly unfamiliar subject matter.

This is not the only example of the media getting things a little wrong when it comes to the internet. A few months ago, after a man was arrested for viciously abusing a celebrity (I forget who) using Twitter, he was dubbed a ‘troll’, a term that, according to the BBC article I read, denotes somebody who uses the internet to bully and abuse people (sorry for picking on the BBC, because a lot of others do it too, but I read them more than most other news sources). However, any reasonably experienced denizen of the internet will be able to tell you that the word ‘troll’ originated from the activity known as ‘trolling’, etymologically thought to originate from fishing (from a similar root to ‘trawling’). The idea behind this is that the original term was used in the context of ‘trolling for newbies’, ie laying down an obvious feeder line that an old head would recognise as being both obvious and discussed to death, but that a newer face would respond to earnestly. Thus ‘newbies’ were fished for and identified, mostly for the amusement of the more experienced faces. Trolling has thus come to mean making jokes or provocative comments for one’s own amusement and at the expense of others, and a ‘troll’ is somebody who trolls others. Whilst it is perhaps not the most noble of human activities, and some repeat offenders could definitely do with a bit more fresh air now and again, it is mostly harmless and definitely not to be taken altogether too seriously. What it is also not is a synonym for internet abuse or even (as one source has reported it) ‘defac[ing] Internet tribute sites with the aim of causing grief to families’. That is just plain old despicable bullying, something that has no place on the internet or in the world in general, and dubbing casual humour-seekers as such just gives mostly alright people an unnecessarily bad name.

And here we get to the bone I wish to pick: that the media, as a rule, do not appear to understand the internet or its culture, and instead treat it almost like a child’s plaything, a small distraction whose society is far less important than its ability to spawn companies. There may be an element of fear involved, an intentional mistrust of the web and a desire to hold off embracing it as long as possible, for mainstream media is coming under heavy competition from the web and many have argued that the latter may soon kill the former altogether. That may be so, but news organisations should be obliged to act with at least a modicum of neutrality and respectability, especially a service such as the BBC that does not depend on commercial funding anyway. It would perhaps not be too much to ask for a couple of organisations to hire an internet correspondent, to go with their food, technology, sports, science, environment, every-country-around-the-world, domestic, travel and weather ones, if only to allow issues concerning it to be conveyed accurately by someone who knows what he’s talking about. If it’s good enough for the rest of the world, then it’s surely good enough for the culture that has made mankind’s greatest invention what it is today.

OK, rant over, I’ll do something a little more normal next time out.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four-Letter Legal Acronyms) from SOPA to ACTA has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they have also provoked a lot of debate, ranging in intelligence from the intellectual to that of the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival, and the chance of ensuring that the next meal would be coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, the design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and so on, and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on bigger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for a country does not necessarily stay as good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the start of the 16th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social change of the few hundred years between the Renaissance and today: the establishment of a less feudal and more merit-based social system, with proper professions springing up as opposed to general peasantry, meant that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also profit from the work you put in by running off copies of your book on their machine. This could significantly cut or even (if the other salesman was rich and could afford to undercut your prices) nullify any profits you stood to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material which could not only benefit the nation’s happiness but also potentially have great material use are a valuable resource, and that they should be doing what they can to promote the production of that material, if only to save having to put in the large investment of time and resources themselves. So it makes sense to encourage the production of this material by ensuring that people have a financial incentive to produce it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or who is responsible for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…

The Land of the Red

Nowadays, the country to talk about if you want to be seen as politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls), with an economy and defence budget second only to the USA’s in size, it also features a gigantic manufacturing and raw materials extraction industry, the world’s largest standing army and one of only five remaining communist governments. In many ways, this is China’s second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world’s largest economy for most of the intervening time. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time the 2,000-year-old Imperial China was replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second-biggest loss of any country after the Soviet Union, as a Japanese army fresh from its own revolution from an Imperial to a modern system went on a rampage of rape, murder and destruction throughout underdeveloped northern China, where some warlords still fought with swords. The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People’s Republic, then led by former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao’s reign, the Chinese population near-doubled in an effort to increase the available workforce, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao’s “Great Leap Forward”, in which he tried to move his country away from its previously agricultural economy and into a more manufacturing-centric system. However, whilst the Chinese government insists to this day that the three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and killed only 15 million people, most external commentators agree that the sudden change in the availability of food thanks to the Great Leap certainly contributed to a death toll estimated to actually be in the region of 20-40 million. Oh, and the whole business was an economic failure, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food with useless, low-quality pig iron.

This event in many ways typifies the Chinese way- that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it, partially reflecting a disregard for the value of life historically also common in Japan. China is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won; a country whose main justification for the “Great Leap Forward” was to try and bring about a state of social structure & culture that the government could effectively impose socialism upon, as it also tried to do during its “Cultural Revolution” in the mid-sixties. All that served to do was get a lot of people killed, result in a decade of absolute chaos and destroy China’s education system; and, despite reaffirming Mao’s godlike status (partially thanks to an intensification in the formation of his personality cult), some of his actions rather shamed the governmental high-ups, forcing the party to take the angle that, whilst his guiding thought was of course still the foundation of the People’s Republic and entirely correct in every regard, his actions were somehow separate from that and got rather brushed under the carpet. It did help that, by this point, Mao was dead and unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, all the destruction and all the political upheaval (nowadays the government is still liable to arrest anyone who suggests that the Cultural Revolution was a good idea), these things shaped China into the powerhouse it is today. It may have slaughtered millions of people and resolutely not worked for 20 years, but Mao’s focus on a manufacturing economy has now started to bear fruit and give the Chinese economy a stable footing that many countries would dearly love in these days of economic instability. It may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but Chinese communism has allowed the government to control its labour force and industry effectively, allowing it to escape the worst ravages of the last few economic downturns and preventing internal instability. And the extent to which it has forced itself upon the people of China for decades, forcing them into the party line with an iron fist, has allowed its controls to be gently relaxed in the modern era whilst ensuring the government’s position is secure, to an extent satisfying the criticisms of western commentators. Now, China is rich enough and positioned solidly enough to placate its people, to keep up its education system and build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas are not making their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

Fitting in

This is my third post in this little mini-series on the subject of sex & sexuality in general, and this time I would like to focus on the place that sex has in our society. On the face of it, we as a culture appear to be genuinely embarrassed by its existence a lot of the time: it is rarely referred to explicitly, if at all (at least certainly not among ‘correct’ and polite company), and any mention of it is a cause of scandal and embarrassment. Indeed, an entire subset of language seems to have developed over the last few years to enable us to talk about sexually-related things without ever actually making explicit reference to them- it’s an entire area where you just don’t go in conversation. It’s almost as if polite society has the mental age of a 13-year-old in this regard.

Compare this to the societal structure of one of our closest relatives, the ‘pygmy great ape’ called the bonobo. Bonobos adopt a matriarchal (female-led) society, are entirely bisexual, and for them sex is a huge part of their social system. If a pair of bonobos are confronted with any new and unusual situation, be it merely the introduction of a cardboard box, their immediate response will be to briefly start having sex with one another almost to act as an icebreaker, before they go and investigate whatever it was that excited them. Compared to bonobos, humans appear to be acting like a bunch of nuns.

And this we must contrast against the fact that sex is something we are not only designed for but that we actively seek and enjoy. Sigmund Freud is famous for claiming that most of human behaviour can be put down to the desire for sex, and as I have explained in a previous post, it makes evolutionary sense for us both to enjoy sex and to seek it wherever we can. It’s a fact of life, something very few of us would be comfortable to do without, and something our children are going to have to come to terms with eventually- yet it’s something that culture seems determined to brush under the carpet, and that children are continually kept away from for as long as is physically possible in a last-ditch attempt to protect whatever innocence they have left. So why is this?

Part of the reasoning behind this would be the connection between sex and nakedness, as well as the connection to privacy. Human beings do not, obviously, have thick fur to protect themselves or keep them warm (nobody knows exactly why we lost ours, but it’s probably to do with helping to regulate temperature, which we humans do very well), and as such clothes are a great advantage to us. They can shade us when it’s warm and allow for more efficient cooling, protect us from harsh dust, wind & weather, keep us warm when we venture into the world’s colder climates, help stem blood flow and lessen the effect of injuries, protect us against attack from predators or one another, help keep us a little cleaner and replace elaborate fur & feathers for all manner of ceremonial rituals. However, they also carry a great psychological value, placing a barrier between our bodies and the rest of the world, and thus giving us a sense of personal privacy about our own bodies. Of particular interest to our privacy are those areas most commonly covered, including (among other things) the genital areas, which must be exposed for sexual activity. This turns sex into a private, personal act in our collective psyche, something to be shared only between the partners involved, making any exploration of it seem like an invasion of our personal privacy. In effect, then, it would seem the Bible got it the wrong way around- it was clothes that gave us the knowledge and shame of nakedness, and thus the ‘shame’ of sex.

Then we must consider the social importance of sex & its consequences in our society generally. For centuries the entire governmental structure of most of the world was based around who was whose son, who was married to whom and, in various roundabout ways, who either had, was having, or could in the future be having, sex with whom. Even nowadays the due process of law usually means inheritance by next of kin, spouse or partner, and so the consequences of sex carry immense weight. Even in the modern world, with the invention of contraceptives and abortion and the increasing prevalence of ‘casual sex’, sex itself carries immense societal weight, often determining how you are judged by your peers, your ‘history’ among them and your general social standing. To quote a favourite song of a friend of mine: ‘The whole damn world is just as obsessed/ With who’s the best dressed and who’s having sex’. And so sex becomes this huge social thing, its pursuit full of little rules and nuances, all about who with whom (and even the where & how among some social groups), and it is simply not allowed to become ‘just this thing everyone does’ as it is with the bonobos. Thus, everything associated with sex & sexuality becomes highly strung and almost political in nature, making it a semi-taboo to talk about for fear of offending someone.

Finally, we must consider the development of the ‘sexual subculture’ that seems to run completely counter to this taboo attitude. For most of human history we have comfortably accepted and even encouraged the existence of brothels and prostitution, and whilst this has become very much frowned upon in today’s culture the position has been filled by strip clubs, lap dancing bars and the sheer mountains of pornography that fill the half-hidden corners of newsagents, small ads and the internet. This is almost a reaction to the rather more prim aloofness adopted by polite society, an acknowledgement and embracing of our enjoyment of sex (albeit one that caters almost exclusively to heterosexual men and has a dubious record for both women’s and, in places, human rights). But because this is almost a direct response to the attitudes of polite culture, it has naturally attracted connotations of being seedy and not respectable. Hundreds of men may visit strip clubs every night, but that doesn’t make it an OK career move for a prominent judge to be photographed walking out of one. Thus, as this sex-obsessed underworld has come into being on the wrong side of the public eye, so sex itself has attracted the same negative connotations, the same sense of lacking in respectability, among the ‘proper’ echelons of society, and has gone even more into the realms of ‘Do Not Discuss’.

But, you might say, sex appears to be getting even more prevalent in the modern age. You’ve mentioned internet porn, but what about the sexualisation of the media, the creation and use of sex symbols, the targeting of sexual content at a steadily younger audience? Good question, and one I’ll give a shot at answering next time…

The Power of the Vote

Winston Churchill once described democracy as ‘the worst form of government, except for all the others that have been tried’, and to be honest he may have had a point. Despite being championed throughout modern culture and interpretations of history as the ultimate in terms of freedom and representation of the common people, it is certainly not without its flaws. Today I would like to focus on just one in particular, one whose relevance has become ever more important in today’s multi-faceted existence: the power of a vote.

Voting is, of course, the core principle of democracy, a simple and unequivocal way of indicating who most people would prefer as their tyrannical overlord/general manager and representative for the next five or so years. Not only does it allow the common people to control who is in power, it also allows them control over that person once they are in that position, for any unpopular decisions made over the course of their tenure will surely come back to haunt them come the next election.

This principle works superbly so long as the candidate in question can be judged against simple criteria, a sort of balance sheet of good and bad as directly applicable to you. As such, the voting principle works absolutely fine in a small enough situation, where there are only a few issues directly applicable to the candidate in question- a small local community, for instance. There, the performance of the incumbent and the promises of any challengers can be evaluated against simple, specific issues and interests, and the voting process is representative of who people think will do best for them.

Now, let us consider the situation when things get bigger- take the process for electing a Prime Minister, for example. A voter in Britain does not directly elect his or her PM, but instead elects a Member of Parliament for his area, and whichever party that MP belongs to affects who will get the top job eventually. The US Presidential elections are similarly indirect, via the Electoral College, but the difference in Britain is that MPs actually have power and fulfil the roles of Representatives/Congressmen as well. Thus, any voter has to consider a whole host of issues: which party each candidate is from and where said party stands on the political spectrum; what the policies of the various competitors are; how many of that myriad of policies agree or disagree with your personal opinion; how their standpoints on internet freedom/abortion/whatever else is of particular interest to you compare to yours; whether they look like they will represent you in Parliament or just parrot party lines; and what issues they are particularly keen on addressing, to name but a few in the most concise way possible. That’s an awful lot of angles to consider, and the chance of any one candidate agreeing with any one voter on every one of those issues is fairly slim. This means that no matter who you vote for (unless you run for office yourself, a tactic that is happily becoming more and more popular lately), no candidate is ever going to accurately represent your views of their own accord, and you’ll simply have to make do with the best of a bad job. The other, perhaps even more unfortunate, practical upshot of this is that a candidate can make all sorts of unpopular decisions, but still get in come the next election on the grounds that his various other, more popular, policies or standpoints are still considered preferable to his opponent’s.

Then, we must take into account the issue of ‘safe seats’- areas where the majority of voters are so set in their ways when it comes to supporting one party or another that a serial killer wearing the appropriate rosette could still get into power. Here the floating voters, those who are most likely to swing one way or the other and thus affect who gets in, have next to no influence on the eventual outcome. In these areas especially, the candidate for the ‘safe’ party can be held responsible for next to nothing that he does, because those who would be inclined to punish him at the ballot box are unable to swing the eventual outcome.

All this boils down to a simple truth- that in a large situation involving a lot of people and an awful lot of complicating factors, one candidate can never be truly representative of all his constituents’ wishes, and can often not be held accountable at the ballot box for his more unpopular decisions. Sure, as a rule candidates do like to respect the wishes of the mob where possible, just on the off-chance that one unpopular decision could be the straw that breaks the back of his next re-election campaign. But it nonetheless holds true that a candidate can (if he wants) usually go ‘you know what, screw what they want’ a few times during his tenure and get away with it, particularly if those decisions occur early in his time in office and are thus largely forgotten come the next election- and that can cause the fundamental principles of democracy to break down.

However, whilst this might seem like a depressing prospect, there is a glimmer of hope, and it comes from a surprising source. You see, whilst one is perfectly capable of making a very good living out of politics, it is certainly not the best-paid career in the world- if you have a lust for money, you would typically go into business or perhaps medicine (if you were really going to get cynical about it). Most politicians go into politics nowadays not because they have some all-consuming lust for power or because they want to throw their country’s finances around, but because they have strong political views and would like to be able to change the world for the better, and because they care about the political system. It is simply too much effort to work up the political ladder for personal and corrupt reasons when there are far easier and more lucrative roads to power and riches elsewhere. Thus, your average politician is not simply some power-hungry arch-bureaucrat who wishes to see his people crushed beneath his feet in the pursuit of making himself more cash, but a genuine human being who cares about making things better for people- for it is from this pursuit that he gets his job satisfaction. That, if anything, is the true victory of a stable democracy- it gets the right kind of people pursuing power.

Still doesn’t mean they should be let off the hook, though.

The Great Madiba*

I have previously mentioned on this blog that I have a bit of a thing for Nelson Mandela. I try not to bring this up too much, but when you happen to think that someone was the greatest human being who ever lived, that can be a touch tricky. I also promised myself that I would not do another one-man adulation-fest for a while either, but today happens to be his ninety-fourth (yes, 94th) birthday, so I felt that one might be appropriate.

Nelson Mandela was born in 1918, the son of a Xhosa tribal chief, and was originally named Rolihlahla, or ‘troublemaker’ (the name Nelson was given to him when he attended school). South Africa at the time was still not far removed from the Boer War, a conflict that historians have found difficult to take sides in: the British, led by Lord Kitchener of the ‘Your Country Needs You’ WWI posters, took the opportunity to invent the concentration camp, whilst the Dutch/German-descended Boers both preached and practiced brutal racial segregation. It wasn’t until 1931 that South Africa was awarded any real degree of independence from Britain, and not until 1961 that it became officially independent.

However, a far more significant political event occurred in 1948, with the coming to power of the National Party of South Africa, which was dominated by white Afrikaners. Theirs was the first government to implement apartheid, a legal and political system that enforced the separation of white & black South Africans in order to maintain the political power of the white minority. Its basic tenet was the division of all people into one of four groups; in descending order of rank, these were White, Coloured, Indian (a large racial group in South Africa- in fact a young Mahatma Gandhi spent a lot of time in the country before Mandela was born and pioneered his methods of peaceful protest there) and Black. All had to carry identification cards, and all bar whites were effectively forbidden to vote. The grand plan was to send all ‘natives’ bar a few workers to one of ten ‘homelands’, leaving the rest of the country for white South Africans. A huge number of laws, many bearing a striking resemblance to those used by Hitler to segregate Jews, enforced this separation (such as the banning of mixed marriages), and there was even a system by which one could be up- (or even down-) graded in rank.

Mandela was 30 when apartheid was introduced, and it was then that he began to take an active role in politics. He joined the black-dominated African National Congress (ANC) and began to oppose the apartheid system. He originally stuck to Gandhi’s methods of nonviolent protest and was arrested several times, but he became frustrated as protests against the government were brutally suppressed, and he began to turn to more aggressive measures. In the early sixties he co-founded and led the ANC’s militant (some would say terrorist) wing, coordinating attacks on symbols of the apartheid regime. This mainly took the form of sabotage attacks against government offices & such (he tried to avoid targeting or hurting people), and Mandela later admitted that his party did violate human rights on a number of occasions. Mandela was even forbidden to enter the United States without special permission until 2008, because as an ANC member he had been classified a terrorist.

Eventually the law caught up with him, and Mandela was arrested in 1962. Initially jailed for five years for inciting workers to strike, he was later found guilty of multiple counts of sabotage and sentenced to life imprisonment (only narrowly escaping the death penalty, and at one point turning up to court in full Xhosa ceremonial dress). He was transported to the infamously tough Robben Island prison and spent the next 18 years, between the ages of 45 and 63, working in a lime quarry. As a black man, and a notorious political prisoner, Mandela was granted few, if any, privileges, and his cell was roughly the same size as a toilet cubicle. However, whilst inside, his fame grew- his image as a man fighting an oppressive system spread around the world and earned the apartheid regime notoriety and hatred. In fact, the South African intelligence services even tried to get him to escape so they could shoot him and strip him of his iconic status. There were numerous pleas and campaigns to release him, and by the 1980s things had come to a head- South African teams were ostracised in virtually every sport (including rugby, a huge part of the Afrikaner lifestyle), and the South African resort of Sun City had become a pariah that almost every western rock act refused to visit, all amidst a furious barrage of protests.

After Robben Island, Mandela spent a further nine years in mainland prisons, during which time he refined his political philosophy. He also learned to speak Afrikaans and held many talks with key government figures, who were overawed by both his physical presence (he had been a keen boxer in his youth) and his powerful, engaging and charming force of personality. In 1989, things took a whole new turn with the coming to power of FW de Klerk, whom I rate as the South African equivalent of Mikhail Gorbachev. Recognising that the tides of power were turning against his apartheid system, he began to grant the opposition concessions, unbanning the ANC and, in 1990, releasing Mandela after nearly three decades in prison (Mandela holds the world record for the longest imprisonment of a future president). Then followed four long, strained years of negotiations over how best to redress the system, broken by a famous visit to the Barcelona Olympics and the joint awarding, in 1993, of the Nobel Peace Prize to both Mandela and de Klerk, before the ANC got what it had spent all those years campaigning for- the right of black citizens to vote.

Unsurprisingly, Mandela (by now aged 75) won a landslide in the elections of 1994 and quickly set about dismantling the apartheid regime. However, many white South Africans lived in fear of what was to come- the prospect of ‘the terrorist’ Mandela now having free rein to persecute them as much as he liked was a quite terrifying one, and a scenario that had played out multiple times in other African nations (perhaps the best example is Zimbabwe, where Robert Mugabe went from being the first black leader of a new nation to an aggressive dictator who oppressed his people and used the race card as justification). Added to that, Mandela faced the huge political challenges of a country racked by crime, unemployment and numerous issues ranging from healthcare to education.

However, Mandela recognised that the white population were the best educated and controlled most of the government, police force and business of his country, and so had to be placated. He even went so far as to interrupt a meeting of the national sports council to persuade them to revoke a decision to drop the name and symbol of the Springboks (South Africa’s national rugby side, and a huge symbol of the apartheid regime) in order to keep them happy. His perseverance paid off- the white population responded to his lack of prejudice by turning the boom in international trade caused by apartheid’s end into a quite sizeable economic recovery. Even the Springboks became a unifying force for his country: they were sent off to coaching clinics in black townships, and were inspired to such an extent by Mandela and his request for South Africans of all creeds to get behind the team that they overcame both their underdog tag and the mighty New Zealand (and more specifically Jonah Lomu, their 19-stone winger who ran 100m in under 11 seconds) to win their home World Cup in 1995, igniting celebrations across the country and presenting South Africa as the Rainbow Nation Mandela had always wanted it to be. And despite his age and declining health, he would only ever sleep for a few hours every night (claiming he had rested long enough in prison), donated a quarter of his salary to charity on the grounds that he felt it was too much, and had to juggle his active political life around a damaged family life (his second wife having divorced him, and his children having some disagreements with his politics).

It would have been easy for Mandela to exact revenge upon his former white oppressors, stripping them of their jobs, wealth and privilege in favour of a new, black-orientated system- after all, blacks were the majority racial group in the country. But this is what makes Mandela so special- he didn’t take the easy option. He was not, and has never been, a black supremacist, nor one given to knee-jerk reactions- he believed in equality for all, including the whites who had previously extended no such fair hand to him. He showed the world how to ‘offer the other cheek’ (in Gandhi’s words), and how to stand up for something you believe in. But most importantly, he showed us all that the world works best when we give up thoughts of vengeance and petty selfishness, and instead come together as a brotherhood of humanity. Mandela’s legacy to the world will not be that of his brilliant political mind, nor the education, healthcare or economic systems he put in place to revive his country, nor even the extraordinary dedication, perseverance and strength of will he showed throughout his long years behind bars. Nelson Mandela taught the world how to be a human being.

*Madiba is Mandela’s Xhosa clan name, and he is referred to affectionately as such by many South Africans