Why the chubs?

My last post dealt with the thorny issue of obesity: both its increasing presence in our everyday lives, and what for me is the underlying reason behind the stats that back up media scare stories concerning ‘the obesity epidemic’, namely the rise in size of the ‘average’ person over the last few decades. The precise causes of this trend can be put down to a whole host of societal factors within our modern age, but that story is boring as hell and has been repeated countless times by commentators far more adept in this field than me. Instead, today I wish to present the case for modern-day obesity as a problem concerning the fundamental biology of a human being.

We, and our dim and distant ancestors of the scaly/furry variety, have spent the last few million years living wild; hunting, fighting and generally acting much like any other product of evolution. Thus, we can learn a lot about our own inbuilt biology and instincts by studying the behaviour of animals alive today, and when we do so, several interesting animal eating habits become apparent. As anyone who has tried it as a child can attest (and I speak from personal experience), grass is not good stuff to eat. It’s tough, it takes a lot of chewing and processing (many herbivores have multiple stomachs to make sure they squeeze the maximum nutritional value out of their food), and there really isn’t much energy in it to power a fully-functional being. As such, grazers on grass and other such tough plant matter (such as leaves) will spend most of their lives doing little but guzzle the stuff, trying to get as much as possible through their systems. Other animals favour food with a higher nutritional content, such as fruits, tubers or, in many cases, meat, but these frequently present issues. Fruits are highly seasonal and rarely available in large enough volumes to support a large population, as well as being quite hard to eat in bulk; plants ‘design’ fruits so that each visitor takes only a few at a time, the better to spread their seeds far and wide, and as such there are few animals that can sustain themselves on such a diet. Other foods such as tubers or nuts are hard to get at, needing to be dug up or broken open in highly energy-consuming activities, whilst meat has the annoying habit of running away or fighting back whenever you try to get at it. As anyone who watches nature documentaries will attest, most large predators eat only once every few days (admittedly rather heavily).

The unifying factor in all of this is that food in the wild is highly energy- and time-consuming to get hold of and consume, since every source of it guards its prize jealously. Therefore, any animal that wants to survive in this tough world must be near-constantly in pursuit of food simply to fulfil all of its life functions, and this is characterised by being perpetually hungry. Hunger is the body’s way of telling us that we should get more food, and in the wild this constant desire for more is kept in check by the difficulty of getting hold of it. Similarly, animal bodies temper this desire by being lazy; if something isn’t necessary, then there’s no point wasting valuable energy going after it (since this would mean spending more time going after food to replace the lost energy).

However, in recent history (and a spectacularly short period of time from evolution’s point of view), one particular species called Homo sapiens came up with this great idea called civilisation, which basically entailed the pooling and sharing of skills and resources in order to best benefit everyone as a whole. As an evolutionary success story, this is right up there with developing multicellular body structures in terms of being awesome, and it has enabled us humans to live far more comfortable lives than our ancestors did, with correspondingly far greater access to food. This has proved particularly true over the last two centuries, as technological advances in a more democratic society have improved the everyman’s access to food and comfortable living to a truly astounding degree. Unfortunately (from the point of view of our waistlines) the instincts of our bodies haven’t quite caught up with the idea that when we want or need food, we can simply get it, without all that inconvenient running around after it. Not only that, but a lack of pack hierarchy combined with this increased availability means that we can stock up on food until we have eaten our absolute fill, if we so wish; the difference between ‘satiated’ and ‘stuffed’ can work out at well over 1000 calories per meal, and over a long period it takes only a little more than we should be having every day to start packing on the pounds. Combine that with our natural predilection for laziness, meaning that we don’t naturally think of going out for some exercise as fun purely for its own sake, and the fact that we no longer burn calories chasing our food, or in the muscles we would build up from said chasing, and we find ourselves consuming a lot more calories than we really should be.
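To put rough numbers on that ‘little more every day’: a common rule of thumb (a simplification, since the body’s metabolism adapts over time, so treat it as an illustration rather than a prediction) is that around 3,500 excess calories get stored as roughly one pound of body fat. A quick sketch of the arithmetic:

```python
# Rule-of-thumb estimate: ~3500 kcal of surplus energy is stored as ~1 lb of fat.
# This ignores metabolic adaptation, so it's an illustration, not a prediction.
KCAL_PER_LB_FAT = 3500

def annual_weight_gain_lb(daily_surplus_kcal: float) -> float:
    """Estimated pounds gained per year from a constant daily calorie surplus."""
    return daily_surplus_kcal * 365 / KCAL_PER_LB_FAT

# A mere 100 kcal/day over maintenance (roughly one biscuit)...
print(round(annual_weight_gain_lb(100), 1))  # → 10.4 (pounds per year)
```

So even a single biscuit’s worth of daily excess, sustained, adds up to around ten pounds a year.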

Not only that, but during this time we have also got into the habit of spending a lot of time worrying over the taste and texture of our food. This means that, unlike our ancestors, who were just fine with simply jumping on a squirrel and devouring the thing, we have to go through the whole rigmarole of getting stuff out of the fridge, spending two hours slaving away in a kitchen and attempting to cook something vaguely resembling a tasty meal. This wait is not something our bodies enjoy very much, meaning we often turn to ‘quick fixes’ when in need of food: stuff like bread, pasta or ready meals. Whilst we all know how much crap goes into ready meals (which should, as a rule, never be bought by anyone who cares even in the slightest about their health; the salt content of those things is insane) and other such ‘quick fixes’, fewer people are aware of the impact a high intake of grains can have on our bodies. Stuff like bread and rice only started being eaten by humans a few thousand years ago, as we discovered the benefits of farming and cooking, and whilst they are undoubtedly a good food source (and are very, very difficult to cut from one’s diet whilst still remaining healthy), our bodies have simply not had enough time, evolutionarily speaking, to get used to them. This means they have a tendency not to make us feel as full as their calorie content would suggest, meaning that we eat more than our bodies in fact need (if you want to feel full whilst not taking in so many calories, protein is the way to go; meat, fish and dairy are great for this).

This is all rather academic, but what does it mean for you if you want to lose a bit of weight? I am no expert on this, but then again neither are most of the people acting as self-proclaimed nutritionists in the general media, and anyway, I don’t have any better ideas for posts. So, look to my next post for my, admittedly basic, advice for anyone trying to make themselves that little bit healthier, especially if you’re trying to work off a few of the pounds built up over this festive season.


One Year On

A year is a long time.

On the 16th of December last year, I was on Facebook. Nothing unusual about this (I spent, and indeed still spend to a slightly lesser extent, rather too much time with that little blue f in the top corner of my screen), especially given that it was the run-up to Christmas and I was bored; nor was the precise content of the bit of Facebook I was looking at unusual: an argument. Such things are common in the weird world of social networking, although they surely shouldn’t be, and this was just another such time. Three or four people were posting long, eloquent, semi-researched and furiously defended messages over some point of ethics, politics or internet piracy, I know not which (it was probably one of those anyway, since that’s what most of them seem to be about among my friends list). Unfortunately, one of those people was me, and I was losing. Well, I say losing; I don’t think anybody could be said to be winning, but I was getting angry and upset all the same, made worse by the realisation that what I was doing was a COMPLETE WASTE OF TIME. I am in no position for my Views to have a massive impact on the lives of everyone else, nobody wants to hear what they are, and there was no way in hell that I was going to convince anyone that my opinion was more ‘right’ than their strongly-held conviction; all my fellow arguees and I were achieving was getting very, very angry at one another, actively making us all more miserable. We could pretend that we were debating an important issue, but in reality we were just another group of people screaming at one another via the interwebs.

A little under a week later, the night after the winter solstice (22nd of December, which you should notice was exactly 366 days ago), I was again to be found watching an argument unfold on Facebook. Thankfully this time I was not participating, merely looking on with horror as another group of four or five people made their evening miserable by pretending they could convince others that they were ‘wrong’. The provocativeness of the original post, spouting one set of Views as gospel truth over the web, the self-righteousness of the responses and the steadily increasing vitriol of the resulting argument, all struck me as a terrible waste of some wonderful brains. Those participating I knew to be good people, smart people, capable of using their brains for, if not betterment of the world around them, then perhaps a degree of self-betterment or at the very least something that was not making the world a more unhappy place. The moment was not a happy one.

However, one of the benefits of not competing in such an argument is that I didn’t have to be reminded of it or spend much time watching it unfold, so I turned back to my news feed and began scrolling down. As I did so, I came to another friend, putting up a link to his blog. This was a recent experiment of his, only a few posts old at the time, and he self-publicised it religiously every time a post went up. He has since discontinued his blogging adventures, to my disappointment, but they made fun reading whilst they lasted; short (mostly less than 300 words) and covering a wide range of random topics. He wasn’t afraid to just be himself online, and wasn’t concerned about being definitively right; if he offered an opinion, it was just something he thought, no more & no less, and there was no sense that it was ever combative. Certainly that was never the point of any post he made; each was just something he’d encountered in the real world or online that he felt would be relatively cool and interesting to comment on. He described his posts as ‘musings’, and that was the right word for them; harmless, fun and nice. They made the internet, and the world in general, in some tiny little way, a nicer place to explore.

So, I read through his post. I smirked a little, smiled and closed the tab, returning once more to Facebook and the other distractions & delights the net had to offer. After about an hour or so, my thoughts once again turned to the argument, and I rashly flicked over to look at how it was progressing. It had got to over 100 comments and, as these things do, was gradually wandering off-topic to a more fundamental, but no less depressing, point of disagreement. I was once again filled with a sense that these people were wasting their lives, but this time my thoughts were both more decisive and introspective. I thought about myself; listless, counting down the last few empty days before Christmas, looking at the occasional video or blog, not doing much with myself. My schedule was relatively free, I had a lot of spare time, but I was wasting it. I thought of all the weird and wonderful thoughts that flew across my brain, all the ideas that would spring and fountain of their own accord, all of the things that I thought were interesting, amazing or just downright wonderful about our little mental, spinning ball of rock and water and its strange, pink, fleshy inhabitants that I never got to share. Worse, I never got to put them down anywhere, so after time all these thoughts would die in some forgotten corner of my brain, and the potential they had to remind me of themselves was lost. Once again, I was struck by a sense of waste, but also of resolve; I could try to remedy this situation. So, I opened up WordPress, I filled out a few boxes, and I had my own little blog. My fingers hovered over the keyboard, before falling to the keys. I began to write a little introduction to myself.

Today, the role of my little corner of the interwebs has changed somewhat. Once, I would post poetry, lists, depressed trains of thought and last year’s ’round robin letter of Planet Earth’, which I still regard as one of the best concepts I ever put onto the net (although I don’t think I’ll do one this year- not as much major stuff has hit the news). Somewhere along the line, I realised that essays were more my kind of thing, so I’ve (mainly) stuck to them since; I enjoy the occasional foray into something else, but I find that I can’t produce as much regular stuff this way as otherwise. In any case, the essays have been good for me; I can type, research and get work done so much faster now, and it has paid dividends to my work rate and analytical ability in other fields. I have also found that in my efforts to add evidence to my comments, I end up doing a surprising amount of research that turns an exercise in writing down what I know into one of increasing the amount I know, learning all sorts of new and random stuff to pack into my brain. I have also violated my own rules about giving my Views on a couple of occasions (although I would hope that I haven’t been too obnoxious about it when I have), but broadly speaking the role of my blog has stayed true to the goals stated in my very first post: to be a place free from rants, to be somewhere to have a bit of a laugh and to be somewhere to rescue unwary travellers dredging the backwaters of the internet who might like what they’ve stumbled upon. But, really, this little blog is like a diary for me; a place that I don’t publicise on my Facebook feed, that I link to only rarely, and that I keep going because I find it comforting. It’s a place where there’s nobody to judge me, a place to house my mind and extend my memory. It’s stressful organising my posting time and coming up with ideas, but whilst blogging, the rest of the world can wait for a bit.
It’s a calming place, a nice place, and over the last year it has changed me.

A year is a long time.

An Opera Possessed

My last post left the story of JRR Tolkien immediately after the writing of his first bestseller: the rather charming, lighthearted, almost fairy story of a tale that was The Hobbit. This was a major success, and not just among the ‘children aged between 6 and 12’ demographic identified by young Rayner Unwin; adults lapped up Tolkien’s work too, and his publishers Allen & Unwin were positively rubbing their hands in glee. Naturally, they requested a sequel, a request to which Tolkien’s attitude appears to have been along the lines of ‘challenge accepted’.

Even for someone holding down the rigours of another job, and even accounting for the phenomenal length of his finished product, the writing of a book typically takes a professional writer a few months (Dame Barbara Cartland once released 25 books in the space of a year, but that’s another story), and perhaps a year or two for an amateur like Tolkien. He started writing the book in December 1937, and it was finally published 18 years later, in 1955.

This was partly a reflection of the difficulties Tolkien had in publishing his work (more on that later), but it also reflects the measured, meticulous and very serious approach Tolkien took to his writing. At least three times he started his story again from scratch, each time going in a completely different direction with an entirely different plot. His first effort, for instance, was to chronicle another adventure of his protagonist Bilbo from The Hobbit, making it a direct sequel in both a literal and spiritual sense. However, he then remembered the ring Bilbo found beneath the mountains, won (or stolen, depending on your point of view) from the creature Gollum, and the strange power it held; not just invisibility, Bilbo’s main use for it, but the hypnotic effect it had on Gollum (he even subsequently rewrote that scene for The Hobbit’s second edition to emphasise that effect). He decided that the strange power of the ring was a more natural direction to follow, and so he wrote about that instead.

Progress was slow. Tolkien went months at a time without working on the book, making only occasional, sporadic yet highly focused bouts of progress. Huge amounts were cross-referenced with or borrowed from his earlier writings concerning the mythology, history & background of Middle Earth, Tolkien constantly trying to make his mythic world feel and, in a sense, be as real as possible; but it was mainly thanks to the influence of his son Christopher, to whom Tolkien would send chapters whilst Christopher was away serving in the Second World War in his father’s native South Africa, that the book ever got finished at all. When it eventually was, Tolkien had been working on the story of Bilbo’s heir Frodo and his quest to destroy the Ring of Power for over 12 years. His final work was over 1000 pages long, spread across six ‘books’, as well as being laden with appendices to explain & offer background information, and he called it The Lord of The Rings (in reference to his overarching antagonist, the Dark Lord Sauron).

A similar story had, incidentally, been attempted once before; Der Ring des Nibelungen is an opera (well, four operas) written by German composer Richard Wagner during the 19th century, traditionally performed over the course of four consecutive nights (yeah, you have to be pretty committed to sit through all of that) and also known as ‘The Ring Cycle’- it’s where ‘Ride of The Valkyries’ comes from. The opera follows the story of a ring, made from the traditionally evil Rhinegold (gold panned from the Rhine river), and the trail of death, chaos and destruction it leaves in its wake between its forging & destruction. Many commentators have pointed out the close similarities between the two, and as a keen follower of Germanic mythology Tolkien certainly knew the story, but he rubbished any suggestion that he had borrowed from it, saying “Both rings were round, and there the resemblance ceases”. You can probably work out my approximate personal opinion from the title of this post, although I wouldn’t read too much into it.

Even once his epic was finished, the problems weren’t over. Tolkien quarrelled with Allen & Unwin over his desire to release LOTR in one volume, along with his still-incomplete Silmarillion (that he wasn’t allowed to may explain all the appendices). He then turned to Collins, but they claimed his book was in urgent need of an editor and a licence to cut (my words, not theirs, I should add). Many other people have voiced this complaint since, but Tolkien refused and demanded that Collins publish by 1952. This they failed to do, so Tolkien wrote back to Allen & Unwin and eventually agreed to publish his book in three parts: The Fellowship of The Ring, The Two Towers, and The Return of The King (a title Tolkien, incidentally, detested because it gives away how the book ends).

Still, the book was out now, and the critics… weren’t that enthusiastic. Well, some of them were, certainly, but the book has always had its detractors in the literary world, and that was most certainly the case upon its release. The New York Times criticised Tolkien’s academic approach, saying he had “formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself”, whilst others claimed that it, and its characters in particular, lacked depth. Even Hugo Dyson, one of Tolkien’s close friends and a member of his own literary group, spent public readings of the book lying on a sofa shouting complaints along the lines of “Oh God, not another elf!”. Unlike The Hobbit, which had been in many ways a light-hearted children’s story, The Lord of The Rings was darker & more grown up, dealing with themes of death, power and evil and written in a far more adult style; this arguably exposed it to more serious critics and a harsher gaze than its predecessor, putting some readers off (a problem that wasn’t helped by the sheer size of the thing).

However, I personally am part of the other crowd, those who have voiced their opinions in nearly 500 five-star reviews on Amazon (although one should never read too much into such figures) and who agree with the likes of CS Lewis, The Sunday Telegraph and The Sunday Times of the time that “Here is a book that will break your heart”, that it is “among the greatest works of imaginative fiction of the twentieth century” and that “the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them”. These are the people who have shown the truth in the review of the New York Herald Tribune: that Tolkien’s masterpiece was and is “destined to outlast our time”.

But… what exactly is it that makes Tolkien’s epic so special, such a fixture; why, even years after its publication as the first genuinely great work of fantasy, is it still widely regarded as the finest work the genre has ever produced? I could probably write an entire book just trying to answer that question (and several people probably have), but to me it is because Tolkien understood, absolutely perfectly and fundamentally, exactly what he was trying to write. Many modern fantasy novels try to be uber-fantastical, or try to base themselves around an idea or a concept, in some way trying to find their own level of reality on which their world can exist, and they often find themselves in a sort of awkward middle ground; but Tolkien never suffered that problem, because he knew that, quite simply, he was writing a myth, and he knew exactly how that was done. Terry Pratchett may have mastered comedic fantasy, and George RR Martin may be the king of political fantasy, but only JRR Tolkien has, in recent times, been able to harness the awesome power of the first source of story: the legend, told around the campfire, of the hero and the villain, of the character defined by their virtues over their flaws, of the purest, rawest adventure in the pursuit of saving what is good and true in this world. These are the stories written to outlast the generations, and Tolkien’s mastery of them is, to me, the secret to his masterpiece.

Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest on the planet, worth over 600 billion dollars in sales every year and acting as the force behind the cutting edge of science that continues to push medicine onwards as a field; and while we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. In fact, the pharmaceutical industry is in quite an unusual position in this regard: it forms the only part of the medical public service, and indeed of any major public service, that is privatised the world over.

The reason for this is quite simply one of practicality: the startup capital required to develop even one new drug, let alone to run a public service doing this R&D, runs to hundreds of millions of dollars, something no government would be willing to set aside for so little immediate gain. All the modern companies in the ‘big pharma’ bracket were formed many decades ago on the back of a surprise cheap discovery or suchlike, and are now so big that they are the only ones capable of fronting such a large initial investment. There are a few organisations (the National Institutes of Health, the Royal Society, universities) that conduct such research outside the private sector, but they are small in number and are also very old institutions.

Many people, in a slightly different field, have voiced the opinion that people whose primary concern is profit are the last ones we should be putting in charge of our healthcare and wellbeing (although I’m not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. However, that is not to say that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the ‘minor’ drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of ‘solving’ life-threatening illnesses, a potentially futile task anyway, this drug would never have been developed and I would be even more hateful to my fragile digestive system. In the western world, the profit motive makes a lot of sense when trying to make life just that bit more comfortable. Oh, and these companies also make the drugs that, y’know, save your life every time you’re in hospital.

Now, normally at this point in any ‘balanced argument/opinion piece’ on this blog, I try to come up with another point to keep each side of the argument at a roughly equal 500 words. This time, however, I’m going to break that rule and jump straight into the reverse argument. Why? Because I can genuinely think of no more good things to say about big pharma.

If I may just digress a little: in the UK & USA (I think, anyway) a patent for a drug or medicine lasts for 10 years, on the basis that these little capsules can be very valuable things and it wouldn’t do to let anyone hang onto the sole rights to make them for ages. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to avoid having to import from Afghanistan or wherever), so these costs are kept relatively low by market forces. It also means that during their 10-year grace period, drugs companies will do absolutely everything they can to extract cash from their product; when the antihistamine drug loratadine (another drug I use relatively regularly, it being used to combat hay fever and other allergy symptoms) was passing through the last two years of its patent, its market price was quadrupled by the company making it; they had been trying to get the market hooked on using it before jacking up the price in order to wring out as much cash as possible. This behaviour is not untypical for a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the snuffles.

So far, so much normal corporate behaviour. But we must now turn to some practices of the big pharma industry that would make even Rupert Murdoch think twice. Drugs companies, for example, have a reputation for setting up price-fixing networks, many of which have been worth several hundred million dollars. One, involving what were technically food-supplement businesses, subsidiaries of the pharmaceutical industry, went on to set the world record for the largest fines levied in criminal history, a record that persists to this day. All this despite the fact that the cost of physically producing the drugs themselves rarely exceeds a couple of pence per capsule, hundreds of times less than their asking price.

“Oh, but they need to make heavy profits because of the cost of the R&D behind all their new drugs.” A good point, well made, and it would even be a valid one if the numbers behind it stacked up. In the USA, the National Institutes of Health last year had a total budget of $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. At first glance this might seem like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in 2006, and accounting for inflation (and the fact that pharmaceutical profits tend to stay high despite the current economic situation affecting other industries) we can approximate that only around 10% of company turnover is, on average, spent on R&D. Even accounting for manufacturing costs, salaries and such, a very sizeable chunk of that turnover goes into profit, making the pharmaceutical industry the most profitable on the planet.
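That ‘around 10%’ figure is easy to sanity-check from the two numbers quoted above (the $32 billion R&D spend against $289 billion of turnover; the exact ratio will of course shift with inflation and the year chosen):

```python
# Figures quoted in the text: US industry R&D spend vs. turnover.
# Single-year, rough numbers, so treat the result as an estimate only.
rd_spend = 32e9   # collective US pharma R&D spend, dollars
turnover = 289e9  # US drug industry revenue (2006), dollars

rd_share = rd_spend / turnover
print(f"R&D share of turnover: {rd_share:.1%}")  # → R&D share of turnover: 11.1%
```

So roughly one dollar in nine goes on R&D, before inflation is accounted for.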

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.

Other Politicky Stuff

OK, I know I talked about politics last time, and no I don’t want to start another series on this, but I actually found when writing my last post that I got very rapidly sidetracked when I tried to use voter turnout as a way of demonstrating the fact that everyone hates their politicians, and I thought I might dedicate a post to this particular train of thought as well.

You see, across the world, but predominantly in the developed west, where the right to choose our leaders has been around for ages, fewer and fewer people are turning out to vote each time. By way of an example, Ronald Reagan famously won a ‘landslide’ victory when coming to power in 1980, but actually attracted the votes of only 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but proposals to introduce such a system elsewhere have frequently met with opposition and claims that it violates people’s democratic right to abstain (this argument is largely rubbish, but no time for that now).

A lot of reasons have been suggested for this trend, among them a sense of political apathy, laziness, and the idea that our having had the right to choose our leaders for so long means we no longer find it special or worth exercising. For example, the presidential election held a little while ago in Venezuela – a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos – saw a voter turnout of nearly 90% (incumbent president Hugo Chavez winning his fourth term of office with 54% of the vote, in case you were interested), making Reagan look boring by comparison.
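The gap between the two elections is starkest when both are put on the same scale: the share of all eligible voters who actually backed the winner, which is simply turnout multiplied by the winner’s vote share. A quick sketch using the approximate figures quoted above:

```python
# Share of ALL eligible voters who voted for the winner = turnout * vote share.
def winner_eligible_share(turnout: float, vote_share: float) -> float:
    return turnout * vote_share

# Venezuela, using the ~90% turnout and 54% vote share quoted in the text:
chavez = winner_eligible_share(0.90, 0.54)
print(f"Chavez: {chavez:.0%} of eligible voters")  # → Chavez: 49% of eligible voters

# ...versus the 29% of all eligible voters quoted for Reagan's 1980 'landslide'.
```

On that measure, Chavez’s mandate came from roughly half the eligible electorate; Reagan’s ‘landslide’ from well under a third.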

However, another, more interesting (hence why I’m talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, and perhaps one or two others who may be standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you’re trying to avoid the ire of the rabidly aggressive ‘NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS’ brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people and, given the number of different issues they hold views on that mean something to you, the chance of any one of them following your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: that when the US Declaration of Independence was first drafted over two centuries ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost whatever it likes (I said ALMOST) and still be confident of a significant proportion of the country voting for it; not only that, but its unpopular decisions can often be ‘balanced out’ by more popular, mass-appeal ones, rather than its every decision being the direct will of the people.

One solution would be a system more akin to ancient Greek democracy, where every issue is put to a referendum which the government must obey. However, this presents as many problems as it solves; referendums are expensive and time-consuming to organise, and if they became commonplace they could further worsen the existing problem of voter apathy. Only the most actively political would vote in every one, returning the real power to the hands of a relative few who, unlike previously, haven’t been voted in. Perhaps the most pressing issue with this solution, though, is that it renders the role of MPs, representatives, senators and even Prime Ministers & Presidents rather pointless. What is the point of our society choosing those who really care about the good of their country, and who have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce their role to that of administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we’ve got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if it’s simply because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily what’s best for the country as a whole. Take Greece at the moment: massive protests are (or at least were; I know everyone’s still pissed off about it) underway over the austerity measures imposed by the government, because of the crippling economic suffering that is sure to result. However, the politicians know that such measures are necessary and are refusing to budge on the issue- desperate times call for difficult decisions (OK, I know there were elections, almost entirely centred on this decision, that sided with austerity, but shush- you’re ruining my argument). To pick another example, President Obama (and several Democratic candidates before him) met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, this is something he believes in very strongly, and he has finally managed to get it through Congress; if he wins the election later this year, we’ll see how well he executes it.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protesting in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don’t know- that whole ‘internet petitions get debated in parliament’ thing the British government recently imported from Switzerland is a nice idea, but, just like more traditional forms of protest, it gives those in power no genuine imperative to change anything. If I had a solution, I’d probably be running for government myself (which is one option that definitely works- just don’t all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again; I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world, if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms), from SOPA to ACTA, has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they have also provoked a lot of debate, ranging in intelligence from the genuinely thoughtful to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try to understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of Stone Age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chances of survival and of the next meal coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, the design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and so on, and through your collective efforts you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on larger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for the country does not necessarily remain good for the individual. As a tribe gets larger, the individual becomes more independent of the actions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press in the mid-15th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printer’s fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social change of the next few hundred years: the establishment of a less feudal and more merit-based social system, with proper professions springing up as opposed to general peasantry, meant that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try to make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in, simply by running off copies of your book on their machine and profiting from your labour. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profit you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material that could not only benefit the nation’s happiness but also potentially have great material use are a valuable resource, and that it should be doing what it can to promote the production of that material, if only to save having to put in the large investment of time and resources itself. So it makes sense to encourage the production of this material by ensuring that people have a financial incentive to do it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…