MOAR codey stuff

I turned my last post on cryptography into a two-parter because there was a fair ton of stuff I wasn’t able to cover in those particular 1200 words that I consider interesting/relevant, so here comes the rest of it. I’m not going to bother with an intro this time though, so go and read my last post (if you haven’t already) before this one to make sure we’re all on the same level here.

We all good? OK, let’s talk about public keys.

When encoding or decoding a cipher, you perform a slightly different process for each, but the two processes are mathematically related to one another. For example, when encrypting a Caesar cipher you ‘add three’ to the ‘value’ of each letter, and when decrypting you subtract three; the one process is the inverse of the other. These different types of key, or parts of the overall key, are known as the encryption and decryption keys. Since the two are mathematically related, knowledge of one allows an enemy cryptanalyst to discover the other with relative ease in most cases; thus, both keys have to be kept very secret to avoid exposure, making the distribution of keys a dangerous business.
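The ‘inverse processes’ idea is easy to see in code. Here’s a minimal Python sketch of the Caesar cipher (my own illustration, not any standard library): encrypting with a shift of three and decrypting with a shift of minus three are the same function run in opposite directions.

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

# Encrypting 'adds three' to each letter; decrypting applies the inverse shift.
ciphertext = caesar("attack at dawn", 3)   # 'dwwdfn dw gdzq'
plaintext = caesar(ciphertext, -3)         # back to 'attack at dawn'
```

Knowing the encryption key (+3) immediately gives you the decryption key (-3), which is exactly the weakness described above.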

However, in the RSA algorithm talked about at the end of the last post, the tools for encryption (the massive number M and the power P it is raised to) are of no use to a foe if he does not have the two prime factors of M needed to decrypt the message (I still don’t get how that works mathematically) with any degree of ease. Thus the encryption key needed to send messages to a person secretly can be distributed freely and be known to anyone who wants it, without fear of these secret messages being decoded; incredibly useful for spy networks, since it allows multiple operatives to use the same key to send messages to someone without fear that the capture of one agent could compromise everyone else’s security. In this kind of cryptography, the key distributed publicly, which anyone can access, is known as the ‘public key’, whilst the secret key used for decryption is called the ‘private key’.
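To make the public/private split concrete, here’s a toy Python sketch of the RSA idea using the tiny textbook primes 61 and 53 (real keys use primes hundreds of digits long, and this leaves out padding and everything else a real implementation needs). The point is that encrypting needs only M and P, while working out the decryption power requires the secret prime factors.

```python
# Toy RSA sketch with textbook-sized primes; NOT secure, purely illustrative.
p, q = 61, 53                  # the two secret prime factors
M = p * q                      # the 'massive number' M = 3233, published openly
phi = (p - 1) * (q - 1)        # computing this needs the secret factors p and q
P = 17                         # the public power P (chosen coprime to phi)
D = pow(P, -1, phi)            # decryption power (modular inverse; Python 3.8+)

message = 42
ciphertext = pow(message, P, M)    # anyone can encrypt using only (M, P)
recovered = pow(ciphertext, D, M)  # only the holder of D can get 42 back
```

A foe who intercepts M = 3233 still has to factor it into 61 × 53 to compute D; with a 600-digit M, that factoring step is what makes the scheme hold up.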

The RSA algorithm is not the only method employed in public-key cryptography, but any method it does employ relies on a similar mathematical relationship that is easy to compute one way and infeasible to reverse. Public and private keys have other uses too beyond secure encryption; when encrypting a message using somebody else’s public key, it is possible to add a digital ‘signature’ using your own private key. The recipient of your message, upon decrypting it with their private key, can then use your public key and a special algorithm to verify your signature, confirming that the message came from you (or at least from someone in possession of your private key- I still don’t know how the maths works here). You can also combine your private key with another person’s public key to produce a ‘shared secret’, but here my concept of what the hell is going on takes another large step back, so I think I’ll leave the subject there.
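Signing is essentially the same maths run the other way round. The Python sketch below is a toy (it reuses textbook-sized RSA numbers and skips the padding schemes real signatures need): it ‘encrypts’ a hash of the message with the private key, so that anyone holding the public key can check the signature against the message.

```python
import hashlib

# Toy RSA key pair (same textbook primes as before; purely illustrative).
p, q = 61, 53
M, P = p * q, 17
D = pow(P, -1, (p - 1) * (q - 1))

def sign(message, priv, modulus):
    # Hash the message, then raise the digest to the PRIVATE power.
    digest = int.from_bytes(hashlib.sha256(message).digest(), 'big') % modulus
    return pow(digest, priv, modulus)

def verify(message, signature, pub, modulus):
    # Anyone with the PUBLIC key can undo the signing step and compare hashes.
    digest = int.from_bytes(hashlib.sha256(message).digest(), 'big') % modulus
    return pow(signature, pub, modulus) == digest

sig = sign(b"it was me", D, M)
verify(b"it was me", sig, P, M)   # True: only the private key could make sig
```

If the message is tampered with, its hash changes and the verification almost certainly fails, which is what ties the signature to both the sender and the exact message.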

Despite all its inherent security, there is one risk still associated with public-key cryptography and techniques similar to the RSA algorithm. The weak link lies in the key itself; transferring a key is (mostly) only ever necessary when existing lines of communication are insecure, meaning that a key can often be intercepted by a sharp enemy cryptanalyst. If he is smart, he’ll then send the key straight on to its intended recipient, who is likely to carry on using it, oblivious to the fact that the other side can now intercept and translate every message sent with it. Therefore, it is advantageous to remove this weak link by ensuring the recipient can tell if the key has been intercepted; and here we enter the weird and wonderful world of quantum cryptography.

The name is actually a misnomer; quantum theory and effects cannot be used to encrypt secure messages, and the term refers to two ideas that are merely related to cryptography. One is the theoretical possibility that future quantum computers may be able to crack the RSA problem and throw the world of cryptanalysis wide open again, whilst the other, far more practical, side of things refers to a method of confirming that a message has not been intercepted (known as quantum key distribution, or QKD). The theory behind it is almost as hard to get your head around as the maths of the RSA algorithm, but I’ll try to explain the basics. The principle concerns Heisenberg’s uncertainty principle: the idea that attempting to observe a quantum effect or system will change it in some way (just go with it). The two parties sending a message to one another communicate in two ways: one via a ‘quantum link’ with which to send the secret message, and another via an open channel (e.g. the internet). The first party (who convention dictates is called Alice) sends her message via the quantum channel, polarising each bit of quantum data in one of two types of direction (just go with it). The receiving party (traditionally called Bob) receives this polarised quantum data, but since he doesn’t know which type of polarisation Alice has used, he just picks one at random each time (just go with it). About half of the time, therefore, he’ll get the right answer. Alice then tells him over the open channel which polarisation she used for each bit (usually, for reasons of speed, this is all done automatically via computer), and Bob tells her which type of polarisation he checked for each bit. They both discard the bits where they did it a different way around, and keep the ones where they did it the same way as a shared key- thus is the key exchanged.

However, if somebody (Eve, conventionally) has been eavesdropping on this little conversation and has measured the polarisation of the quantum bits, then the polarisation of those bits will have been changed by this process (just go with it). This introduces errors into Bob’s readings, some of which can just be put down to the mechanics of the process; if, however, more than p bits show an error (p being picked to be a suitable number- I couldn’t give you an example), then the line and key are presumed to be insecure and the whole process is started again. Simple, isn’t it?
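The whole Alice/Bob/Eve dance can be simulated classically. The Python sketch below is my own simplified model, not real quantum mechanics: a measurement in the wrong basis is modelled as randomising the bit. Without Eve, the sifted keys match exactly; with Eve listening in, roughly a quarter of the kept bits disagree, which is exactly the error spike the protocol looks for.

```python
import random

def bb84(n_bits, eve_listening=False, seed=0):
    """Crude classical simulation of quantum key distribution."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice('+x') for _ in range(n_bits)]  # two polarisation types

    sent = alice_bits[:]
    if eve_listening:
        # Eve measures each bit in a random basis; a wrong guess disturbs the bit.
        for i in range(n_bits):
            if rng.choice('+x') != alice_bases[i]:
                sent[i] = rng.randint(0, 1)

    bob_bases = [rng.choice('+x') for _ in range(n_bits)]
    # Matching basis reads the bit correctly; a mismatch gives a random result.
    bob_bits = [sent[i] if bob_bases[i] == alice_bases[i] else rng.randint(0, 1)
                for i in range(n_bits)]

    # Over the open channel: keep only positions where the bases matched.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_bits[i] for i in keep]
    errors = sum(a != b for a, b in zip(alice_key, bob_key))
    return alice_key, bob_key, errors

_, _, quiet = bb84(1000)                      # no Eve: sifted keys agree
_, _, noisy = bb84(1000, eve_listening=True)  # Eve introduces ~25% errors
```

Comparing `quiet` (zero) with `noisy` (well above zero) is the error check described above: too many mismatches, and the key is thrown away.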

Despite all the bother and complexity of these processes, however, it is still acknowledged that perhaps the best way to conceal a message’s content is simply to hide the thing very, very well. The ancient Greeks would occasionally shave a slave’s head, tattoo a message onto it, wait for the hair to grow back and then send him to the recipient, who would shave his head again to read it; but a far more advanced version was employed during WW2 as a direct link between Franklin D. Roosevelt and Winston Churchill. Both had a set of identical tracks of white noise (i.e. random sound), which one would ‘add’ to a recorded audio message and their counterpart would ‘subtract’ when it got to the other end. The random nature of white noise made the link impossible to break (well, at the time; I don’t know what a computer might be able to do with it) without access to the original track. The code was used throughout the war, and was never broken.
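The add-then-subtract trick is essentially a one-time pad, and a digital version is tiny. In this Python sketch (my own illustration; the wartime system worked on analogue audio, not bytes) the shared ‘white noise track’ is a random byte string, and XOR plays the role of both adding and subtracting it.

```python
import os

def mask(data, noise):
    # 'Adding' the shared random track: XOR each byte with the noise.
    # XOR is its own inverse, so the same call also 'subtracts' it.
    return bytes(a ^ n for a, n in zip(data, noise))

message = b"meet at midnight"
noise = os.urandom(len(message))   # both ends hold an identical random track

scrambled = mask(message, noise)   # pure-looking static without the track
recovered = mask(scrambled, noise) # applying the track again restores it
```

Without the exact noise track, `scrambled` carries no recoverable structure at all, which is why the scheme is unbreakable as long as the track stays secret and is never reused.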


The Myth of Popularity

WARNING: Everything I say forthwith is purely speculative based on a rough approximation of a presented view of how a part of our world works, plus some vaguely related stuff I happen to know. It is very likely to differ from your own personal view of things, so please don’t get angry with me if it does.

Bad TV and cinema are a great source of inspiration; not because there’s much in them that’s interesting, but because there’s just so much of the stuff that even without watching any it is possible to pick up enough information to diagnose trends, which are generally interesting to analyse. In this case, I refer to the picture of American schools so often portrayed by iteration after iteration of generic teenage romance/romcom/‘drama’, and more specifically the people in it.

One of the classic plot lines of these types of things involves the ‘hopelessly lonely/unpopular nerd who has a crush on Miss Popular de Cheerleader and must prove himself by [insert totally ridiculous idea]’. Needless to say these plot lines are more unintentionally hilarious and excruciating than anything else, but they work because they play on the one trope that so many of us are familiar with: that of the overbearing, idiotic, horrible people from the ‘popular’ social circle. Even if we were not raised within a sitcom, it’s a situation repeated in thousands of schools across the world- the popular kids are the arseholes at the top with inexplicable access to all the gadgets and girls, and the more normal, nice people sit lower down the social ladder.

The image exists in our consciousness long after leaving school for a whole host of reasons; partly because major personal events during our formative years tend to have a greater impact on our psyche than those occurring later in life, but also because it is often our first major interaction with the harsh unfairness life is capable of throwing at us. The whole situation seems totally unfair and unjust; why should all these horrible people be the popular ones, and get all the social benefits associated with that? Why not me, a basically nice, humble person without a Ralph Lauren jacket or an iPad 3, but with a genuine personality? Why should they have all the luck?

However, upon analysing the issue, this object of hate begins to break down; not because the ‘popular kids’ are any less hateful, but because they are not genuinely popular. If we define popularity as a scale representing how many people like you and how much (because what the hell else is it?), then it becomes a lot easier to approach it from a numerical, mathematical perspective. Those at the perceived top end of the social spectrum generally form themselves into a clique of superiority, where they all like one another (presumably- I’ve never been in that kind of group to find out), but their arrogance means that they receive a certain amount of dislike, and even some downright resentment, from the rest of the immediate social world. By contrast, members of other social groups (nerds, academics [often not the same people], those sportsmen not in the ‘popular’ sphere, and the myriad groups of undefinable ‘normies’ who just splinter off into their own little cliques) tend to be liked by members of their chosen group and treated with neutrality or minor positive or negative feeling by everyone else, leaving them with an overall ‘popularity score’, from an approximated mathematical point of view, roughly equal to or even greater than that of the ‘popular’ kids. Thus, the image of popularity is really something of a myth, as these people are not, technically speaking, any more popular than anyone else.

So, then, how has this image come to present itself as one of popularity, of being the top of the social spectrum? Why are these guys on top, seemingly above group after group of normal, friendly people with a roughly level playing field when it comes to social standing?

If you were to ask George Orwell this question, he would present you with a very compelling argument concerning the tendency of a social structure to form a ‘high’ class of people (shortly after asking you how you managed to communicate with him beyond the grave). He and other social commentators have frequently pointed out that a social system where all are genuinely treated equally is unstable without some ‘higher class’ of people to look up to- even if only in hatred. It is humanity’s natural tendency to try to better itself and fight its way to the top of the pile, so if the ‘high’ group disappears temporarily it will quickly be replaced; hence the disparity between rich and poor even in a country such as the USA, founded on the principle that ‘all men are created free and equal’. This principle applies to social situations too; if the ‘popular’ kids were to fall from grace, some other group would likely rise to fill the power vacuum at the top of the social spectrum. And, as we all know, power and influence are powerful corrupting forces, so this position would be likely to transform the new ‘popular’ group into arrogant b*stards too, removing the niceness they had when they were just normal guys. This effect is also in evidence in the fact that many of the previously hateful people at the top of the spectrum become very normal and friendly when spoken to one-on-one, outside of their social group (from my experience anyway; this does not apply to all people in such groups).

However, another explanation is perhaps more believable; that arrogance is a cause rather than a symptom. By acting like they are better than the rest of the world, the rest of the world subconsciously get it into their heads that, much though they are hated, they are the top of the social ladder purely because they said so. And perhaps this idea is more comforting, because it takes us back to the idea we started with; that nobody is more actually popular than anyone else, and that it doesn’t really matter in the grand scheme of things. Regardless of where your group ranks on the social scale, if it’s yours and you get along with the people in it, then it doesn’t really matter about everyone else or what they think, so long as you can get on, be happy, and enjoy yourself.

Footnote: I get most of these ideas from what is painted by the media as being the norm in American schools and from what friends have told me, since I’ve been lucky enough that the social hierarchies I encountered during my school years basically left one another alone. Judging by the horror stories other people tell me, I presume that was just my school. Plus, even if it’s total horseshit, it’s enough of a trope that I can write a post about it.

Why the chubs?

My last post dealt with the thorny issue of obesity: both its increasing presence in our everyday lives, and what for me is the underlying reason behind the stats that back up media scare stories concerning ‘the obesity epidemic’- the rise in size of the ‘average’ person over the last few decades. The precise causes of this trend can be put down to a whole host of societal factors within our modern age, but that story is boring as hell and has been repeated countless times by commentators far more adept in this field than me. Instead, today I wish to present the case for modern-day obesity as a problem concerning the fundamental biology of a human being.

We, and our dim and distant ancestors of the scaly/furry variety, have spent the last few million years living wild: hunting, fighting and generally getting by much like any other product of evolution. Thus, we can learn a lot about our own inbuilt biology and instincts by studying the behaviour of animals alive today, and when we do so, several interesting eating habits become apparent. As anyone who has tried it as a child can attest (and I speak from personal experience), grass is not good stuff to eat. It’s tough, it takes a lot of chewing and processing (many herbivores have multiple stomachs to make sure they squeeze the maximum nutritional value out of their food), and there really isn’t much in it to power a fully-functional being. As such, grazers on grass and other tough plant matter (such as leaves) will spend most of their lives doing nothing but guzzle the stuff, trying to get as much as possible through their systems. Other animals favour food with a higher nutritional content, such as fruits, tubers or, in many cases, meat, but these frequently present issues. Fruits are highly seasonal and rarely available in a large enough volume to support a large population, as well as being quite hard to get a lot of down in one go; plants try to ‘design’ fruits so that each visitor takes only a few at a time, so as to spread their seeds far and wide, and as such there are few animals that can sustain themselves on such a diet. Other foods such as tubers or nuts are hard to get at, needing to be dug up or broken open in highly energy-consuming activities, whilst meat has the annoying habit of running away or fighting back whenever you try to get at it. As anyone who watches nature documentaries will attest, most large predators will only eat once every few days (admittedly rather heavily).

The unifying factor in all of this is that food in the wild is highly energy- and time-consuming to get hold of and consume, since every source of it guards its prize jealously. Therefore, any animal that wants to survive in this tough world must be near-constantly in pursuit of food simply to fulfil all of its life functions, and this is characterised by being perpetually hungry. Hunger is a body’s way of telling us that we should get more food, and in the wild this constant desire for more is kept in check by the difficulty of getting hold of it. Similarly, animal bodies try to conserve themselves by being lazy; if something isn’t necessary, then there’s no point wasting valuable energy going after it (since this will mean spending more time going after food to replace the lost energy).

However, in recent history (a spectacularly short period of time from evolution’s point of view), one particular species called Homo sapiens came up with this great idea called civilisation, which basically entailed the pooling and sharing of skills and resources in order to best benefit everyone as a whole. As an evolutionary success story, this is right up there with developing multicellular body structures in terms of being awesome, and it has enabled us humans to live far more comfortable lives than our ancestors did, with correspondingly far greater access to food. This has proved particularly true over the last two centuries, as technological advances in a more democratic society have improved the everyman’s access to food and comfortable living to a truly astounding degree. Unfortunately (from the point of view of our waistlines) the instincts of our bodies haven’t quite caught up with the idea that when we want or need food, we can just get food, without all that inconvenient running around after it. Not only that, but a lack of pack hierarchy combined with this increased availability means that we can stock up on food until we have eaten our absolute fill if we so wish; the difference between ‘satiated’ and ‘stuffed’ can work out at well over 1,000 calories per meal, and over a long period it takes only a little more than we should be having every day to start packing on the pounds. Combine that with our natural predilection for laziness, meaning that we don’t naturally think of going out for exercise as fun purely for its own sake, and the fact that we no longer burn calories chasing our food, or in the muscles we would build up from said chasing, and we find ourselves consuming a lot more calories than we really should be.

Not only that, but during this time we have also got into the habit of spending a lot of time worrying over the taste and texture of our food. This means that, unlike our ancestors, who were just fine with simply jumping on a squirrel and devouring the thing, we have to go through the whole rigmarole of getting stuff out of the fridge and spending two hours slaving away in a kitchen attempting to cook something vaguely resembling tasty. This wait is not something our bodies enjoy very much, meaning we often turn to ‘quick fixes’ when in need of food: stuff like bread, pasta or ready meals. Whilst we all know how much crap goes into ready meals (which should, as a rule, never be bought by anyone who cares even slightly about their health; the salt content of those things is insane) and other such ‘quick fixes’, fewer people are aware of the impact a high intake of grain-based staples can have on our bodies. Stuff like bread and rice only started being eaten by humans a few thousand years ago, as we discovered the benefits of farming and cooking, and whilst they are undoubtedly a good food source (and very, very difficult to cut from one’s diet whilst remaining healthy), our bodies have simply not had enough time, evolutionarily speaking, to get used to them. This means they have a tendency not to make us feel as full as their calorie content would suggest, meaning that we eat more than our bodies in fact need (if you want to feel full whilst not taking in so many calories, protein is the way to go; meat, fish and dairy are great for this).

This is all rather academic, but what does it mean for you if you want to lose a bit of weight? I am no expert on this, but then again neither are most of the people acting as self-proclaimed nutritionists in the general media, and anyway, I don’t have any better ideas for posts. So, look out for my next post for my, admittedly basic, advice for anyone trying to make themselves that little bit healthier, especially if you’re trying to work off a few of the pounds built up over this festive season.

Goodwill to all men

NOTE: This post was meant to go up on Christmas Eve, but WordPress clearly broke on me so apparently you get it now instead- sorry. Ah well, might as well put it up anyway…

 

Ah, Christmas; such an interesting time of year. The season of plenty, the season of spending too much, the season of eating too much, the season of decisions we later regret and those moments we always remember. The season where some families will go without food to keep the magic alive for their children, the season where some new feuds are born but old ones are set aside, and the season where goodwill to all men (and women) becomes a key focus of our attention.

When I was young, I always had a problem with this. I had similar issues with Mother’s Day, and Father’s Day even more so (I don’t know how I came to know that it was an entirely commercial invention, but there you go), and whilst Christmas was awesome enough that I wasn’t going to ruin it with seasonal complaints, one thing always bugged me about ‘the season of goodwill’. Namely, why can’t we just be nice to each other all the time, rather than just for a few weeks of the year?

A cynic might say we get all the goodwill out of our systems over Christmas in preparation for being miserable bastards for the rest of the year, but cynicism is unhealthy and in any case, I try to keep it out of my bloggy adventures. Plus, we are capable of doing nice stuff for the rest of the year, even if we don’t do as much as some might think we should, and humans never cease to be awesome beings when they put their minds to it. No, it’s not that we give up being nice for the rest of the year, but more that we are quite clearly capable of being nicer than we usually are, just not, seemingly, all the time.

Goodwill to our fellow man is not the only seasonal occurrence that seems more prevalent over the festive period for no obvious reason; many of our Christmas traditions, both old and modern, follow a similar thread. Take turkey, for instance; whilst it’s never been Christmas fare in my household for various reasons, I know enough people for whom a turkey dinner plus trimmings is the festive standard to know that these same people never have the bird at any other time of the year (I know you Americans have it at Thanksgiving, but I don’t know enough about how all that works to comment). I saw a comment online a couple of weeks ago about eggnog (another seemingly American-specific thing), mentioning how this apparently awesome stuff (never tried it myself, so again I can’t comment) is never available at any other time of the year. A response soon followed courtesy of a shop worker, who said there was always a supply of it tucked away somewhere throughout the year in the shop where he worked, but that nobody ever bought it outside of December.

We should remember that there is something of a fine line to tread when we discuss these ideas; there are a lot of things that only occur at Christmas time (the giving of gifts, decorations, the tree and so on) that don’t need any such explanation because they are solely associated with the season. If you were to put tinsel up in June, you might be thought a bit odd for your apparent celebration of Christmas in midsummer; tinsel is not associated with anything other than festive celebration, so in any other context it’s just weird. This is particularly true given that tinsel and other such decorations are just that: decorations, with no purpose outside of festive celebration. Similarly, whilst gift-giving is appreciated throughout the rest of the year (although it’s best done in moderation), going to all the trouble of thinking, deliberating, wrapping secretively and making a big fanfare over it is only associated with special occasions (Christmases or birthdays). Stuff like turkey and eggnog can probably be classified as somewhere in the middle: very much associated with the Christmas period, but still separate from it and capable of being consumed at other times of the year.

The concept of goodwill and being nice to people is a little different: not just something that is possible throughout the rest of the year, but something actively encouraged as a commendable trait, so the excuse of ‘it’s just a feature of the season’ doesn’t really cut it in this context. Some might say that quite a lot of the happiness exuded at Christmas time is somewhat forced, or at the very least tiring, as anyone who’s seen the gaunt face behind the smiling facade of a Christmas Day mum can tell. Therefore, it could be argued that Christmas good cheer is simply too much work to keep up for the rest of the year, and that if we were forced to keep our smiley faces on we would either snap or collapse in exhaustion before long. Others might say that keeping good cheer confined to one portion of the year makes it that much more fun and special when it comes round, but to me the reason is slightly more… mathematical.

Human beings are competitive, ambitious creatures, perpetually seeking to succeed and triumph over the odds. Invariably, this means triumphing over other people too, and that is not a situation that lends itself to kindness; competition and the drive to succeed may be key features behind human and personal success, but they do not encourage us to be nice to one another. Not infrequently, such competition requires us to deliberately take the not-nice option, as dicking on our competition often provides the best way to compete with them; or at the very least, we sometimes need to be harsh bastards to make sure stuff gets done at all. This situation is closely related to what game theorists call the prisoner’s dilemma, which I should get round to doing a post on one of these days.

However, at Christmas time achievement becomes of secondary importance to enjoyment: to spending time with friends and family, and to just enjoying the company of your nearest and dearest. Comparatively little actually gets done over the Christmas period (at least from an economist’s point of view), and so the advantage that mild dickishness offers for the rest of the year disappears. Everything in life is reduced to a state where being nice to everyone around us best serves our purpose of making our environment a fun, comfortable place to be. At Christmas time, we have no reason to be nasty, and every reason to be nice; and for that reason alone, Christmas is a wonderful thing. Merry Christmas, everybody.

One Year On

A year is a long time.

On the 16th of December last year, I was on Facebook. Nothing unusual about this (I spent, and indeed to a slightly lesser extent still spend, rather too much time with that little blue f in the top corner of my screen), especially given that it was the run-up to Christmas and I was bored, and neither was the precise content of the bit of Facebook I was looking at: an argument. Such things are common in the weird world of social networking, although they surely shouldn’t be, and this was just another such time. Three or four people were posting long, eloquent, semi-researched and furiously defended messages over some point of ethics, politics or internet piracy, I know not which (it was probably one of those anyway, since that’s what most of them seem to be about among my friends list). Unfortunately, one of those people was me, and I was losing. Well, I say losing; I don’t think anybody could be said to be winning, but I was getting angry and upset all the same, made worse by the realisation that what I was doing was a COMPLETE WASTE OF TIME. I am not in any position whereby my Views are going to have a massive impact on the lives of everyone else, nobody wants to hear what they are, and there was no way in hell that I was going to convince anyone that my opinion was more ‘right’ than their strongly-held convictions- all my fellow arguees and I were achieving was getting very, very angry at one another, actively making us all more miserable. We could pretend that we were debating an important issue, but in reality we were just another group of people screaming at one another via the interwebs.

A little under a week later, the night after the winter solstice (22nd of December, which you should notice was exactly 366 days ago), I was again to be found watching an argument unfold on Facebook. Thankfully this time I was not participating, merely looking on with horror as another group of four or five people made their evening miserable by pretending they could convince others that they were ‘wrong’. The provocativeness of the original post, spouting one set of Views as gospel truth over the web, the self-righteousness of the responses and the steadily increasing vitriol of the resulting argument, all struck me as a terrible waste of some wonderful brains. Those participating I knew to be good people, smart people, capable of using their brains for, if not betterment of the world around them, then perhaps a degree of self-betterment or at the very least something that was not making the world a more unhappy place. The moment was not a happy one.

However, one of the benefits of not competing in such an argument is that I didn’t have to be reminded of it or spend much time watching it unfold, so I turned back to my news feed and began scrolling down. As I did so, I came to another friend, putting up a link to his blog. This was a recent experiment for him, only a few posts old at the time, and he self-publicised it religiously every time a post went up. He has since discontinued his blogging adventures, to my disappointment, but they made fun reading whilst they lasted: short (mostly less than 300 words) and covering a wide range of random topics. He wasn’t afraid to just be himself online, and wasn’t concerned about being definitively right; if he offered an opinion, it was just something he thought, no more and no less, and there was no sense that it was ever combative. Certainly that was never the point of any post he made; each was just something he’d encountered in the real world or online that he felt would be cool and interesting to comment on. His blog’s description called his posts ‘musings’, and that was the right word for them: harmless, fun and nice. They made the internet, and the world in general, in some tiny little way, a nicer place to explore.

So, I read through his post. I smirked a little, smiled and closed the tab, returning once more to Facebook and the other distractions & delights the net had to offer. After about an hour or so, my thoughts once again turned to the argument, and I rashly flicked over to look at how it was progressing. It had got to over 100 comments and, as these things do, was gradually wandering off-topic to a more fundamental, but no less depressing, point of disagreement. I was once again filled with a sense that these people were wasting their lives, but this time my thoughts were both more decisive and introspective. I thought about myself; listless, counting down the last few empty days before Christmas, looking at the occasional video or blog, not doing much with myself. My schedule was relatively free, I had a lot of spare time, but I was wasting it. I thought of all the weird and wonderful thoughts that flew across my brain, all the ideas that would spring and fountain of their own accord, all of the things that I thought were interesting, amazing or just downright wonderful about our little mental, spinning ball of rock and water and its strange, pink, fleshy inhabitants that I never got to share. Worse, I never got to put them down anywhere, so after time all these thoughts would die in some forgotten corner of my brain, and the potential they had to remind me of themselves was lost. Once again, I was struck by a sense of waste, but also of resolve; I could try to remedy this situation. So, I opened up WordPress, I filled out a few boxes, and I had my own little blog. My fingers hovered over the keyboard, before falling to the keys. I began to write a little introduction to myself.

Today, the role of my little corner of the interwebs has changed somewhat. Once, I would post poetry, lists, depressed trains of thought and last year’s ‘round robin letter of Planet Earth’, which I still regard as one of the best concepts I ever put onto the net (although I don’t think I’ll do one this year; not as much major stuff has hit the news). Somewhere along the line, I realised that essays were more my kind of thing, so I’ve (mainly) stuck to them since; I enjoy the occasional foray into something else, but I find that I can’t produce as much regular stuff this way as otherwise. In any case, the essays have been good for me; I can type, research and get work done so much faster now, and it has paid dividends to my work rate and analytical ability in other fields. I have also found that in my efforts to add evidence to my comments, I end up doing a surprising amount of research that turns an exercise in writing down what I know into one of increasing the kind of stuff I know, learning all sorts of new and random stuff to pack into my brain. I have also violated my own rules about giving my Views on a couple of occasions (although I would hope that I haven’t been too obnoxious about it when I have), but broadly speaking the role of my blog has stayed true to the goals stated in my very first post: to be a place free from rants, to be somewhere to have a bit of a laugh, and to be somewhere to rescue unwary travellers dredging the backwaters of the internet who might like what they’ve stumbled upon. But, really, this little blog is like a diary for me; a place that I don’t publicise on my Facebook feed, that I link to only rarely, and that I keep going because I find it comforting. It’s a place where there’s nobody to judge me, a place to house my mind and extend my memory. It’s stressful organising my posting time and coming up with ideas, but whilst blogging, the rest of the world can wait for a bit.
It’s a calming place, a nice place, and over the last year it has changed me.

A year is a long time.

Other Politicky Stuff

OK, I know I talked about politics last time, and no, I don’t want to start another series on this, but I actually found when writing my last post that I got rapidly sidetracked when I tried to use voter turnout to demonstrate that everyone hates their politicians, so I thought I might dedicate a post to this particular train of thought as well.

You see, across the world, but predominantly in the developed west where the right to choose our leaders has been around for ages, fewer and fewer people are turning out each time to vote.  By way of an example, Ronald Reagan famously won a ‘landslide’ victory when coming to power in 1980, but only actually attracted the vote of 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but thoughts about introducing such a system elsewhere have frequently met with opposition and claims that it goes against people’s democratic right to abstain (this argument is largely rubbish, but no time for that now).
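The ‘landslide but only a sliver of eligible voters’ point is just two numbers multiplied together; here’s a minimal sketch of the arithmetic (the turnout and vote-share figures below are rough assumptions for illustration, not official statistics):

```python
# A candidate's share of *eligible* voters is turnout multiplied by vote share.
turnout = 0.53      # assumption: roughly 53% of eligible US voters turned out in 1980
vote_share = 0.51   # assumption: Reagan took roughly 51% of the votes actually cast

eligible_share = turnout * vote_share
print(f"Share of all eligible voters: {eligible_share:.0%}")  # → about 27%
```

The exact result depends on which turnout estimate you use, which is why figures in the high twenties get quoted; the point is simply that a ‘landslide’ share of votes cast can still be a small minority of everyone entitled to vote.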

A lot of reasons have been suggested for this trend, among them political apathy, laziness, and the idea that having had the right to choose our leaders for so long means we no longer find the idea special or worth exercising. By way of contrast, the presidential election a little while ago in Venezuela – a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos – saw a voter turnout of nearly 90% (incumbent president Hugo Chavez winning with 54% of the vote to take his fourth term of office, in case you were interested), making Reagan look boring by comparison.

However, another, more interesting (hence why I’m talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, and perhaps one or two others who may be standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you’re trying to avoid the ire of the rabidly aggressive ‘NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS’ brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people, and given the number of different issues they hold views on that mean something to you, the chance of any one of them matching your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: that when the US Declaration of Independence was first drafted over two centuries ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost whatever it likes (I said ALMOST) and still be confident that it will get a significant proportion of the country voting for it; not only that, but its unpopular decisions can often be ‘balanced out’ by more popular, mass-appeal ones, rather than its every decision being the direct will of the people.

One solution would be to have a system more akin to Greek democracy, where every issue is answered by a referendum which the government must obey. However, this presents just as many problems as it solves; referendums are expensive and time-consuming to set up and run, and if they became commonplace they could further deepen the existing problem of voter apathy. Only the most actively political would vote in every one, returning the real power to the hands of a relative few who, unlike previously, haven’t been voted in. Perhaps the most pressing issue with this solution, though, is that it renders the roles of MPs, representatives, senators and even Prime Ministers & Presidents rather pointless. What is the point of our society choosing those who really care about the good of their country and have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce their role to that of administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we’ve got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if it’s simply because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily what’s best for the country as a whole. Take Greece at the moment; massive protests are (or at least were; I know everyone’s still pissed off about it) underway due to the austerity measures imposed by the government, because of the crippling economic suffering that is sure to result. However, the politicians know that such measures are necessary and are refusing to budge on the issue; desperate times call for difficult decisions (OK, I know there were elections almost entirely centred on this decision that sided with austerity, but shush- you’re ruining my argument). To pick another example, President Obama (like several Democratic candidates before him) has met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, this is something he believes in very strongly, and he has finally managed to get it through Congress; if he wins the election later this year, we’ll see how well he executes it.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protest in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don’t know; that whole ‘internet petitions get debated in parliament’ thing the British government recently imported from Switzerland is a nice idea but, just like more traditional forms of protest, gives those in power no genuine categorical imperative to change anything. If I had a solution, I’d probably be running for government myself (which is one option that definitely works- just don’t all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

The End of The World

As everyone who understands the concept of buying a new calendar when the old one runs out should be aware, the world is emphatically due to not end on December 21st this year thanks to a Mayan ‘prophecy’ that basically amounts to one guy’s arm getting really tired and him deciding ‘sod carving the next year in, it’s ages off anyway’. Most of you should also be aware of the kind of cosmology theories that talk about the end of the world/the sun’s expansion/the universe committing suicide that are always hastily suffixed with an ‘in 200 billion years or so’, making the point that there’s really no need to worry and that the world is probably going to be fine for the foreseeable future; or at least, that by the time anything serious does happen we’re probably not going to be in a position to complain.

However, when thinking about this, we come across a rather interesting, if slightly macabre, gap; an area nobody really wants to talk about thanks to a mixture of lack of certainty and simple fear. At some point in the future, we as a race and a culture will surely not be here. Currently, we are. Therefore, between those two points, the human race is going to die.

Now, from a purely biological perspective there is nothing especially surprising or worrying about this; species die out all the time (in fact we humans are getting so good at inadvertent mass slaughter that between 2 and 20 species are going extinct every day), and others evolve and adapt to slowly change the face of the earth. We humans and our few thousand years of existence, and especially our mere two or three thousand of organised mass society, are the merest blip in the earth’s long and varied history. But we are also unique in more ways than one; we are the first species to remove ourselves, to a very great extent, from the endless fight for survival and to start taking control of events once so far beyond our imagination as to be put down to the work of gods. If the human race is to die, as it surely will one day, we are simply getting too smart and too good at thinking about these things for it to be the kind of gradual decline & changing of a delicate ecosystem that characterises most ‘natural’ extinctions. If we are to go down, it’s going to be big and it’s going to be VERY messy.

In short, with the world staying as it is and as it has been for the past few millennia, we’re not going to be dying out any time soon. This is not biologically unusual either; when a species goes extinct it is usually the result of either another species in direct competition out-competing it and causing it to starve, or a change in environmental conditions that leaves it no longer well-adapted to the environment it finds itself in. But once again, human beings appear to be rather above all this; having carved out what isn’t so much an ecological niche as a categorical redefining of the way the world works, there is no other creature that could be considered our biological competitor, and the thing that has always set humans apart ecologically is our ability to adapt. From the ice ages, where we hunted mammoth, to the African deserts, where the San people still live in isolation, there are very few things the earth can throw at us that are beyond the wit of humanity to live through. Especially a human race that is beginning to look upon terraforming and cultured food as a pretty neat idea.

So, if our environment is going to change sufficiently for us to begin dying out, things are going to have to change not only in the extreme, but very quickly as well (well, quickly in geological terms at least). This required pace of change limits the number of potential extinction options to a very small, select list. Most of these you could make a disaster film out of (and in most cases one has), but one that is slightly less dramatic (although they still did end up making a film about it) is global warming.

Some people are adamant that global warming is either a) a myth, b) nothing to do with human activity or c) both (which kind of seems a contradiction in terms, but hey). These people can be safely categorised under ‘don’t know what they’re *%^&ing talking about’, as any scientific explanation that covers all the available facts cannot fail to reach the conclusion that global warming not only exists, but that it’s our fault. Not only that, but it could very well genuinely screw up the world- we are used to the idea that, in the long run, somebody will sort it out, we’ll come up with a solution and it’ll all be OK, but one day we might have to come to terms with a state of affairs where the combined efforts of our entire race are simply not enough. It’s like the way cancer always happens to someone else, until one morning you find a lump. One day, we might fail to save ourselves.

The extent to which global warming looks set to screw around with our climate is currently unclear, but some potential scenarios are extreme to say the least. Nothing is ever quite going to match up to the picture portrayed in The Day After Tomorrow (for the record, the Gulf Stream would take around a decade to shut down if/when it does so), but some scenarios are pretty horrific. Some predict the flooding of vast swathes of the earth’s surface, including most of our biggest cities, whilst others predict mass desertification, a collapse of many of the ecosystems we rely on, or polar conditions sweeping across Northern Europe. The prospect of the human population being decimated is a very real one.

But destroyed? Totally? After thousands of years of human society slowly getting the better of and dominating all that surrounds it? I don’t know about you, but I find that quite unlikely- at the very least, it seems to me like it’s going to take more than just one wave of climate change to finish us off completely. So, if climate change is unlikely to kill us, then what else is left?

Well, in rather a nice, circular fashion, cosmology may have the answer, even if we don’t somehow manage to pull off a miracle and hang around long enough to let the sun’s expansion get us. We may one day be able to blast asteroids out of existence. We might be able to stop the super-volcano that is Yellowstone National Park blowing itself to smithereens when it erupts, as it is due to in the not-too-distant future (we also might fail at both of those things, and let either wipe us out, but ho hum). But could we ever prevent a gamma-ray burst from some nearby stellar cataclysm being aimed at us, of a power sufficient (so some hypothesise) to have caused the third-largest extinction in earth’s history the last time one struck? Well, we’ll just have to wait and see…

Attack of the Blocks

I spend far too much time on the internet. As well as putting many hours of work into trying to keep this blog updated regularly, I while away a fair portion of time on Facebook, follow a large number of video series and webcomics, and can often be found wandering through the recesses of YouTube (an interesting and frequently harrowing experience that can tell one an awful lot about the extremes of human nature). But there is one thing that any resident of the web cannot hope to avoid for any great period of time, and quite often doesn’t want to: the strange world of Minecraft.

Since its release as a humble alpha-version indie game in 2009, Minecraft has boomed to become a runaway success and something of a cultural phenomenon. By the end of 2011, before it had even been released in its final form, Minecraft had registered 4 million purchases and four times that many registered users, which isn’t bad for a game that has never advertised itself, has spread semi-virally among nerdy gamers over its mere three-year history, and was made purely as an interesting project by its creator Markus Persson (aka Notch). Thousands of videos, ranging from gameplay to some quite startlingly good music videos (check out the work of Captain Sparklez if you haven’t already), litter YouTube, and many of the game’s features (such as TNT and the exploding mobs known as Creepers) have become memes in their own right to some degree.

So then, why exactly has Minecraft succeeded where hundreds and thousands of games have failed, becoming a revolution in gamer culture? What is it that makes Minecraft both so brilliant, and so special?

Many, upon being asked this question, tend to revert to extolling the virtues of the game’s indie nature. Created entirely without funding, as an experiment in gaming rather than a profit-making exercise, Minecraft is firmly rooted in the humble sphere of independent gaming, and it shows. One obvious feature is the game’s inherent simplicity: initially featuring solely the ability to wander around and place and destroy blocks, the controls are mainly (although far from entirely) confined to ‘move’ and ‘use’, whether that latter function be shoot, slash, mine or punch down a tree. The basic, cuboid, ‘blocky’ nature of the game’s graphics allows for simplicity of production whilst creating an iconic, retro aesthetic that makes it memorable and distinctive to look at. Whilst the game has frequently been criticised for not including a tutorial (I myself took a good quarter of an hour to find out that you started by punching a tree, and a further ten minutes to work out that you were supposed to hold down the mouse button rather than repeatedly click), this is another common feature of indie gaming, partly because it saves time in development, but mostly because it makes the game feel like it is not pandering to you, allowing indie gamers to enjoy some degree of elitism in being good enough to work it out for themselves. This also ties in with the very nature of the game: another criticism used to be (and, to an extent, still is, even with the addition of the Enderdragon as a final win objective) that the game appeared largely devoid of a point, existing only for its own sake. This is entirely true (whether you view that as a bonus or a detriment being entirely your own opinion), and this idea of an unfamiliar, experimental game structure is another feature common, in one form or another, to a lot of indie games.

However, to me these do not seem entirely worthy of the name ‘answers’ to the question of Minecraft’s phenomenal success, because they do not adequately explain why Minecraft rose to such prominence whilst other, often similar, indie games have been left in relative obscurity. Limbo, for example, is a side-scrolling platformer and a quite disturbing, yet compelling, in-game experience, wringing almost as much intrigue and puzzle from a set of game mechanics simpler even than those of Minecraft. It has also received critical acclaim often far in excess of Minecraft’s (which has received a positive, but not wildly amazed, response from critics), and yet is still known only to a select few. Amnesia: The Dark Descent has often been described as the greatest survival horror game in history, incorporating a superb set of graphics, a three-dimensional world view (unlike the 2D view common to most indie games) and the most pants-wettingly terrifying experience anyone who’s ever played it is likely to face; but again, it is confined to the indie realm. Hell, Terraria is basically Minecraft in 2D, but has sold around a fortieth as many copies as Minecraft itself. All three of these games have received fairly significant acclaim and coverage, and rightly so, but none has become the riotous cultural phenomenon that Minecraft has, and none has had an Assassin’s Creed mod (first example that sprang to mind).

So… why has Minecraft been so successful? Well, I’m going to stick my neck out here, but to my mind it’s because it doesn’t play like an indie game. Whilst most independently produced titles are 2D, confined to fairly limited surroundings and made as simple & basic as possible to save on development (Amnesia can be regarded as an exception), Minecraft takes its own inherent simplicity and blows it up to a grand scale. It is a vast, open-world sandbox game, with vague resonances of the Elder Scrolls games and MMORPGs, taking the freedom, exploration and experimentation that have always been the advantages of that branch of the AAA world and combining them with the innovative, simplistic gaming experience of its indie roots. In some ways it’s similar to Facebook, in that it takes a simple principle and then applies it on the largest stage possible, and both have enjoyed a similarly explosive rise to fame. The randomly generated worlds provide infinite caverns to explore, endless mobs to slay, and all the space imaginable to build the grandest of castles, the largest of cathedrals, or the SS Enterprise if that takes your fancy. There are a thousand different ways to play the game on a million different planes, all based on just a few simple mechanics. Minecraft is the best of indie and AAA blended together, and is all the more awesome for it.
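Incidentally, the ‘randomly generated worlds’ trick is simpler than it sounds; here is a tiny sketch of the general idea of seeded procedural generation (an illustration only, and emphatically not Minecraft’s actual terrain algorithm): one seed plus a few simple rules yields an effectively endless, reproducible landscape that never needs to be stored in full.

```python
import random

def terrain_heights(seed, width, base=64, max_step=1):
    """Generate a reproducible 1D strip of terrain heights via a seeded
    random walk: the same seed always yields the same 'world'."""
    rng = random.Random(seed)  # a fixed seed makes the world reproducible
    heights = [base]
    for _ in range(width - 1):
        # each column's height drifts a little from its neighbour's
        heights.append(heights[-1] + rng.randint(-max_step, max_step))
    return heights

# The same seed regenerates the identical landscape on demand, so a world
# can be arbitrarily large without being generated (or saved) up front.
assert terrain_heights(42, 100) == terrain_heights(42, 100)
print(terrain_heights(42, 10))
```

Real games layer smoother noise functions and many more rules on top, but the principle is the same: enormous, explorable variety from a handful of simple mechanics.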