Poverty Changes

£14,000 is quite a large amount of money. Enough for 70,000 Freddos, a decade’s worth of holidays, two new Nissan Pixos, several thousand potatoes or a gold-standard racing pigeon. However, if you’re trying to live off just that amount in modern Britain, it quickly seems quite a lot smaller. Half of it could easily disappear on rent, whilst the average British family will spend a further £4,000 on food (significantly more than the European average, for one reason or another). Then we must factor in tax, work-related expenses, various repair bills, a TV licence, utility and heating bills, petrol money and other transport expenses, and it quickly becomes apparent that living on this amount will require some careful budgeting. Not to worry too much, though; it’s certainly possible to keep the body and soul of a medium-sized family together on £14k a year, if not absolutely comfortably, and in any case 70% of British families have an annual income in excess of this amount. It might not be a vast amount to live on, but it should be about enough.

However, there’s a reason I quoted £14,000 specifically: I recently saw another statistic saying that if your income is above 14 grand a year, you are among the richest 4% of people on planet Earth. Or, to put it another way, if you were on that income and were to select somebody totally at random from our species, then 24 times out of 25 you would be richer than them.

Now, this slightly shocking fact, as well as being a timely reminder of the prevalence of poverty amongst fellow members of our species, raises an interesting question: if £14,000 is only just about enough to let one’s life operate properly in modern Britain, how on earth does the vast majority of the world manage to survive at all on significantly less than this? More than 70% of the Chinese population lived on less than $5 a day in 2008 (admittedly, the rate of Chinese poverty is decreasing at a staggering pace thanks to the country’s booming economy), and 35 years ago more than 80% were considered to be in absolute poverty. How does this work? How does most of the rest of the world physically survive?

The obvious answer is that much of it barely does. Despite the last few decades of massive improvement in living standards and poverty levels worldwide, the World Bank estimates that some 20% of the world’s populace lives below the absolute poverty line of $1.50 per person per day, or £365 a year- down from around 45% in the early 1980s, so Bob Geldof’s message has packed a powerful punch. This is the generally accepted threshold below which a person cannot physically keep body and soul together, and having such a huge proportion of people living below this marker tends to drag down the global average. The last quarter-century has seen a definitive effort on the part of humanity to reduce poverty, but it remains a truly vast issue across the globe.
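Incidentally, the arithmetic behind those figures is worth making explicit- $1.50 a day only comes out at £365 a year if you assume an exchange rate of roughly $1.50 to the pound. That rate is my assumption (about right at the time of writing), not the World Bank’s. A quick sketch:

```python
# Assumed exchange rate (roughly the 2012 dollar/pound rate);
# an illustrative assumption, not an official World Bank figure.
DOLLARS_PER_POUND = 1.50

poverty_line_usd_per_day = 1.50
poverty_line_gbp_per_year = poverty_line_usd_per_day / DOLLARS_PER_POUND * 365
print(poverty_line_gbp_per_year)  # 365.0

# Compare with the £14,000 figure from earlier:
uk_income = 14_000
print(round(uk_income / poverty_line_gbp_per_year))  # roughly 38 times the line
```

Thirty-eight times the absolute poverty line, and yet it still takes careful budgeting; that gap is what the rest of this post is about.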

However, to my mind the main reason a seemingly meagre amount of money in the first world would be considered bountiful wealth in the third is simply how economics works. We in the west are currently enjoying the fruits of two centuries of free-market capitalism, which has fundamentally changed the way our civilisation functions. When we as a race first came up with the concept of civilisation- of pooling and exchanging skills and resources for the betterment of the collective- this was largely confined to the local community, or at least to the small scale. Farmers provided for those living in the surrounding twenty miles or so, as did brewers, hunters, and all other such ‘small businessmen’, as they would be called today. The concept of a country provided security from invasion and legal support on a larger scale, but that was about it; any international trade was generally conducted between kings and noblemen, and was very much small scale.

However, since the days of the British Empire and the Industrial Revolution, business has grown steadily bigger. It started with international trade between the colonies and the rich untapped resources the European imperial powers found there, moved on to the industrial-scale manufacture of goods, and then to the high-intensity sale of consumer products to the general population. Now we have vast multinational companies organising long, exhaustive chains of supply, manufacture and retail, and our society has become firmly rooted in this intensely commercial international economy. Without constantly selling vast quantities of stuff to one another, the western world as we know it simply would not exist.

This process causes many side effects, but one is of particular interest: everything becomes more expensive. To summarise very simply, the basic principle of capitalism is that workers put in work and skill to increase the value of something; that something then gets sold, and the worker gets some of the difference between the cost of materials and the cost of sale as a reward for their effort. For this to work, one’s reward must be enough to purchase the stuff needed to keep one alive; capitalism rests on our bodies being efficient enough at turning the food we eat into the energy we can use to work. If business is successful, then the workers of a company (here ‘workers’ covers everyone from factory floor to management) will gain money in the long term, enabling them to spend more. This means the market increases in size, and people can either sell more goods or sell them at a higher price, so goods become worth more, so the people making those goods earn more money, and so on.

The net result is that in an ‘expensive’ economy everyone has a relatively high income and high expenditure, because all goods, taxes, land, utilities and so on cost quite a lot; but for all practical purposes this is remarkably similar to a ‘cheap’ economy, where the full force of western capitalism hasn’t quite taken hold yet- for whilst the people living there have less money, the stuff that is there costs less, having not been through the corporate wringer. So, why would we find it tricky to live on less money than the top 4% of the world’s population? Blame the Industrial Revolution.

…but some are more equal than others

Seemingly the key default belief of any modern, respectable government- and, indeed, of any well-brought-up child of the modern age- is egalitarianism: that all men are born equal. Numerous documents, from the US Declaration of Independence to the UN’s Universal Declaration of Human Rights, have proclaimed this as a ‘self-evident truth’, and anyone who still blatantly clings to the idea that some people are born ‘better’ than others by virtue of their family having more money is dubbed out of touch at best, and (bizarrely) a Nazi at worst. This is surprising given the extent to which we still set store by a person’s rank or status.

I mean, think about it. A child from a well-respected, middle-class family with two professional parents will invariably get more opportunities in life, and will frequently be considered more ‘trustworthy’, than a kid born into a broken home with a mother on benefits and a father in jail, particularly if his accent (especially) or skin colour (possibly to a slightly lesser extent in Europe than in the US) suggests as much. Someone with an expensive, tailored suit stands a better chance at a job interview than a candidate with an old, fading jacket and worn knees on trousers he has never been rich enough to replace, and I haven’t even started on the wage and job-availability gap between men and women, despite the fact that there are nowadays more female university graduates than male ones. You get the general idea. We might think that all are born equal, but that doesn’t mean we treat them like that.

Some have said that this, particularly in the world of work, is down to the background and age of the people concerned. Particularly in large, old and incredibly valuable corporate enterprises such as banks, the senior staff and shareholders tend to be on the grey end of things; the majority are male, and many will have had the top-quality private education that allowed them to get there, so the argument put forward is that these men were brought up surrounded by a ‘public schoolers are fantastic and everyone else is a pleb’ mentality. It is without doubt true that very few companies have an average board-member age below 50, and many are above 65; in fact the average age of a CEO in the UK has recently gone up from a decade-long value of 51 to nearly 53. However, the evidence suggests that the inclusion of younger board members and CEOs generally benefits a company by providing a fresher understanding of the modern world- a conclusion that could only be drawn because there are a large number of young, high-ranking businesspeople to evaluate. And anyway, in most job interviews it’s less likely to be the board asking the questions than a recruiting officer of medium business experience- this may be an issue, but I don’t think it’s the key thing here.

It could well be that the true answer is that there is no cause at all, and the whole business is nothing more than a statistical blip. In Freakonomics, an analysis was done to find the twenty ‘blackest’ and ‘whitest’ boys’ names in the US (I seem to remember DeShawn was the ‘blackest’ and Jake the ‘whitest’), and the job prospects of people with names on the two lists were then compared. The results suggested that people with one of the ‘white’ names did better in the job market than those with ‘black’ names, perhaps suggesting that interviewers were being racist, subconsciously or not. But a statistical analysis revealed this not, in fact, to be the case; we must remember that black Americans are, on average, less well off than their white countrymen, meaning they are more likely to go to a dodgy school, have problems at home or hang around with the wrong friends. Black people therefore do worse, on average, in the job market because they are more likely to be less well qualified than white equivalents, making them, from a purely analytical standpoint, often worse candidates. Jake was more likely to get a job than DeShawn because Jake was simply more likely to be a better-educated guy, so any racism on the part of job interviewers was not prevalent enough to be statistically significant. To some extent we may be looking at the same thing here- people who turn up to an interview in cheap or hand-me-down clothes are likely to have come from a poorer background than someone in a tailored Armani suit, and are therefore likely to have had a lower standard of education, making them less attractive candidates to an interviewing panel. Similarly, women tend to drop their careers earlier in life if they want to start a family, since the traditional family model puts the man as chief breadwinner, meaning they are less likely to advance up the ladder and earn the high wages that could even out the difference in male/female pay.

But statistics cannot quite cover everything- to use another slightly tangential bit of research, a study done some years ago found that teachers gave higher marks to essays written in neat handwriting than they did to identical essays written in a messier hand. The neat handwriting suggested a diligent approach to learning and a good education in the child’s formative years, making the teacher think the child was cleverer, and thus deserving of more marks, than the scruffier writer. Once again, we can draw parallels to our two guys in their different suits. Mr Faded may have good qualifications and present himself well, but his attire suggests to his interviewers that he is from a poorer background. We have a subconscious understanding of the link between poorer backgrounds and the increased risk of poor education and other compromising factors, and so the interviewers unconsciously conclude that he has been less well educated than Mr Armani, even if the evidence presented before them suggests otherwise. They are not trying to be prejudiced; they just think the other guy looks more likely to be as good as his paperwork suggests. Some of it isn’t even linked to such logical connections; research suggests that interviewers, just like people in everyday life, are drawn to those they feel are similar to them, and they might also make the subconscious link of ‘my wife stays at home and looks after the kids, there aren’t that many women in the office, so what’s this one doing here?’- again, not deliberate discrimination, but it happens.

In many ways this is an unfortunate state of affairs, and one that we should attempt to remedy in everyday life whenever and wherever we can. But a lot of the stuff that to a casual observer might look prejudiced, might seem to violate our egalitarian creed, we do without thinking, letting our brain make connections that logic says it should not. The trick is not just to ‘not judge a book by its cover’, but not to let your brain register that there’s a cover at all.

NMEvolution

Music has been called by some the greatest thing the human race has ever done, and at its best it is undoubtedly a profound expression of emotion more poetic than anything Shakespeare ever wrote. True, done badly it can sound like a trapped cat in a box of staplers falling down a staircase, but let’s not get hung up on details here- music is awesome.

However, music as we know it has only really existed for around a century or so, and many of the developments in music’s history that have shaped it into the tour de force it is in modern culture run in direct parallel to human history. The development of our race and the development of music thus run closely alongside one another, so I thought I might attempt a set of edited highlights of the former (well, of western history at least) by way of an exploration of the latter.

Exactly how and when the various instruments as we know them were invented and developed into their current forms is largely irrelevant (mostly since I don’t actually know and don’t have the time to research all of them), but historically they fell into one of two classes. The first could be loosely dubbed the ‘noble’ instruments- stuff like the piano, clarinet or cello, which were (and are) hugely expensive to make, required a significant level of skill to build, and were generally played for and by the rich upper classes in vast orchestras, playing centuries-old music written by the very few men with both the riches, social status and talent to compose it. On the other hand, we have the less historically significant, but just as important, ‘common’ instruments, such as the recorder and the ancestors of the acoustic guitar. These were a lot cheaper to make and thus more available to (although certainly far from widespread among) the poorer echelons of society, and it was on these instruments that tunes were passed down from generation to generation, accompanying traditional folk dances and the like; the kind of people who played such instruments very rarely had the time to spare to write anything new for them, and certainly stood no chance of making a living from them. And, for many centuries, that was it- what you played and what you listened to, if you did so at all, depended on who you were born as.

However, during the great socioeconomic upheaval and levelling that accompanied the 19th century industrial revolution, music began to penetrate society in new ways. The growing middle and upper-middle classes quickly adopted the piano as a respectable ‘front room’ instrument for their daughters to learn, and sheet music was rapidly becoming both available and cheap for the masses. As such, music began to become an accessible activity for far larger swathes of the population, and concert attendances swelled. This was the Romantic era of music composition, with the likes of Chopin, Mendelssohn and Brahms rising to prominence, and the orchestra grew considerably to its modern size of four thousand violinists, two oboes and a bored drummer (I may be a little out in my numbers here) as composers sought to add some new experimentation to their music. This experimentation with classical orchestral forms was continued through the turn of the century by a succession of orchestral composers, but this period also saw music head in a new and violently different direction: jazz.

Jazz was the quintessential product of the United States’ famous motto ‘E Pluribus Unum’ (From Many, One), being as it was the result of a mixing of immigrant US cultures. Jazz originated amongst America’s black community, many of whom were descendants of imported slaves or even former slaves themselves, and was the result of traditional African music blending with that of their forcibly adopted land. Whilst many black people were heavily discriminated against when it came to finding work, they found they could forge a living in the entertainment industry, in seedier venues like bars and brothels. First finding its feet in the irregular, flowing rhythms of ragtime music, the music of the deep south moved on to the more discordant patterns of blues in the early 20th century before finally incorporating a swinging, syncopated rhythm and an innovative sentiment of improvisation to invent jazz proper.

Jazz quickly spread like wildfire across the underground performing circuit, but it wouldn’t force its way into popular culture until the introduction of Prohibition in the USA. From 1920 all the way up until the presidency of Franklin D. Roosevelt (whose repeal of the law is a story in and of itself), the US government banned the manufacture and sale of alcohol, which (as was to be expected, in all honesty) simply forced the practice underground. Dozens of illegal speakeasies (venues of drinking, entertainment and prostitution, usually run by the mob) sprang up in every district of every major American city, and they were frequented by everyone from the poorest street sweeper to the police officers who were supposed to be closing them down. And in these venues, jazz flourished. Suddenly, everyone knew about jazz- it was a fresh, new sound to everyone’s ears, something that stuck in the head and, because of its ‘common’, underground connotations, quickly became the music of the people. Jazz musicians such as Louis Armstrong (a true pioneer of the genre) became the first celebrity musicians, and the way the music’s feel resonated with the happy, prosperous mood of the economic good times of the 1920s led that decade to be dubbed ‘the Jazz Age’.

Countless things allowed jazz and the genres that succeeded it to spread around the world- the invention of the gramophone further enhanced public access to music, as did the new cultural phenomenon of the cinema and even the Second World War, which allowed for truly international spread. By the end of the war, jazz, soul, blues, R&B and all their derivatives had spread from their mainly deep-south origins across the globe, blazing a trail for all other forms of popular music to follow. And, come the 50s, they did so in truly spectacular style… but I think that’ll have to wait until next time.

Aging

OK, I know it was a while ago, but who watched Felix Baumgartner’s jump? If you haven’t seen it, then you seriously missed out; the sheer spectacle of the occasion was truly amazing, unlike anything you’ve ever seen before. We’re fairly used to seeing skydives from aeroplanes, but usually we only see a long-distance shot, a jumper’s-eye view, or a view from the plane showing them being whisked away half a second after jumping. Baumgartner’s feat was… something else, the two images available for the actual jump being direct, static views of a totally vertical fall. Plus, they were angled so as to give a sense of the awesome scope of the occasion; one showed directly down to the earth below, showing the swirling clouds and the shape of the land, whilst the other gave a beautiful demonstration of the earth’s curvature. The height he was at made the whole thing particularly striking; shots from the International Space Station and the moon have shown the earth from further away, but Baumgartner’s unique height made everything seem big enough to be real, yet small enough to be terrifying. And then there was the drop itself; a gentle lean forward from the Austrian, followed by what can only be described as a plummet. You could visibly see the lack of air resistance, so fast was he accelerating compared to our other images of skydivers. The whole business was awe-inspiring. Felix Baumgartner, you sir have some serious balls.

However, I bring this story up not because of the event itself, nor the insane amount of media coverage it received, nor even the internet’s typically entertaining reaction to the whole business (this was probably my favourite). No, the thing that really caught my eye was a little something about Baumgartner himself: namely, that the man who holds the world records for the highest freefall, highest manned balloon flight, fastest unassisted freefall speed and second-longest freefall ever will be forty-four years old in April.

At his age, he would be ineligible for entry into the British Armed Forces, is closer to collecting his pension than to university, and has already experienced more than half his expected time on this earth. Most men his age are in the process of settling down, finding their place in some management company and getting slightly less annoyed at being passed over for promotion by some youngster with a degree and four boatloads of hopelessly naive enthusiasm. They’re in line for learning how to relax, taking up golf, being put onto diet plans by their wives and going to improving exhibitions of obscure artists. They are generally not throwing themselves out of balloons 39 kilometres above the surface of the earth, even if they were fit and mobile enough to get inside the capsule with half a gigatonne of sensors and pressure suit (I may be exaggerating slightly).

Baumgartner’s feats for a man of his age (he was also the first man to skydive across the English Channel, and holds a hotly disputed record for the lowest BASE jump ever) are not rare without reason. Human beings are, by their very nature, lazy (more on that another time) and tend to favour the simple, homely life rather than one that demands such a high-octane, highly stressful thrill ride of an existence. This tendency towards laziness also makes us grow more and more unfit as time goes by, our bodies slowly losing the ability our boundlessly enthusiastic childish selves had for scampering up trees and chasing one another, making such seriously impressive physical achievements rare.

And then there’s the activity itself; skydiving, and even more so BASE jumping, is a dangerous, injury-prone sport, and as such it is rare to find regular practitioners of Baumgartner’s age and experience who have not suffered some kind of reality-checking accident leaving them injured, scared or, in some cases, dead. Then there’s the fact that there are very few people rich enough and brave enough to give such an expensive, exhilarating hobby as skydiving a serious go, and even fewer with the clout, nous, ambition and ability to get a project such as Red Bull Stratos off the ground. And we must also remember that one has to overcome the claustrophobic, restrictive experience of doing the jump in a heavy pressure suit; even Baumgartner had to get help from a sports psychologist to get over the claustrophobia the suit caused him.

But then again, maybe we shouldn’t be too surprised. Red Bull Stratos was the culmination of years of effort in single-minded pursuit of a goal, and that required a level of experience in both skydiving and life in general that simply couldn’t be achieved by anyone younger than middle age- the majority of younger, perhaps even more ambitious, skydivers simply could not have got the whole thing done. And we might think that the majority of middle-aged people don’t achieve great things, but then in the grand scheme of things the majority of people don’t end up getting most of the developed world watching them of an evening. Admittedly, the majority of those who do the most extraordinary physical things are under 35, but there’s always room for an exceptional human to change that archetype. And anyway, look at the list of Nobel Prize winners and certified geniuses on our earth, our leaders and heroes. Many of them have turned their middle age into something truly amazing, and if their field happens to be quantum entanglement rather than BASE jumping then so be it; they can still be extraordinary people.

I don’t really know what the point of this post was, or exactly what conclusion I was trying to draw from it; it basically started because I thought Felix Baumgartner was a pretty awesome guy, and I happened to notice he was older than I’d thought he would be. So I suppose it would be best to leave you with a fact and a quote from his jump. Fact: when he jumped, his heart rate was measured as lower than the average resting (i.e. lying down doing nothing and not wetting yourself in pants-shitting terror) heart rate of a normal human, so clearly the guy is cool and relaxed to a degree beyond human imagining. Quote: “Sometimes you have to be really high to see how small you really are”.

Other Politicky Stuff

OK, I know I talked about politics last time, and no, I don’t want to start another series on this, but when writing my last post I found I got very rapidly sidetracked when I tried to use voter turnout as a way of demonstrating that everyone hates their politicians, so I thought I might dedicate a post to that particular train of thought as well.

You see, across the world, but predominantly in the developed west where the right to choose our leaders has been around for ages, fewer and fewer people are turning out to vote each time. By way of an example, Ronald Reagan famously won a ‘landslide’ victory when coming to power in 1980- but only actually attracted the votes of 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but thoughts of introducing such a system elsewhere have frequently met with opposition and claims that it goes against people’s democratic right to abstain (this argument is largely rubbish, but no time for that now).
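The trick behind that Reagan figure is simply that the share of the whole electorate backing a winner is vote share multiplied by turnout- a calculation that’s easy to forget. A minimal sketch (the 55%/53% numbers below are illustrative, not the actual 1980 results):

```python
def share_of_eligible(vote_share, turnout):
    """Fraction of all eligible voters who actually backed the winner."""
    return vote_share * turnout

# Illustrative numbers only: a 'landslide' 55% of votes cast on a 53%
# turnout amounts to the backing of barely over a quarter of the electorate.
print(round(share_of_eligible(0.55, 0.53), 4))  # 0.2915
```

Two healthy-sounding majorities multiply out to something alarmingly close to a quarter, which is the whole point of the turnout complaint.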

A lot of reasons have been suggested for this trend, among them political apathy, laziness, and the idea that, having had the right to choose our leaders for so long, we no longer find it special or worth exercising. By way of contrast, the presidential election a little while ago in Venezuela- a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos- saw a voter turnout of nearly 90% (incumbent president Hugo Chavez winning his fourth term of office with 54% of the vote, in case you were interested), making Reagan look boring by comparison.

However, another, more interesting (hence why I’m talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, plus perhaps one or two others who may be standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you’re trying to avoid the ire of the rabidly aggressive ‘NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS’ brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people, and given the number of different issues they hold views on that mean something to you, the chance of any one of them following your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: when the US Declaration of Independence was drafted nearly 250 years ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost what it likes (I said ALMOST) and still be confident of getting a significant proportion of the country voting for it; not only that, but its unpopular decisions can often be ‘balanced out’ by more popular, mass-appeal ones, rather than its every decision being the direct will of the people.

One solution would be a system more akin to Greek democracy, where every issue is answered by a referendum whose result the government must obey. However, this presents just as many problems as it answers; referendums are very expensive and time-consuming to set up and run, and if they became commonplace they could further enhance the existing problem of voter apathy. Only the most actively political would vote in every one, returning the real power to the hands of a relative few who, unlike previously, haven’t been voted in. Perhaps the most pressing issue with this solution, though, is that it renders the roles of MPs, representatives, senators and even prime ministers and presidents rather pointless. What is the point of our society choosing those who really care about the good of their country, who have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce them to administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we’ve got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if only because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily always what’s best for the country as a whole. Take Greece at the moment; massive protests are (or at least were; I know everyone’s still pissed off about it) underway over the austerity measures imposed by the government, because of the crippling economic suffering that is sure to result. However, the politicians know that such measures are necessary and are refusing to budge on the issue- desperate times call for difficult decisions (OK, I know there were elections almost entirely centred on this decision that sided with austerity, but shush- you’re ruining my argument). To pick another example, President Obama (and several Democratic candidates before him) met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, this is something he believes in very strongly, and he has finally managed to get it through Congress; if he wins the election later this year, we’ll see how well he executes it.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protest in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don’t know- that whole ‘internet petitions get debated in parliament’ thing the British government recently imported from Switzerland is a nice idea but, just like more traditional forms of protest, gives those in power no genuine categorical imperative to change anything. If I had a solution, I’d probably be running for government myself (which is one option that definitely works- just don’t all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over twelve and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given what 'understand' means when applied to computers. A computer is a rare example of a complex system whose every aspect an expert is genuinely capable of understanding in minute detail: what each part does, why it is there, and why it is (or, in some cases, shouldn't be) constructed to that particular specification. To understand a computer in its entirety is therefore an equally complex job, and this is one very good reason why computer nerds tend to be a solitary bunch, with few links to the rest of us and, indeed, to the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated a fair amount of reading into the hows and whys of its installation, along with which came quite a lot of information about the workings of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer's functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed for jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn't always reliable. The human brain is not built for infallibility and, not infrequently, made mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst the process still relied on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator in 1642, aged just 19. His original design wasn't much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (i.e. turning the 'units' cog four spaces whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances as well. It could also subtract, multiply and divide (with some difficulty), and moreover proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
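In modern terms, that carrying-over of cogs works something like the following- a toy sketch of my own in Python, illustrating the principle rather than the actual gearing of Pascal's machine:

```python
# A toy sketch of Pascal-style cog counting with carries.
# Each "cog" holds a digit 0-9, least significant first; turning
# the units cog past 9 nudges the tens cog one step, and so on.

def turn(cogs, places):
    """Advance the units cog by `places`, propagating carries up the chain."""
    carry = places
    for i in range(len(cogs)):
        total = cogs[i] + carry
        cogs[i] = total % 10    # this cog shows only its digit
        carry = total // 10     # the overflow nudges the next cog
        if carry == 0:
            break
    return cogs

counter = [7, 0, 0]   # a seven already counted
turn(counter, 4)      # turn the units cog four spaces
print(counter)        # [1, 1, 0] -> reads as eleven
```

The point being that the arithmetic is entirely in the mechanism- the operator only chooses how far to turn the cog, so the only mistake left to make is entering the wrong number.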

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several, of a range of designs, were built during the 18th century as showpieces, but in the 19th the release of Thomas de Colmar's Arithmometer, after 30 years of development, signified the birth of an industry. It wasn't a large one, since the machines were still expensive and of only limited use, but de Colmar's machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won't even pretend to fully understand, but one that could be used to closely approximate logarithmic and trigonometric functions. What made the difference engine special, however, was that the initial setup of the device- the positions of the various columns and so forth- determined which function the machine computed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
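The trick the engine exploited is that once you've set up a table of finite differences for a polynomial (that setup being, in effect, the machine's 'program'), every further value can be generated by additions alone, with no multiplication at all. Here's a simplified Python sketch of that principle- my own illustration of the maths, not a model of Babbage's actual column layout:

```python
# Evaluating a polynomial by finite differences, the principle behind
# the difference engine: seed the columns with initial differences,
# then each "crank" of the engine is nothing but additions.

def difference_table(values):
    """Initial column settings: f(0) plus successive forward differences."""
    diffs = [values[0]]
    row = values
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def run_engine(diffs, steps):
    """Crank the engine: each step adds each column into the one above it."""
    cols = list(diffs)
    out = []
    for _ in range(steps):
        out.append(cols[0])
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]
    return out

# f(x) = x^2 + x + 41, a polynomial Babbage is said to have demonstrated
f = lambda x: x * x + x + 41
seed = [f(x) for x in range(3)]   # a degree-2 polynomial needs 3 seed values
print(run_engine(difference_table(seed), 6))
# [41, 43, 47, 53, 61, 71]
```

Because the differences of a degree-n polynomial become constant after n steps, a machine with enough columns can tabulate any such polynomial- and hence approximate log and trig functions- purely by repeated addition.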

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but since he was often brash (once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him) and prized academia above fiscal matters & practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine; they pulled the plug, and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program (which could be a lot more complicated than just adding or subtracting) and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had 'control flow systems' incorporated to ensure that the steps of a program were carried out in the correct order. We may never know whether Babbage's analytical engine would have worked, since it has never been built, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms), from SOPA to ACTA, has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they have also provoked a lot of debate, ranging in intelligence from the intellectual to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of Stone Age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else's chance of survival, and of ensuring that the next meal would be coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, this design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and so on; through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on bigger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what's good for the country starts to diverge from what's good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of 'get yer own food' (as opposed to a more communist one of 'hunters pool your resources and share between everyone', as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for 'the good of the tribe' has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the start of the 16th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers' fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social changes of the intervening centuries: the establishment of a less feudal, more merit-based social system, with proper professions springing up in place of general peasantry, meant that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren't about to give them up to their ruler 'for the good of the tribe'. They wanted payment, compensation for their work, a financial acknowledgement of the hours they'd put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, so somebody else with access to a printing press could also make money from the work you put in by running off copies of your book on their machine, profiting from your labour. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material that could not only increase the nation's happiness but also have great practical use are a valuable resource, and that they should be doing what they can to promote the production of that material, if only to save having to put in the large investment of time and resources themselves. So it makes sense to encourage the production of this material by ensuring that people have a financial incentive to produce it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…