Connections

History is a funny old business; an endless mix of overlapping threads, intermingling stories and repeating patterns that make for fascinating study for anyone who knows where to look. However, the part of it that I enjoy most involves taking the longitudinal view on things: linking two seemingly innocuous, or at least totally unrelated, events and following the trail of breadcrumbs that allows the two to connect. Things get even more interesting when the relationship is causal, so today I am going to follow the trail of one of my favourite little stories: how a single storm was, in the long run, responsible for the Industrial Revolution. Which is especially surprising given that the storm in question occurred in 1064.

This particular storm blew up in the English Channel, and doubtless blew many ships off course, including one that had left the English port of Bosham (opposite the Isle of Wight). Records don't say why the ship was making its journey, but its passenger was certainly significant: Harold Godwinson, Earl of Wessex and possibly the most powerful man in the country after King Edward the Confessor. He landed (although that might be overstating the dignity and intention of the process) at Ponthieu, in northern France, and was captured by the local count, who handed him over to his liege lord once word of the visitor got out. The liege in question was Duke William of Normandy, or 'William the Bastard' as he was also known (he was the illegitimate son of the old duke and a tanner's daughter). Harold's next move was (apparently) to accompany his captor to a battle just up the road in Brittany. He then tried to negotiate his freedom, which William granted on the condition that Harold swear an oath to him that, were the childless King Edward to die, he would support William's claim to the throne (England at the time operated a sort of elective monarchy, in which the king was confirmed by a council of nobles known as the Witenagemot). According to the Bayeux Tapestry, Harold took this oath and left France; but two years later King Edward fell into a coma. In his last moment of consciousness before what was surely an unpleasant death, he apparently gestured to Harold, standing by his bedside. This was taken by Harold, and the Witenagemot, as the appointment of a successor, and Harold accepted the throne. This understandably infuriated William, who considered it a violation of Harold's oath, and he subsequently invaded England. His timing coincided with another claimant, Harald Hardrada of Norway, deciding to push his own case for the throne, and in the resulting chaos William came to the fore.
He became William the Conqueror, and the Normans controlled England for the next several hundred years.

One of the things the Normans brought with them was a new view on religion; England was already Christian, but the two Churches' views on certain subjects differed slightly. One such subject was serfdom, a form of slavery that was very popular among the feudal lords of the time. Serfs were basically slaves, in that they could be bought or sold as commodities; they were legally bound to the land they worked, and were thus traded and owned by the feudal lords who owned that land. In some countries it was not unusual for one's lord to change overnight after a drunken card game; Leo Tolstoy lost a large part of his estate in just such an incident, but that's another story. It was not a good existence for a serf, completely devoid of any form of freedom, but for a feudal lord it was great: cheap, guaranteed labour, and thus income from one's land, with no real risk attached. However, the Norman church's interpretation of Christianity was morally opposed to the idea, and serfs began to be replaced by free peasants as a source of agricultural labour. A free peasant was not tied to the land but rented it from his liege, along with the right to use various pieces of land and equipment; the feudal lord still had an income, but if he wanted goods from his land he had to buy them from his peasants, and there were limits on the control he had over them. If a peasant so wished, he could pack up and move to London, or join a ship; whatever he wanted in his quest to make his fortune. The vast majority were never faced with this as a realistic choice, but the principle was important. A later Norman king, Henry I, also reorganised the legal system and introduced the role of sheriff, producing a society based around something almost resembling justice.

[It is worth noting that the very last serfs were not freed until the reign of Queen Elizabeth I in the 1500s, and that subsequent British generations during the 18th century had absolutely no problem with trading in black slaves, justifying it partly by never actually seeing the slaves and partly by taking the view that black people weren't proper humans anyway. We can be disgusting creatures.]

A later king further enhanced this concept of justice, even if completely by accident. King John was the younger brother of inexplicable national hero King Richard I, aka Richard the Lionheart or Coeur de Lion (seriously, the man was a Frenchman who visited England about twice, both times to raise money for his military campaigns, and later saddled his people with one of the largest ransoms in history when he had to be bought back from the Holy Roman Emperor; how he came to national prominence I will never know), and John was unpopular. He levied heavy taxes on his people to pay for costly and invariably unsuccessful military campaigns, and whilst various incarnations of Robin Hood have made him seem a lot more malevolent than he probably was, he was not a good king. He was harsh to his people, and successfully pissed off peasant and noble alike; eventually the barons presented John with an ultimatum to limit his power and restore some of theirs. However, the wording of the document also granted some basic and fundamental rights to the common people. This document was known as Magna Carta: one of the most important legal documents in history, and arguably the cornerstone in the temple of western democracy.

The long-term ramifications of this were huge; numerous wars were fought over the power it gave the nobility in the coming centuries, and Henry III (nine years old when he took over from his father John) was eventually forced to call the first parliament, which, crucially, featured both barons (the noblemen, in what would become the House of Lords) and burghers (administrative leaders and representatives of the cities and commoners, in the House of Commons). The Black Death (which wiped out much of the peasant population and thus raised the value of the few who were left) greatly increased the importance of peasants across Europe for purely economic reasons for a few years, but over the next few centuries multiple generations of kings in several countries would slowly return things to the old ways, with them on top and their nobles kept subservient. In countries such as France, a nobleman got himself power, rank, influence and wealth by getting into bed with the king (in the cases of some ambitious noblewomen, quite literally); but in England the existence of a Parliament meant that no matter how much the king's power increased through the reigns of Plantagenets, Tudors and Stuarts, the gentry had some form of national power and community, and the people were, to some nominal degree, represented as well. This in turn meant that it became not uncommon for the nobility and high-ranking (or at least rich) ordinary people to come into contact, and created a comparatively fluid class system. Whilst in France a middle-class businessman was looked on with disdain by the lords, in Britain he was far more likely to be offered a peerage; nowadays the practice is considered undemocratic, but several hundred years ago this was the cutting edge of societal advancement.
It was this 'lower' class of gentry, comprising the likes of John Hampden and Oliver Cromwell, who would precipitate the English Civil War when King Charles I tried to rule without Parliament altogether (as opposed to his predecessors, who merely chose not to listen to it a lot of the time); when the monarchy was restored (after several years of bloodshed and puritan brutality at the hands of Cromwell's New Model Army, and a seemingly paradoxical decade spent with Cromwell governing with only a token parliament, when he used one at all), Parliament was the political force in Britain. When James II once again tried his father's tactic of proclaiming himself a God-sent ruler whom all should obey unquestioningly, Parliament's response was to invite the Dutch stadtholder William of Orange over to replace James and become William III, which he duly did. Throughout the reigns of the remaining Stuarts and the Hanoverian monarchs (George I to Queen Victoria), the power of the monarch became steadily more ceremonial as the two key political factions of the day, the Whigs (later to become the Liberal, and subsequently Liberal Democrat, Party) and the Tories (as today's Conservative Party is still known), slugged it out for control of Parliament, the newly created role of 'First Lord of the Treasury' (or Prime Minister; the job wasn't regularly filled from the Commons for another century or so) and, eventually, the country. This brought political stability, and with it the foundations of modern democracy.

But I'm getting ahead of myself; what does any of this have to do with the Industrial Revolution? Well, we can partly credit the political and financial stability of the time, which let corporations and big business operate simply and effectively, but I think the key reason it happened here has to do with the ambitious people themselves. In Eastern Europe and Russia in particular, there were two classes of people: nobility who were content to scheme and enjoy their power, and the masses of illiterate serfs. In most of Western Europe there was a growing middle class, but the monarchy and nobility were united in keeping them under their thumb and preventing them from making any serious impact on the world. The French got a bloodthirsty revolution and political chaos as an added bonus, whilst the Russians waited another century to get sufficiently pissed off at the Tsar to precipitate a communist revolution. In Britain, however, there were no serfs, and corporations were built from the middle classes. These people's primary concern wasn't rank or long-running feuds, disagreements over land or who was sleeping with the king; they wanted to make money, and would do so by every means at their disposal. This was an environment ripe for entrepreneurship, for an idea worth thousands to take the world by storm, and they took to it with relish. The likes of Arkwright, Stephenson and Watt came from the middle classes and were backed by middle-class industry, and the rest of Britain came along for the ride as the country's coincidentally vast coal resources were put to use powering the change. Per capita income, population and living standards all soared, and despite the horrors that an age of unregulated industry certainly wrought on its populace, this period of unprecedented change was the vital step in the formation of the world as we know it today.
And to think that all this can be traced, through centuries of political change, to the genes of uselessness that would later become King John crossing the channel after one unfortunate shipwreck…

And apologies; this post ended up being a lot longer than I intended it to be.


What the @*$!?

WARNING: Swearing will feature prominently in this post, as will a discussion of sexual material. Children, if your parents shout at you for reading this then it is officially YOUR PROBLEM. Okay?

I feel this may also be the place to apologise for missing a week of posts; I didn't stop writing them, I just stopped posting them. Don't know why.

Language enables us to do many things: articulate our ideas, express our sorrow, reveal our love, and tell somebody else's embarrassing stories, to name but a few. Every language has approached these and the other practicalities of everyday life in its own way (there is one language I have heard of with no words for left or right, meaning its speakers refer to everything in terms of compass points, and all members of the tribe thus have an inbuilt sense of where north is at all times), but there is one feature that every language from Japanese to Klingon has managed to incorporate, something without which a language would not be as complete and fully formed as it ought to be, and which is almost always the first thing learnt by a student of a new language: swearing.

(Aside Note: English, partly due to its flexible nature and the fact that it didn't really develop as a language until everyone else had rather shown it the way, has always been a particularly good language for being thoroughly foul and dirty, and since it's the only language I have any reasonable degree of proficiency in, I think I'll stick to it for the time being. If anyone knows anything interesting about swearing in other languages, please feel free to share it in the comments.)

Swearwords and bad language generally have one of three sources. Many of the 'milder' swearwords have a religious origin, and more specifically refer either to content considered evil by the Church or to some form of condemnation to evil (so 'damn', in reference to being damned to hell), or to things considered in some way blasphemous and therefore wrong (the British idiom 'bloody' stems from the Tudor oath 'God's Blood', which, along with similar expressions such as 'Christ's Passion', suggested that the Holy Trinity was in some way fallible and human, and thus capable of human weakness and vice; this was blasphemy according to the Church, and therefore wrong). The place of 'mid-level' swearwords is generally taken by rather crude references to excrement and bodily emissions in general (piss, shit, etc.). The 'worst' swearwords in modern society are of course sexual in nature, be they references to genitalia, prostitution or the act itself.

The route by which these ideas became sweary and inappropriate is a fairly simple, but nonetheless interesting, one to track. When the Church ruled the land, anything considered blasphemous or wrong according to its literature and world view was frowned upon at best and punished severely at worst, so words connected to these ideas were simply not broached in public. People knew what they meant, of course, and in seedy or otherwise 'underground' places, where the Church's reach was weak, these words found a home, instantly connecting them with this 'dirty' side of society. Poo and sex, of course, have always been considered 'dirty' in polite society, something kept behind closed doors (I've done an entire post on the sex aspect of this before), and are thus equally shocking and ripe for sweary material when exposed to the real world.

A quick social analysis of these themes also reveals the reasons behind the 'hierarchy' of swearwords. In the past hundred years, the role of the church in everyday western society has dropped off dramatically, and offending one's local priest (or one's reputation with him) has become less of a social concern. Among the many consequences of this (and I'm sure an aggressive vicar could list a hundred more) has been the increased prevalence of swearing in normal society, and the fall of Church-related swearwords in terms of severity; using a word once deemed blasphemous doesn't seem that serious in a secular society, and what meaning it does retain is almost anachronistic. It helps, of course, that these are among the oldest swearwords in common use, meaning that as time has gone by their original context has been somewhat lost and they have grown steadily tamer. Perhaps in 200 years my future equivalent will be able to say dick in front of his dad for this reason.

The place of excrement and sex in our society has, however, not changed much in the last millennium or two. Both are parts of everyday life that all of us experience, but that are not done in the front room or broached in polite company: rather ugly necessities and facts of life, still considered 'wrong' enough to become swearwords. However, whilst going to the loo is a rather inconvenient business that is only dirty because the stuff it produces is (literally), sex is something we enjoy and often seek out. It is, therefore, a vice, something to which we can form an addiction, and addictions are something that non-addicts find slightly repulsive when observed in addicts or regular practitioners. The Church (yes, them again) has found sex particularly abhorrent when allowed to become rampant and undignified, historically favouring rather strict, Victorian positions and execution; all of which means that, unlike poo, sex has been actively clamped down on in one way or another at various points in history. This has rarely done much to combat whatever was seen as the 'problem', merely forcing it underground in most cases, but what it has done is paint sex as something not just rather dirty but actively naughty and 'wrong'. This is partly responsible for the thrill some people get from talking dirty about and during sex, and for the whole 'you've been a naughty girl' terminology that surrounds the subject; but it is also responsible for making sexually explicit references even more underhand, even more to be kept out of polite spheres, and thus making sexually-related swearwords the most 'extreme' of all those in our arsenal.

So… yeah, that’s what I got on the subject of swearing. Did anyone want a conclusion to this or something?

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day where we might understand all of it. Yes, it would be a long way off, yes there would be stumbling blocks, but maybe, just maybe, so long as we don’t discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in 1814 a brilliant mathematician and philosopher called Pierre-Simon Laplace, responsible for underpinning vast quantities of modern mathematics and physics, published a bold essay that took this concept to its extreme. Laplace lived in the age of the 'clockwork universe', a view that held Newton's laws of motion to be sacrosanct truths and claimed that these laws caused the universe to just keep ticking over, like the mechanical innards of a clock; and, just like a clock, the universe was predictable. Just as one hour after five o'clock will always be six, presuming a perfect clock, so every state of the world follows inevitably from the state before it. Laplace took this to its logical conclusion: if some vast intellect knew the precise position of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
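Laplace's thought experiment is easy to sketch in modern terms. Here is a toy version in Python (entirely my own illustration, not anything Laplace wrote): a one-particle 'clockwork universe' in which, given the exact initial state, the entire future follows mechanically, so two runs from the same starting conditions agree forever.

```python
# A toy 'clockwork universe': one particle under constant gravity,
# stepped forward with Newton's laws. Given the exact initial state,
# the whole future is fixed -- run it twice and you get the same answer.

def evolve(pos, vel, steps, dt=0.01, g=-9.81):
    """Advance a 1-D particle by `steps` Euler time-steps, returning its positions."""
    history = []
    for _ in range(steps):
        vel += g * dt   # acceleration due to gravity
        pos += vel * dt
        history.append(pos)
    return history

run_a = evolve(pos=0.0, vel=10.0, steps=1000)
run_b = evolve(pos=0.0, vel=10.0, steps=1000)
assert run_a == run_b  # identical initial conditions -> identical futures
```

The catch, as the rest of this post explains, is the premise: nature turns out not to hand over that 'exact initial state' in the first place.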

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (both issues are still considered somewhat up for debate to this day). Among the scientific community, however, Laplace's ideas conjured up a flurry of debate; some believed entirely in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any 'vast intellect' to fully comprehend so much as a heap of sand made Laplace's argument practically pointless. Later observers would call into question some of the axioms upon which the clockwork universe was based, such as Newton's laws of motion (which break down at very high velocities, where relativity must be taken into account); but the majority of the scientific community was rather taken with the idea that they could, in principle, know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms seemed a very tempting idea, offering a delightful sense of certainty. More than anything, these scientists' work now had one overarching goal: to complete the set of laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, although the attack came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (to this day one of the bedrock principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of disorder or 'randomness' in a system; or, for want of a clearer explanation, consider a sandy beach. The grains of sand in the beach can be arranged in a vast number of different ways that all form the shape of a disorganised heap, but far fewer arrangements of those same grains will produce a giant, detailed sandcastle. Left to chance, therefore, it is far, far more likely that we will end up with a disorganised 'beach' structure than with a castle forming of its own accord (which is why sandcastles don't spring fully formed from the sea), and we say that the beach has a higher entropy than the castle. On an atomic scale, this increased likelihood of high-entropy arrangements means the universe tends to increase its overall level of entropy; if we attempt to impose order upon it (by building a sandcastle rather than waiting for one to form by chance), we must put in energy, which increases the entropy of the surroundings and thus results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
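For anyone who wants the counting made precise, Boltzmann's famous formula (a standard textbook result, not something original to this post) relates entropy to the number of microscopic arrangements W consistent with a given large-scale state, via Boltzmann's constant k_B:

```latex
S = k_B \ln W ,
\qquad
W_{\text{beach}} \gg W_{\text{castle}}
\;\Longrightarrow\;
S_{\text{beach}} > S_{\text{castle}},
\qquad
\Delta S_{\text{total}} \ge 0 .
```

In other words, the beach wins simply because there are astronomically more ways to be a heap than to be a castle, and the second law is the statement that total entropy never decreases.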

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point: a starting point of order from which things have got steadily more chaotic, for order cannot increase indefinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under 'our' everyday laws of physics, for they could not kickstart their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of a God to start everything off; it would be some years before Georges Lemaître proposed what became the Big Bang theory (with Edwin Hubble's observations of an expanding universe lending it weight), and even now we understand next to nothing about the moment of our creation.

However, this argument wasn't exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a theoretical physicist, often described as one of the inventors of quantum mechanics (work which won him a Nobel Prize). The key feature of his work here was the concept of uncertainty at the subatomic level: that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at the same time. The full explanation involves wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something's position (apologies in advance to all the physics students I end up annoying). If we want to know where something is, the tried and tested method is to look at it; this requires photons of light to bounce off the object and enter our eyes, or our hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so disturbs the particle's energy that it changes its velocity and momentum, and we cannot predict how. Thus, the more precisely we try to measure something's position, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on the point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it is simply not possible.
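For the record, the relation itself (in its standard modern form, which sharpens the original heuristic photon argument) bounds the product of the uncertainties in position and momentum by the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Squeeze the uncertainty in position towards zero and the uncertainty in momentum must blow up to compensate, and vice versa; no measurement, however cleverly engineered, gets around the inequality.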

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the notion that they couldn't know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, devoting much of his later life to arguing against the completeness of quantum mechanics and backing up his famous statement that 'God does not play dice with the universe'. But eventually the scientific world came to accept it: determinism was dead. The universe would never seem so sure and predictable again.

Leaning Right

The political spectrum (yes, politics again) has for over 200 years been the standard model for representing political views, adopted by media and laymen alike. It's not hard to see why; judging every political view or party by a measure of left-ness and right-ness makes things very simple to understand and easily all-encompassing, allowing various groups to be compared to one another without a lot of complicated analysis and wordy explanation. The idea comes from the French Revolution at the end of the 18th century; in the revolutionary parliament, factions were incredibly divisive and a source of open conflict, so, much like home and away fans at a football match, they attempted to separate themselves. Those sitting to the left of the parliamentary president were the revolutionaries, the radicals, the secular and the republican, those who had driven the waves of chaotic change that characterised the revolutionary period. As the revolution went on, however, another set of views formed, running counter to the revolutionary ideas of the left-sitters, whose holders quickly found their way into an equally tight-knit group on the right-hand side of the hall: those who supported the principles of the monarchy, the prominence of the church in French society and politics, and the concepts of hierarchy and rank. It goes without saying, of course, that those populating the right wing, as it would become known, were mainly those who stood to benefit from these principles: the rich, the upper class (well, what little of it hadn't been killed off) and the high-standing.

And, according to the Big Book of Political Clichés, right wing = bad. Right wing means uber-capitalist, aristocratic, a semi-tyrannical overseer extorting money from the poor, innocent, repressed working classes, supportive of stealing from the poor to give to the rich. The right is where the racists are to be found, the old-fashioned bigots out of touch with the real world, those who think slavery was an excellent business model, and the neo-Nazis (I realise I may be pushing the envelope on what the stereotype actually is, but you get my point).

However, when one analyses the concept of right-wingedness (far more interesting than the left, which is all the same philosophy with varying degrees of mental instability), we begin to find a disparity, something that hints that our method of classification itself may be somewhat out of date and in need of a rethink. 'Right wing' is considered to incorporate both a socio-economic position (pro-capitalist, laissez-faire and 'get the poor working', in very broad terms) and a social-equality one (racism, sexism, discrimination and the like, akin to Nazism), and nowadays the two simply do not align with the same demographic any more.

I mean, consider it purely from the 'who votes for them' angle. In Britain, the (nowadays fairly nominally) right-leaning Conservative Party finds its power base in the country's richer areas, such as the Home Counties, and among rich and successful capitalists, since their quality of life can be put down to the capitalist model that Conservatism is so supportive of (and the benefits they receive from taxation are relatively small compared to the help it provides poorer demographics). However, far-right parties and political groups such as the British National Party (BNP) and English Defence League (EDL) tend to seek support from the opposite end of the social ladder: the young, working-class, white skinhead male sphere of existence. Both draw support from a predominantly white base, but beyond that there is little connection.

This is not something solely prevalent today; the Nazi Party are often held up as the epitome of right wing for their vehemently racist 'far-right' policies, but we often seem to forget that 'Nazi' is just an abbreviation of the German for 'National Socialist German Workers' Party'. The party's very title indicates that one of its key appeals was to 'the German workers', much like that of the communists of the time. Although its main support was eventually found in the middle and lower-middle classes (the upper end of the social ladder considering Hitler a poor upstart who would never make anything of himself, demonstrating exactly how out of touch they were with the real world), the Nazi economic policy that put Germany through an astonishing economic turnaround between 1933 (when the Nazis took power) and 1939 was centred around much the same 'public works and state-directed business' model that Franklin D. Roosevelt had recently adopted to lead the USA out of the Great Depression. Many socialists and communists would doubtless have approved, had any of them not been locked up, beaten up or on their way to forced labour camps. In terms of socio-economic policy, then, the Nazis were arguably less 'National' and more 'Socialist'.

We are, then, presented with a strange disparity between the economic-policy-based 'right' and the racism-centric 'far right'. The two were originally linked by the concepts of nationalism and traditionalism; from the earliest days of the political spectrum, the right wing has always been very much supportive of a return 'to the old ways', of thinking nostalgically of the past (usually because there was less left-wingedness in it) and of the idea that the modern world is getting steadily worse in the name of 'progress'. One issue identified in this vein is immigration: foreign-born workers entering the country and 'stealing our jobs' (et cetera), in this view devaluing the worthiness of one's own country. This has made nationalism and extreme patriotism stereotypically right-wing traits, along with the associated view that 'my country is better than yours'. This basic sense of the superiority of certain races is the key rhetoric of 'Social Darwinism', a concept embraced by the Nazis (among others), which suggests that Darwin's 'survival of the fittest' principle should be applied to the various races of humanity too, and that the 'better' races have a right of superiority over the 'lesser' ones (traditionally ethnic minorities in the west, such as Middle Eastern and black people); this too is a feature of many far-right viewpoints.

But the field has changed since those ideas were pioneered; for one thing, the modern world is a lot easier to traverse than before, meaning that those rich enough to afford it can easily see the whole globe in all its glorious diversity and wonder for themselves, and our increasingly diverse western society has seen a significant number of ‘minorities’ enter the top echelons of society. It is also true that using cheap, hard-working labour from immigrants rather than from workers with trade unions makes good economic (if often not moral) sense for large corporations, meaning that the ‘rich capitalist’ demographic who are so supportive of conservative economic policy are no longer the kind of people who worry about others ‘stealing our jobs’. This viewpoint has migrated to the opposite end of the social spectrum, the kind of people who can genuinely see their jobs being done by ‘foreigners’ and get jealous and resentful about it; it is these people who form the support base for right-wing populists and think the EDL know what they’re talking about, and in many ways that is more worrying. The rich having dangerous, extreme views is a serious problem, but there are comparatively few of them and democracy entails just one vote each. The number of young, angry, working class white men is far larger, and it is this demographic that won the BNP two seats in the European Parliament at the last European election. Will this view get more or less prevalent as time goes on? I would like to think the latter, but maybe we’ll just have to wait and see…

“If I die before I wake…”

…which I might well do when this post hits the internet, then I hope somebody will at least look down upon my soul & life’s work favourably. Today I am going to be dealing with the internet’s least favourite topic, an idea adherence to which will get you first derided and later inundated with offers to go and be slaughtered in your bed, a subject that should be taboo for any blogger looking not to infuriate everybody: that of religion.

I am not a religious person; despite a nominally Anglican upbringing my formative years found most of my Sundays occupied on the rugby pitch, whilst a deep interest in science tended to form the foundations of my world beliefs- I think (sometimes) to some personal detriment. This is a pattern I see regularly among those people I keep as company (which may or may not say something about my choice of friends)- predominantly atheists with little or no religious upbringing who tend to steer clear of religion and its various associated features wherever possible. However, where I find I differ from them tends to be when the subject is broached in the presence of a devoutly Christian friend of mine; whilst I tend to leave his beliefs to himself and try not to spark an argument, many others I know see a demonstration of his beliefs as a cue to start on a campaign of ‘ha ha isn’t your world philosophy stupid’, and so on. I find these attacks baffling and a little saddening more than anything else, so I thought that I might take this opportunity to take my usual approach and try to analyse the issue.

First up is a fact that most people are aware of even if it hasn’t quite made the jump into an articulate thought yet; that every religion is in fact two separate parts. The first of these can be dubbed the ‘faith’ aspect; the stories, the gods, the code of morals & general life guidelines and such, all of the bits that form the core of a system of beliefs and are, to a theist, the ‘godly’ part of their religion. The second can be labelled the ‘church’ aspect; this is the more man-made, even artificial, aspect of the religious system, and covers the system of priesthood (or equivalent) for each religion, their holy buildings, the religious leaders and even people’s personal interpretation of the ‘faith’ aspect. Holy books, such as the Bible or Torah, fall somewhere in between (Muslims believe, for example, that the Qur’an is literally the word of Allah, as revealed through the prophet Muhammad), as do the various prayers and religious music. In Buddhism, these two aspects are known as the Dharma (teachings) and Sangha (community), and together with the Buddha form the ‘three jewels’ of the religion. In some religions, such as Scientology (if that can technically be called a religion), the two aspects are so closely entwined as to be hard to separate, but they are still distinct aspects that should be treated separately. The ‘faith’ aspect of religion is, in most respects, the really important one, for it is this that actually formulates the basis of a religion; without a belief system, a church is nothing more than a place where people go to shout their views at those who inexplicably turn up. A religion’s ‘church’ aspect is its organised divisions, and exists for no greater or lesser purpose than to spread, cherish, protect and correctly interpret the word of God, or other parts of the ‘faith’ aspect generally. This distinction is vital when we consider how great a difference there can be between what somebody believes and what another does in the same name.

For example, consider the ultra-fundamentalist Taliban currently fighting their jihad (the word does not, on an unrelated note, technically translate as ‘holy war’, and the two should not be thought of as synonymous) in Afghanistan against the USA and other western powers. Their personal interpretation of the Qur’an and the teachings of Islam (their ‘church’ aspect) has led them to believe that women do not deserve equal rights to men, that the western powers are ‘infidels’ who should be purged from the world, and that they must use force and military intervention to defend Islam from said infidels- hence why they are currently fighting a massive war that is getting huge numbers of innocent civilians killed and destroying their faith’s credibility. By contrast, there are nearly 2 million Muslims currently living in the UK, the vast majority of whom do not interpret their religion in the same way and are not currently blowing up many buildings- and yet they still identify as Islamic and believe in, broadly speaking, the same faith. To pick a perhaps more ‘real world’ example, I’m sure that the majority of Britain’s Catholic population steadfastly disagree with the paedophilia practised by some of their Church’s priests, and that a certain proportion also disagree with the Pope’s views on the rights of homosexuals; and yet they are still just as Christian as their priests, are devout believers in the teachings of God & Jesus and try to follow them as best they can.

This, I feel, is the nub of the matter; that one can simultaneously be a practising Christian, Muslim, Jew or whatever else and still be a normal human being. Just because your vicar holds one view doesn’t mean you hold the same, and just because some people choose to base their entire life around their faith does not mean that a person must be defined by their belief system. And, returning to the subject of the ridicule many practising theists suffer, just because the ‘church’ aspect of a religion does something silly doesn’t mean all practitioners of it deserve to be tarred with the same brush- or that their view of the world should even matter to you as you enjoy life in your own way (unless of course their belief actively impedes you in some way).

I feel like I haven’t really got my point across properly, so I’ll leave you with a few links that I think illustrate quite well what I’m trying to get at. I only hope that it will help others find a little more tolerance towards those who have found a religious path.

And sorry for this post being rather… weird.

The Seven Slightly Harmful Quite Bad Things

The Seven Deadly Sins are quite an odd thing within western culture; a list of traits ostensibly meant to represent the worst features of humanity, but that is instead regarded as something of a humorous diversion, and one, moreover, that a large section of the population have barely heard of. The sins of wrath (related to the older form ‘wroth’, and often represented simply as ‘anger’), greed (or ‘avarice’), sloth (laziness), pride, lust, envy and gluttony were originally not meant as definite sins at all. Rather, the Catholic Church, who came up with them, called them the seven Capital Vices (their religious origin also leads to them being referred to as ‘cardinal sins’), and rather than being mere sins in and of themselves they were representative of the human vices from which all sin was born. The Church’s view on sin is surprisingly complex- all sinful activity is classified either as venial (bad but relatively minor) or mortal (meant to destroy the inner goodness of a person and lead them down a path of eternal damnation). Presumably the distinction was intended to prevent all sinful behaviour from being labelled a straight ticket to hell, but this idea may have been lost in a few places over time, as might (unfortunately) be expected. Thus, holding a Capital Vice did not mean that you were automatically a sinful person, but that you were more naturally predisposed to commit sin and should try to exorcise it from yourself. All sin falls under the jurisdiction (for want of a better word) of one of the vices, hence the confusion, and each Deadly Sin had its own counterpart Heavenly Virtue; patience for wrath, charity for greed, diligence for sloth, humility for pride, chastity for lust (hence why Catholic priests are meant to be chaste), kindness for envy and temperance for gluttony.
To a Catholic, therefore, these fourteen vices and virtues are the only real and, from a moral perspective, meaningful traits a person can have, all others being merely offshoots of them. Pride is usually considered the most severe of the sins, in that it challenges one’s place in comparison to God, and is also considered the source of the other six; Eve’s original sin was not, therefore, the eating of the fruit from the forbidden tree, but the pride and self-importance that led her to challenge the word of God.

There have been other additions, or suggestions of them, to this list over the years; acedia, a neglect of one’s duty based on melancholy and depression, was seen as symptomatic of a refusal to enjoy God’s world, whilst vainglory (a kind of boastful vanity) was incorporated under pride in the 14th century. Some more recent scholars have suggested the addition of traits such as fear, superstition and cruelty, although the church would probably put the former two under pride, in that one is not trusting in God to save you, and the latter under pride in one’s position and the exercising of power over another (as you can see, ‘pride’ can be made to cover a whole host of things). I would also argue that, whilst the internet is notoriously loath to accept anything the Christian church has ever done as being a remotely good idea, there is a lot we can learn by examining the list. We all do bad things, that goes without saying, but that does not mean that we are incapable of trying to make ourselves into better people, and the first step along that road is a proper understanding of precisely where and how we are flawed as people. Think of a couple of your own acts, perhaps one you consider good behaviour and another a dubiously moral incident, and try to place the root cause of each under one of those fourteen traits. You may be surprised at what you can find out about yourself.

However, I don’t want to spend the rest of this post on a moral lesson, for there is another angle I wish to consider with regard to the Seven Deadly Sins- that they need not be sins at all. Every one of the capital vices is present to some degree within us, and can be used as justification for a huge range of good behaviour. If we do not allow ourselves to be envious of our peers’ achievements, how can we ever become inspired to achieve such heights ourselves- or, to pick a perhaps more appropriate example, if we are not envious of the perfection of the Holy Trinity, how can and why should we aspire to be like them? Without the occasional expression of anger and wrath, we may find it impossible to convey the true emotion behind what we care about, to enable others to care also, and to ensure we can appropriately defend what we care for. How could the Church ever have attempted to retake the Holy Land without the wrath required to act and win decisively? Greed too acts as a driving force for our achievements (can the church’s devotion to its vast collection of holy relics not be labelled as such?), and the occasional bout of gluttony and sloth is often necessary to best aid our rest and recuperation, enabling us to continue to act as good, kind people with the emotional and physical strength to bear life’s burdens. Lust is often necessary as a natural predisposition to love, surely a virtuous trait if ever there was one, whilst a world consisting solely of chaste, ‘proper’ people would clearly not last very long. And then there is pride, the deadliest and also the most virtuous of vices. Without a sense of pride, how can we ever have even a modicum of self-respect, how can we ever recognise what we have done well and attempt to emulate it, and how can we ever feel any emotion that makes us seem like normal human beings rather than cold, calculating, heartless machines?

Perhaps, then, the one true virtue that we should apply to all of this is that of temperance. We all do bad things and we may all have a spark of the seven deadly sins inside us, but that doesn’t necessarily mean that the incidences of the two need always coincide. Sure, if we just embrace our vices and pander to them, the world will probably not end up a terribly healthy place, and I’m sure that my description of the deadly sins is probably stretching the point as to what they specifically meant in their original context. But not every dubiously right thing you do is entirely terrible, and a little leeway here and there can go an awfully long way to making sure we don’t end up going collectively mental.

We Will Remember Them

Four days ago (this post was intended for Monday, when it would have been yesterday, but I was out then- sorry) was Remembrance Sunday; I’m sure you were all aware of that. On that day we acknowledged the dead, recognised the sacrifice they made in service of their country, and reflected upon the tragic horrors that war inflicted upon them and our nations. We gave our thanks to those of whom it is said: “for your tomorrow, we gave our today”.

However, as the greatest wars ever to rack our planet have slipped towards the edge of living memory, a few dissenting voices have been raised about the place of the 11th of November as a day of national mourning and remembrance. They are not loud complaints, as anything that may be seen as an attempt to sully the memories of those who ‘laid so costly a sacrifice upon the altar of freedom’ (to quote the Bixby letter, as read in Saving Private Ryan) is unsurprisingly lambasted and vilified by the majority, but it would be wrong not to recognise that there are some who question the very idea of Remembrance Sunday in its modern incarnation.

‘Remembrance Sunday,’ so goes the argument, ‘is very much centred around the memories of those who died: recognising their act of sacrifice and championing the idea that “they died for us”.’ This may partly explain why the Church has such strong links with the ceremony; quite apart from religion being approximately 68% about death, the whole concept of sacrificing oneself for the good of others is a direct parallel to the story of Jesus Christ. ‘However,’ continues the argument, ‘the wars that we of the old Allied Powers chiefly commemorate are ones which we won, and had we lost them then to argue that the dead gave their lives in defence of their realm would make their sacrifice seem wasted- thus, this style of remembrance is not exactly fair. Furthermore, by putting the date of our symbolic day of remembrance on the anniversary of the end of the First World War, we invariably make that conflict (and WWII) our main focus of interest. But it is widely acknowledged that WWI was a horrific, stupid war, in which millions died for next to no material gain and which is generally regarded as a terrible waste of life. We weren’t fighting for freedom against some oppressive power, but because all the European top brass were squaring up to one another in a giant political pissing contest, making the death of some 20 million people the result of little more than a game of satisfying egos. This was not a war in which “they died for us” is exactly an appropriate sentiment.’

Such an argument is a remarkably good one, and does call into question the very act of remembrance itself. It is admittedly harder to make with more recent wars- the Second World War was a necessary conflict if ever there was one, and it cannot be said that those soldiers currently fighting in Afghanistan are not trying to make a deeply unstable and rather undemocratic part of the world a better place to live in (I said trying). However, this doesn’t change the plain and simple truth that war is a horrible, unpleasant activity that we ought to be trying to get rid of wherever humanly possible, and remembering soldiers from years gone by as if their going to die in a muddy trench was absolutely the most good and right thing to do does not seem like the best way of going about this- it reminds me of, in the words of Wilfred Owen, “The old Lie: Dulce et decorum est/Pro patria mori”.

However, that is not to say that we should not remember the deaths and sacrifices of those dead soldiers, far from it. Not only would ignoring them be hideously insensitive to both their memories and families (my family was fortunate enough not to suffer any war casualties in the 20th century), but it would also suggest to soldiers currently fighting that their fight is meaningless- something they are definitely not going to take well, which would be rather inadvisable since they have all the guns and explosives. War might be a terrible thing, but that is not to say that it doesn’t take guts and bravery to face the guns and fight for what you believe in (or, alternatively, what your country makes you believe in). As deaths go, it is at least honourable, if not exactly dulce et decorum.

And then, of course, there is the whole point of remembrance, and indeed history itself, to remember. The old adage about ‘study history or else find yourself repeating it’ still holds true, and without learning lessons from the past we stand very little chance of improving on our previous mistakes. Without the great social levelling and anti-imperialist effects of the First World War, women may never have got the vote, jingoistic ideas about empires and the glory of dying in battle may still abound, America may (for good or ill) not have made enough money out of the war to become the economic superpower it is today, and wars may, for many years more, have continued to waste lives through persistent use of outdated tactics on a modern battlefield with modern weaponry, to name but the first examples to come into my head- so to ignore the act of remembrance is not just disrespectful, but downright foolish.

Perhaps, then, the message to learn is not to ignore the sacrifice that those soldiers have made over the years, but rather to remember what they died to teach us. We can argue for all of eternity as to whether the wars that led to their deaths were ever justified, but we can all agree that the concept of war itself is a wrong one, and that the death and pain it causes are the best reasons to pursue peace wherever we can. This, then, should perhaps be the true message of Remembrance Sunday; that over the years, millions upon millions of soldiers have dyed the earth red with their blood, so that we might one day learn the lessons that enable us to enjoy a world in which they no longer have to.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms) from SOPA to ACTA has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they also provoked a lot of debate, ranging in intelligence from the intellectual to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of Stone Age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival, and the chance that the next meal would be coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield for example, this design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and such; through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer their neighbours, gaining more power and thus enabling them to take on larger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for a country starts not to be as good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the middle of the 15th century, this all changed. Public literacy was on the up and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social changes of the following few hundred years, as a less feudal and more merit-based social system became established, with proper professions springing up as opposed to general peasantry; more people thus had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in by running off copies of your book on their machine, profiting from your labour. This can significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material that could not only add to a nation’s happiness but also have great practical use are a valuable resource, and that they should be doing what they can to promote the production of that material, if only to save having to put in the large investment of time and resources themselves. So it makes sense to encourage the production of this material by ensuring that people have a financial incentive to produce it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or who is responsible for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of artificial intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument that is being thrown at a steadily wider range of high-level science nowadays (cloning, gene analysis and editing, even the synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to ‘play god’ and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has evolved over millions upon millions of years, slowly sculpted and built upon by nature to find the optimal solution for self-preservation and general well-being- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, to do the next big thing before everybody else, means that nobody is stopping to think of the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it probably hasn’t helped matters that it is typically put around by the Church- never noted for getting on particularly well with scientists (they just won’t let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any of the myriad other sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the biggest other dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? So far, only a handful of people have ever been killed by robots (the first during an industrial accident at a car manufacturing plant), but if such an event were ever to occur with a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being ‘switched off’? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument that has been made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind are ever able to equal or even better our mental faculties, then they could one day overthrow us as a race. This is a very real issue when confronting the stereotypical image of a war robot- an invincible metal machine capable of wanton destruction on a par with a medium sized tank, and easily able to repair itself and make more of itself. It’s an idea that is reasonably unlikely ever to become real, but it raises another one- one that is more likely to happen, more likely to build unnoticed, and far, far more scary. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. When compared to a machine’s ability to churn out hundreds of copies of itself simply from a blueprint and a design, the human reproductive system suddenly looks very fragile and inefficient. When compared to tough, hard, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly, the organic world doesn’t seem quite so amazing, and certainly seems more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.