A History of Justified Violence

The Crusades rank among the most controversial wars in the history of mankind, and they go up against some pretty stiff competition (WWI, the Boer War, and every time the English have fought their Celtic neighbours, to name but a few). Spanning two hundred years, they have been judged very differently by different ages: at the time they were thought a holy mission from God, later a noble but ultimately flawed idea, and now many historians take the view that the crusaders were little more than a bunch of murdering rapists plundering their way across the Holy Land. In fact, throughout history only one thing has been agreed on: that they were an abject failure.

The story of how the crusades came to be is a rather twisted one. During the early years of the second millennium AD, Christianity and Islam were spoiling for a fight that has, in some respects, yet to end. Christianity had a head start and had taken firm root in Europe, but over the preceding few centuries Islam had been founded and had spread across the world like wildfire. Zealots, often with a Qur’an in one hand and a sword in the other, had spread the word of Allah and Muhammed across the Middle East, Turkey, North Africa and pretty much all of Spain and Portugal south of Barcelona. Indeed, they probably would have gone further (given their immense technological, financial and military clout), had Islam as a religion not descended into infighting with the creation of the Sunni and Shia denominations and the collapse of the unified caliphate. Nevertheless, many Islamic empires were vast, incredibly powerful, and a serious force to be reckoned with.

The rise of Islam was an interesting phenomenon, unique in the world at the time, because of the way it was backed by the principle of jihad. Nowadays, the word tends to be taken to mean ‘holy war’, which is misleading: jihad refers to a Muslim’s struggle (‘struggle’ being the literal meaning of the word) against non-Muslims, in both a spiritual and a worldly capacity. This can be taken to refer to a literal physical struggle against the enemies of Islam, and it was under this guidance that Muslim armies swept across the world under the banner of their religion. This was a strange concept to Christian nations, for whilst they had certainly fought before, they had never done so for religious reasons. The Bible’s main messages are, after all, of peace and love for thy neighbour, and the Ten Commandments even state explicitly that ‘You shall not kill’. War was many things, but Christian was not, up until this point, among them.

However, the success of the Islamic approach, demonstrating just how powerful war and faith could be when they went hand-in-hand, led the Church to reconsider, and added fuel to the fire of an already heated debate regarding whether the use of violence was ever justifiable to a Christian. There was already enough material to provoke argument; particularly in the Old Testament, God is frequently seen dispensing his wrath upon sinners (including one apparent count of genocide and a systematic cleansing of pretty much the entire world, among other things) in direct contravention of his son’s teachings. Then there was the question of how one was otherwise meant to fight back against an invading force; ‘turn the other cheek’ is all very well, but loses its attractiveness when faced with someone attempting to kill you. Some schools of thought held that sin could be justified if it prevented a greater evil from occurring; others stuck to the old view, claiming that violence could never be justified and only begat, or was begotten by, other violent acts.

It should also be remembered that the medieval Church was a distinctly political entity, and knew perfectly well that to attempt to tell, say, the vastly powerful Holy Roman Empire that it couldn’t declare war was just asking for trouble. Indeed, in later years the HRE even set up its own puppet papacy of ‘antipopes’, allowing it to excommunicate whoever it wanted and thus claim its wars were righteous.

However, the real trump card for the ‘just war’ camp was Jerusalem. The city of Jesus’ Crucifixion and the capital of Israel under the rule of King David (it is worth remembering that Mary’s spouse Joseph was of the House of David, which is why he returned to Bethlehem, the city of David, when the Roman census was called), thought by many to be the place of Christ’s hidden tomb, it was the holiest city in the Christian (and Jewish) world, as even the Vatican would admit. However, it was also where, according to Islamic scripture, Muhammed undertook ‘the Night Journey’, in which he travelled to Jerusalem on a winged mule and met with several prophets before ascending to speak directly with God (apparently the main subject of discussion was an argument between God and the prophet Musa concerning how many prayers per day were required, with poor Muhammed acting as a messenger between the two. I would stress, however, that I am not especially knowledgeable about Muslim scripture; if anyone wants to correct me on this, feel free to do so in the comments). This made it one of the holiest cities in the Muslim world, and Islamic forces had captured it (and the rest of Palestine to boot) in 636. The city had changed hands several times since then, but it had remained Muslim. For a long time this hadn’t been too much of a problem, but come the 11th century the Muslim rulers started acting against the Christian population. The Church of the Holy Sepulchre was destroyed, the Byzantine Empire (which, although Orthodox, was still technically on Catholic Europe’s side) was getting worn down by near-constant war against its Muslim neighbours, and Christian pilgrims started being harassed on their way to Jerusalem.

It was this that really tipped the Catholic Church’s ‘just war’ debate over the edge, and the Church eventually adopted the stance that war could be justified in the eyes of God if it was pursued in His name (a concept similar in nature to the jihad principle of justified warfare against the enemies of one’s religion). This was a decision made with one thing in mind: to win back the Holy Land from the Saracen infidels (Saracen being the catch-all name given by Catholics to the Muslim occupiers of Jerusalem). To do that, the Church needed an army. To get an army, they called a crusade…

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information; memory stores both data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.
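The ‘pattern of off and on’ point is easy to see for yourself. This little Python sketch (my own illustration, not anything from the earlier posts) peeks at the bit patterns behind a character and a number:

```python
# Every piece of data in memory is ultimately a pattern of on/off switches.
# Here we render those switches as 1s and 0s.

def bits(n, width=8):
    """Show an integer as a string of 'on' (1) and 'off' (0) bits."""
    return format(n, '0{}b'.format(width))

# The letter 'A' is stored as the number 65, i.e. the pattern 01000001
print(bits(ord('A')))   # -> 01000001

# The same eight switches can equally well mean the number 65 itself;
# what a bit pattern 'means' depends entirely on what reads it.
print(bits(65))         # -> 01000001
```

The same eight switches serve as a letter in one context and a number in another, which is exactly why memory can hold both data and instructions.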

OK? Good, we can get on then.

Programming languages are a way of translating the computer’s medium of information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press have symbols on them, that these symbols mean something to us, or that they are built to reproduce the same symbols on the monitor when we press them; but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, their words mean absolutely nothing at the time of typing. The key thing is what happens when that text is committed to memory, for it is here that the programming language kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts its words into machine instructions. Built into each is a list of recognised ‘words’, each word having a corresponding, but entirely different, string of binary data associated with it representing the appropriate set of ‘ons and offs’ that will get the computer to perform the correct task. This works in one of two ways: with an ‘interpreter’, the program is stored just as words and is converted to ‘machine code’ by the interpreter as it is accessed from memory; but the more common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
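The interpreter/compiler distinction above can be sketched in miniature. This is purely my own toy example, for a made-up two-instruction ‘language’, and nothing like how a real toolchain works, but it shows the core trade-off: an interpreter re-decodes the words every run, while a compiler decodes them once, up front:

```python
# A toy 'language' with two instructions: ('add', n) and ('mul', n).
PROGRAM = [('add', 3), ('mul', 4), ('add', 1)]

def interpret(program, value=0):
    """Interpreter: re-reads and decodes each instruction on every run."""
    for op, n in program:
        if op == 'add':
            value += n
        elif op == 'mul':
            value *= n
    return value

def compile_program(program):
    """'Compiler': decode the instructions once, into a ready-to-run
    function, so later runs skip the decoding step entirely."""
    steps = []
    for op, n in program:
        if op == 'add':
            steps.append(lambda v, n=n: v + n)
        else:
            steps.append(lambda v, n=n: v * n)
    def run(value=0):
        for step in steps:
            value = step(value)
        return value
    return run

compiled = compile_program(PROGRAM)
print(interpret(PROGRAM))  # -> 13  (0 + 3, * 4, + 1)
print(compiled())          # -> 13  (same answer, decoding already done)
```

Once `compile_program` has run, the original list of ‘words’ is no longer needed, which mirrors why you can delete the written file after compiling.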

That is, basically, how computer programs work. But there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets it up, determining exactly which gate to use and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetized needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises CPU performance and translates programs from memory to screen. The precise method by which this latter function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise, or are better at dealing with, different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital things that modern OSs do on our home computers are the ones that at first glance seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not, theoretically, be possible for a computer to multi-task; the sheer concept of playing Minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, the kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to the hard disk (where it isn’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly); it must also cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
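The scheduling trick above can be sketched in a few lines. In this toy of mine, each ‘process’ is a Python generator and a simple round-robin scheduler gives each one a turn in rotation; real OS schedulers are vastly more sophisticated, but the illusion of simultaneity works the same way:

```python
# A toy round-robin scheduler: one 'CPU' interleaves several 'processes'
# by giving each a tiny time slice in turn.
from collections import deque

def process(name, steps):
    for i in range(steps):
        # yield = 'my time slice is up, let someone else run'
        yield '{} step {}'.format(name, i + 1)

def scheduler(processes):
    ready = deque(processes)           # the run queue
    log = []
    while ready:
        current = ready.popleft()      # pick the next process
        try:
            log.append(next(current))  # run it for one time slice
            ready.append(current)      # back to the end of the queue
        except StopIteration:
            pass                       # process finished; drop it
    return log

trace = scheduler([process('boot', 3), process('minesweeper', 3)])
print(trace)
# The two tasks interleave: boot, minesweeper, boot, minesweeper, ...
```

Neither ‘process’ ever runs at the same time as the other, yet the trace shows them making progress side by side, which is all multi-tasking on a single CPU really is.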

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Where do we come from?

In the sport of rugby at the moment (don’t worry, I won’t stay on this topic for too long, I promise), there is rather a large debate going on, one that has been echoing around the game for at least a decade now but that seems to be coming ever closer to the fore. This is the issue of player nationality, namely the modern trend for players to turn out for sides other than those of their birth. The IRB’s rules currently state that a player is eligible to play for a country if he has lived there for the past three years, or if he, either of his parents or any of his grandparents was born there (and so long as he hasn’t already played for another international side). This state of affairs has allowed a myriad of foreigners, mainly South Africans (Mouritz Botha, Matt Stevens, Brad Barritt) and New Zealanders (Dylan Hartley, Thomas Waldrom, Riki Flutey), as well as a player all of whose family have played for Samoa (Manu Tuilagi), to play for England in recent years. In fact, Scotland recently played host to an almost comic state of affairs as both the SRU and the media counted down the days until electric Dutch wing Tim Visser, long hailed as the solution to the Scots’ try-scoring problems, was eligible to play for Scotland on residency grounds.

These rules were put in place after the ‘Grannygate’ scandal of the early noughties. Kiwi coach Graham Henry, hailed as ‘The Great Redeemer’ by Welsh fans after turning their national side around and leading them to eleven successive victories, had ‘found’ a couple of New Zealanders (Shane Howarth and Brett Sinkinson) with Welsh grandparents to help bolster his side. However, it wasn’t long before a bit of investigative journalism found out that there was no Welsh connection whatsoever, and that the whole thing had been a fabrication by Henry and his team. Both players were stopped from playing for Wales, and amidst the furore the IRB brought in its new rules. Sinkinson later qualified on residency and won six further caps for the Welsh. Howarth, having previously played for New Zealand, never played international rugby again.

It might seem odd, then, that this issue is still considered a scandal, despite the IRB having supposedly ‘sorted it out’. But it remains hugely contentious, dividing those who think that Mouritz Botha’s thick South African accent should not be allowed in a white shirt from those who point out that he apparently considers himself English and has as much right as anyone to compete for the shirt. Nor is this just an issue in rugby: during the Olympics, there was a decent amount of criticism of the presence of ‘plastic Brits’ in the Great Britain squad (many of them sporting strong American accents), something that has been around since the days of hastily anglicised South African Zola Budd. In some ways athletics is even more dodgy, as athletes are permitted to change the country they represent (take Bernard Lagat, who originally represented his native Kenya before switching to the USA).

The problem is that nationality is not a simple black-and-white dividing line, especially in today’s multicultural, well-travelled world. Many people across the globe now hold dual nationality and a pair of legal passports, and it would be churlish to suggest that they ‘belong’ any more to one country than another. Take Mo Farah, for example: one of Britain’s heroes after the Games and a British citizen, despite being born in, and having all his family come from, Somaliland (technically speaking an independent, semi-autonomous state, but internationally recognised only as part of Somalia). And just as we Britons exalt the performance of ‘our man’, in his home country the locals are equally ecstatic about the performance of a man they consider Somali, whatever country’s colours he runs in.

The thing is, Mo Farah, to the British public at least, seems British. We are all used to our modern, multicultural society, especially in London, so his ethnic origin barely registers as ‘foreign’ any more, and he has developed a strong English accent since he first moved here aged 9. On the other hand, both of Shana Cox’s parents were born in Britain, but she was raised on Long Island and has a notable American accent, leading many to dub her a ‘plastic Brit’ after she led off the 4 x 400m women’s relay team for Great Britain. In fact, you would be surprised how important accent is to our perception of someone’s nationality, as it is the most obvious indicator of where a person’s development as a speaker, and as a person, occurred.

An interesting, and also quite sad, demonstration of this involves a pair of Scottish rappers I read about in the paper a few years ago (and whose names I have forgotten). When they first auditioned as rappers, they did so in their normal Scots accents, and were soundly laughed out of the water. Seriously, their interviewers could barely keep a straight face as they rejected them out of hand purely on the sound of their voices. Their solution? To adopt American accents, not just for their music but for their entire lives. They rapped in American, spoke in American, swore, drank, partied and had sex all in these fake accents. People they met were often amazed by the perfect Scottish accents these all-American music stars were able to impersonate. And it worked, allowing them to break onto the music scene and pursue their dreams as musicians, although it exacted quite a cost. At home in Scotland, one of them asked someone at the train station about the timetable, and was initially unable to understand the slight hint of distaste he could hear in the reply’s homely Scots lilt; it was about a minute before he realised he had asked the question entirely in his fake accent.

(Interestingly, Scottish music stars The Proclaimers, who the rappers were unfavourably compared to in their initial interview, were once asked about the use of their home accents in their music as opposed to the more traditional American of the music industry, and were so annoyed at the assumption that they ‘should’ be singing in an accent that wasn’t theirs that they even made a song (‘Flatten all the Vowels’) about the incident.)

This story highlights perhaps the key issue in the nationality debate: that what we perceive as where someone is from will often not tell us the whole story. It is not as simple as ‘oh, so-and-so is clearly an American, why are they running for Britain?’, because what someone ‘clearly is’ and what they actually are can often be very different. At the very first football international, England v Scotland, most of the Scottish team were selected on the basis of having Scottish-sounding names. We can’t just judge people on what first meets the eye.