Connections

History is a funny old business; an endless mix of overlapping threads, intermingling stories and repeating patterns that makes fascinating study for anyone who knows where to look. However, the part of it that I enjoy most involves taking the longitudinal view on things, linking two seemingly innocuous, or at least totally unrelated, events and following the trail of breadcrumbs that allows the two to connect. Things get even more interesting when the relationship is causal, so today I am going to follow the trail of one of my favourite little stories: how a single storm was, in the long run, responsible for the Industrial Revolution. This is especially surprising given that the storm in question occurred in 1064.

This particular storm occurred in the English Channel, and doubtless blew many ships off course, including one that had left from the English port of Bosham (opposite the Isle of Wight). Records don’t say why the ship was making its journey, but what was definitely significant was its passenger: Harold Godwinson, Earl of Wessex and possibly the most powerful person in the country after King Edward the Confessor. He landed (although that might be overstating the dignity and intention of the process) at Ponthieu, in northern France, and was captured by the local count, who turned him over to his liege once the latter, a man with a famed temper, heard of his visitor: the liege in question was Duke William of Normandy, or ‘William the Bastard’ as he was also known (he was the illegitimate son of the old duke and a tanner’s daughter). Harold’s next move was (apparently) to accompany his captor to a battle just up the road in Brittany. He then tried to negotiate his freedom, which William granted on the condition that he swear an oath that, were the childless King Edward to die, he would support William’s claim to the throne (England at the time operated a sort of elective monarchy, in which prospective candidates were chosen by a council of nobles known as the Witenagemot). According to the Bayeux tapestry, Harold took this oath and left France; but two years later King Edward fell into a coma. With his last moment of consciousness before what was surely an unpleasant death, he apparently gestured to Harold, standing by his bedside. This was taken by Harold, and the Witenagemot, as the appointment of a successor, and Harold accepted the throne. This understandably infuriated William, who considered it a violation of Harold’s oath, and he subsequently invaded England. His timing coincided with another claimant, Harald Hardrada of Norway, deciding to push his own case for the throne, and in the resulting chaos William came to the fore. He became William the Conqueror, and the Normans controlled England for the next several hundred years.

One of the things that the Normans brought with them was a newfound view on religion; England was already Christian, but the two Churches’ views on certain subjects differed slightly. One such subject was serfdom, a form of slavery that was very popular among the feudal lords of the time. Serfs were basically slaves, in that they could be bought or sold as commodities; they were legally bound to the land they worked, and were thus traded and owned by the feudal lords who owned that land. In some countries, it was not unusual for one’s lord to change overnight after a drunken card game; Leo Tolstoy lost most of his land in just such an incident, but that’s another story. It was not a good existence for a serf, completely devoid of any form of freedom, but for a feudal lord it was great: cheap, guaranteed labour and thus income from one’s land, with no real risk involved. However, the Norman church’s interpretation of Christianity was morally opposed to the idea, and serfs began to be replaced by free peasants as a source of agricultural labour. A free peasant was not tied to the land but rented it from his liege, along with the right to use various pieces of land & equipment; the feudal lord still had an income, but if he wanted goods from his land he had to buy them from his peasants, and there were limits on the control he had over them. If a peasant so wished, he could pack up and move to London or wherever, or join a ship’s crew; whatever he wanted in his quest to make his fortune. The vast majority were never faced with this choice as a reasonable idea, but the principle was important- a later Norman king, Henry I, also reorganised the legal system and introduced the role of sheriff, producing a society based around something almost resembling justice.

[It is worth noting that the very last serfs were not freed until the reign of Queen Elizabeth I in the 1500s, and that subsequent British generations during the 18th century had absolutely no problem with trading in black slaves, which they justified partly by never actually seeing the slaves and partly by taking the view that black people weren’t proper humans anyway. We can be disgusting creatures.]

A third Norman king further enhanced this concept of justice, even if completely by accident. King John was the younger brother of inexplicable national hero King Richard I, aka Richard the Lionheart or Coeur-de-Lion (seriously, the dude was a Frenchman who visited England twice, both times to raise money for his military campaigns, and later levied the largest ransom in history on his people to secure his release from the Holy Roman Emperor- how he came to national prominence I will never know), and John was unpopular. He levied heavy taxes on his people to pay for costly and invariably unsuccessful military campaigns, and whilst various incarnations of Robin Hood have made him seem a lot more malevolent than he probably was, he was not a good king. He was also harsh to his people, and successfully pissed off peasant and noble alike; eventually the Norman barons presented John with an ultimatum to limit his power and restore some of theirs. However, the wording of the document also granted some basic and fundamental rights to the common people as well; this document was known as the Magna Carta, one of the most important legal documents in history and arguably the cornerstone in the temple of western democracy.

The long-term ramifications of this were huge; numerous wars were fought over the power it gave the nobility in the coming centuries, and Henry III (nine years old when he took over from his father John) was eventually forced to call the first parliament, which, crucially, featured both barons (the noblemen, in what would soon become the House of Lords) and burghers (administrative leaders and representatives of the cities & commoners, in the House of Commons). The Black Death (which wiped out a huge proportion of the peasant population) greatly increased the value and importance of the peasants who remained across Europe, for purely economic reasons, but over the next few centuries multiple generations of kings in several countries would slowly return things to the old ways, with them on top and their nobles kept subservient. In countries such as France, a nobleman got himself power, rank, influence and wealth by getting into bed with the king (in the cases of some ambitious noblewomen, quite literally); but in England the existence of a Parliament meant that no matter how much the king’s power increased through the reigns of Plantagenets, Tudors and Stuarts, the gentry had some form of national power and community- and that the people were, to some nominal degree, represented as well. This in turn meant that it became not uncommon for the nobility and high-ranking (or at least rich) ordinary people to come into contact, and created a very fluid class system. Whilst in France a middle-class businessman was looked on with disdain by the lords, in Britain he would be far more likely to be offered a peerage; nowadays the practice is considered undemocratic, but this was the cutting edge of societal advancement several hundred years ago. It was this ‘lower’ class of gentry, comprising the likes of John Hampden and Oliver Cromwell, who would precipitate the English Civil War as King Charles I tried to rule without Parliament altogether (as opposed to his predecessors, who merely chose not to listen to it a lot of the time); when the monarchy was restored (after several years of bloodshed and puritan brutality at the hands of Cromwell’s New Model Army, and a seemingly paradoxical few years spent with Cromwell governing with only a token parliament, when he used one at all), Parliament was the political force in Britain. When James II once again tried his dad’s tactic of proclaiming himself a god-sent ruler whom all should respect unquestioningly, Parliament’s response was to invite the Dutch Prince William of Orange over to replace James and become William III, which he duly did. Throughout the reign of the remaining Stuarts and the Hanoverian monarchs (George I to Queen Victoria), the power of the monarch became steadily more ceremonial as the two key political factions of the day, the Whigs (later to become the Liberal, and subsequently Liberal Democrat, Party) and the Tories (as today’s Conservative Party is still known) slugged it out for control of Parliament, the newly created role of ‘First Lord of the Treasury’ (or Prime Minister- the job wasn’t regularly filled from among the Commons for another century or so) and, eventually, the country. This brought political stability, and it brought about the foundations of modern democracy.

But I’m getting ahead of myself; what does this have to do with the Industrial Revolution? Well, we can partly credit the political and financial stability of the time, which enabled corporations and big business to operate simply and effectively among ambitious individuals wishing to exploit their potential; but I think that the key reason it occurred has to do with those ambitious people themselves. In Eastern Europe & Russia, in particular, there were two classes of people: a nobility content simply to scheme and enjoy their power, and the masses of illiterate serfs. In most of Western Europe there was a growing middle class, but the monarchy and nobility were united in keeping them under their thumb and preventing them from making any serious impact on the world. The French got a bloodthirsty revolution and political chaos as an added bonus, whilst the Russians waited another century before finally getting sufficiently pissed off at the Czar to precipitate a communist revolution. In Britain, however, there were no serfs, and corporations were built from the middle classes. These people’s primary concern wasn’t rank or long-running feuds, disagreements over land or who was sleeping with the king; they wanted to make money, and would do so by every means at their disposal. This was an environment ripe for entrepreneurship, for an idea worth thousands to take the world by storm, and they did so with relish. The likes of Arkwright, Stephenson and Watt came from the middle classes and were backed by middle-class industry, and the rest of Britain came along for the ride as the country’s coincidentally vast coal resources were put to good use in powering the change. Per capita income, population and living standards all soared, and despite the horrors that an age of unregulated industry certainly wrought on its populace, it was this period of unprecedented change that was the vital step in the formation of the world as we know it today. And to think that all this can be traced, through centuries of political change, to the genes of uselessness that would later become King John crossing the Channel after one unfortunate shipwreck…

And apologies- this post ended up being a lot longer than I intended it to be.


Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit that is not assembled from multiple, separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC everything is built in the same place and assembled as one. The main advantage of this is that since the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer precision required of their manufacture surely makes it one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. Memory takes the form of millions upon millions of microscopically small circuits known as memory circuits, each of which consists of one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, then the circuit represents a 1, or a 0 if it is switched off. These memory circuits are generally grouped together, and so each group will consist of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. On a hard disk, patches of magnetically charged material represent binary information rather than memory circuits.
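
As a rough illustration of what counting in ones and zeroes looks like (a minimal Python sketch of my own, not anything a memory chip actually executes), here is an ordinary number written out as a pattern of bits:

```python
def to_binary(n, width=8):
    """Write a non-negative integer as a string of ones and zeroes."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # each remainder is the next-lowest binary digit
        n //= 2
    return bits.rjust(width, "0")  # pad to a fixed width, e.g. one byte

# 13 = 8 + 4 + 1, so its pattern of switches is 00001101
print(to_binary(13))   # -> 00001101
print(to_binary(255))  # -> 11111111, the largest value eight switches can hold
```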

Each little memory circuit, with its simple on/off value, represents one bit of information. Eight bits grouped together form a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data that a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
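
To make that concrete, here is a small Python sketch (my own illustration, not part of the original hardware discussion) showing how the same sort of 8-bit patterns can stand for very different things depending on how they are interpreted:

```python
# The same kind of byte means different things depending on interpretation.
letter = "A"
print(format(ord(letter), "08b"))   # the letter 'A' is stored as the byte 01000001

# One pixel's colour is commonly three bytes: red, green and blue intensities.
pixel = (255, 128, 0)               # a shade of orange
print(" ".join(format(channel, "08b") for channel in pixel))
# -> 11111111 10000000 00000000
```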

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. This takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into an output value. There are three basic types: AND gates (if both inputs equal 1, output equals 1- otherwise, output equals 0), OR gates (if either input equals 1, output equals 1- if both inputs equal 0, output equals 0), and NOT gates (if the input equals 1, output equals 0; if the input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and) or XOR (exclusive OR: output equals 1 if exactly one input equals 1, and 0 if the two inputs are the same) gates. A computer’s CPU (central processing unit) will contain many millions of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
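
As a quick sketch (written in Python rather than transistors, purely as my own illustration of the truth tables above):

```python
# The three basic gates, written as functions over the binary values 0 and 1.
def AND(a, b): return 1 if a == 1 and b == 1 else 0
def OR(a, b):  return 1 if a == 1 or b == 1 else 0
def NOT(a):    return 0 if a == 1 else 1

# More complex gates are just combinations of the basic ones.
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))  # 1 only if exactly one input is 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND", AND(a, b), "OR", OR(a, b), "NAND", NAND(a, b), "XOR", XOR(a, b))
```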

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and to say that areas of the screen A, B and C are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
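
To caricature that ‘IF this AND/OR this’ decision-making in code (a toy Python sketch of my own- real CPUs do nothing this high-level, and the icon coordinates here are invented purely for illustration):

```python
# A toy caricature of the 'IF this AND this, then do that' logic described above.
def mouse_over(mouse_pos, area):
    (x, y), (left, top, right, bottom) = mouse_pos, area
    return left <= x <= right and top <= y <= bottom

def handle_input(mouse_pos, clicked, icon_area):
    # the 'AND gate' of the example: the mouse is over the icon AND the button is pressed
    if mouse_over(mouse_pos, icon_area) and clicked:
        return ["highlight icon", "open program"]
    return []

print(handle_input((12, 8), True, (10, 5, 40, 20)))   # -> ['highlight icon', 'open program']
print(handle_input((99, 99), True, (10, 5, 40, 20)))  # -> []
```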

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…

A Continued History

This post looks set to at least begin by following on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to follow the history along from there until as close to today as I can manage, hopefully getting in a few of the basics of the workings of these strange and wonderful machines.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the progress of the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, designed his own programmable, mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and hardly added anything new to the field. Mechanical calculators had become viable commercial enterprises, getting steadily cheaper and cheaper, and as technological exercises they were becoming ever more sophisticated, most notably with the invention of the analogue computer. These were, basically, a less programmable version of the difference engine- mechanical devices whose various cogs and wheels were so connected up that they would perform one specific mathematical function on a set of data. James Thomson built the first in 1876; it could solve differential equations by integration (a fairly simple but undoubtedly tedious mathematical task), and later developments were widely used to process military data and to solve problems involving numbers too large to handle by human numerical methods. For a long time, analogue computers were considered the future of modern computing, but since they solved and modelled problems using physical phenomena rather than data they were restricted in capability to their original setup.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Punched holes had been around for a while as a way of storing programs, such as the holed-paper reels of a pianola or the punched cards used to automate the workings of a loom, but this was the first example of such devices being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). They were cheap, simple, could be both produced and read easily by a machine, and were even simple to dispose of. Hollerith’s machines later went on to process the data of the 1890 US census, and his company would eventually form part of IBM. The pattern of holes on these cards could be ‘read’ by a mechanical device with a set of levers that would pass through a hole if one was present, turning the appropriate cogs to tell the machine to count up one. This system carried on being used right up until the 1980s on IBM systems, and could be argued to be the first programming language.

However, to see the story of the modern computer truly progress we must fast forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data digitally using on/off binary electrical signals, making it arguably the first digital computer. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand- to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and which also operated in binary. His parts couldn’t be engineered well enough for it to ever work properly, but he kept at it, eventually building three more models and devising the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any one general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs based on reading and manipulating a set of symbols on a strip of tape. The symbol currently being read would determine whether the machine moved up or down the strip, how far, and what it changed the symbol to, and Turing proved that one of these machines could replicate the behaviour of any computer algorithm- and since computers are just devices running algorithms, they can replicate any modern computer too. Thus, if a Turing machine (as they are now known) could theoretically solve a problem, then so could a general algorithm, and if it couldn’t, neither could anything else. These machines not only laid the foundations of computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as the first machines theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as a von Neumann architecture, after the man who first pointed out and explained the idea in response to Turing’s work.
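
For anyone who prefers to see the idea in action, here is a minimal sketch of a Turing machine in Python (the state names and rule table are my own made-up example, not anything from Turing’s paper):

```python
# A minimal Turing machine: this particular rule table flips every bit on the tape.
# Each rule maps (state, symbol read) -> (symbol to write, head movement, next state).
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # '_' marks the blank end of the tape
}

def run(tape, state="flip", position=0):
    cells = list(tape) + ["_"]
    while state != "halt":
        write, move, state = rules[(state, cells[position])]
        cells[position] = write
        position += move
    return "".join(cells).rstrip("_")

print(run("10110"))  # -> 01001
```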

Turing machines contributed one further, vital concept to modern computing- that of Turing-completeness. A Turing-complete system is one capable of replicating the behaviour of any theoretically possible Turing machine, and thus of running any possible algorithm or computable sequence; Turing himself described a single ‘Universal Turing machine’ capable of doing exactly this. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring the Turing-completeness of a system is a key part of designing a computer or programming language, as it guarantees the versatility to perform any task that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, by 1942 there was a war on, and Turing was employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task since they knew the structure and internal layout of the Enigma machine. They were then faced with a new and more daunting prospect: the Lorenz cipher, encoded by an even more complex machine for which they had no blueprints. Turing’s genius, however, apparently knew no bounds, and the Bletchley team eventually worked out its logical functioning. From this a method for deciphering it was formulated, but it required an iterative process that took hours of mind-numbing calculation to get a result out. A faster method of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world- the first electronic, digital and (partially) programmable computer ever to exist. Its mathematical operation was not highly sophisticated: it used a photoelectric tape reader and racks of vacuum tubes, all state-of-the-art electronics at the time, to read the pattern of holes on a paper tape containing the encoded messages, and then compared these to another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not get a precise match, since it didn’t know the original message content) it flagged up that setup as a potential candidate for the message’s encryption, which could then be tested by hand, saving many hundreds of man-hours. Despite its inherent simplicity, its legacy is one of proving a point to the world- that electronic, programmable computers were both possible and viable bits of hardware- and it paved the way for modern-day computing to develop.
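
Very loosely, and purely as my own toy illustration of the counting idea rather than Colossus’s actual statistical tests, the comparison amounted to something like this:

```python
# A loose illustration of the counting idea: compare the message tape against the
# pattern a candidate machine setting would produce, and flag settings that match
# suspiciously often. (Entirely a toy; the real statistics were far subtler.)
def score(message_bits, candidate_bits):
    return sum(m == c for m, c in zip(message_bits, candidate_bits))

message = "1011001110100101"
candidates = {
    "setting A": "1010101010101010",
    "setting B": "1011001010100111",
}

threshold = 12
for name, pattern in candidates.items():
    s = score(message, pattern)
    verdict = "worth testing by hand" if s >= threshold else "discard"
    print(f"{name}: {s}/16 matches -> {verdict}")
```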

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again, I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms), from SOPA to ACTA, has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, they also provoked a lot of debate, ranging in intelligence from the genuinely intellectual to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone-age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival, and the chance that the next meal would be coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield, the design would be appropriated and used for the good of the tribe. You worked for the tribe, and in return the tribe gave you protection, help gathering food and such, and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on bigger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for the country is no longer necessarily as good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in very small-scale situations where it is easy to organise). In this scenario, sharing an innovation ‘for the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared among themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses anyway. However, with the invention of the printing press around the middle of the 15th century, this all changed. Public literacy was on the up and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social changes of the few hundred years following the Renaissance, as the establishment of a less feudal and more merit-based social system, with proper professions springing up as opposed to general peasantry, meant that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment, compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in by running off copies of your book on their machine, profiting from your labour. This can significantly cut into or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens producing material that could not only add to the nation’s happiness but also have great practical use are a valuable resource, and that they should be doing what they can to promote the production of that material, if only to save having to put in the large investment of time and resources themselves. So it makes sense to encourage the production of this material by ensuring that people have a financial incentive to create it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…