“Lies, damn lies, and statistics”

Ours is the age of statistics; of number-crunching, of quantifying, of defining everything by what it means in terms of percentages and comparisons. Statistics crop up in every walk of life, to some extent or other, in fields as widespread as advertising and sport. Many people’s livelihoods now depend on their ability to crunch the numbers, to come up with data and patterns, and much of our society’s increasing ability to do awesome things can be traced back to someone making the numbers dance.

In fact, most of what we think of as ‘statistics’ are not really statistics at all, but merely numbers; to a pedantic mathematician, a statistic is defined as a mathematical function of a sample of data, not of the whole ‘population’ we are considering. We use statistics when it would be impractical to measure the whole population, usually because it’s too large, and are instead trying to mathematically model the whole population based on a small sample of it. Thus, next to no sporting ‘statistics’ are in fact true statistics, as they tend to cover the whole game; if I heard during a rugby match that “Leicester had 59% of the possession”, that is nothing more than a number, or, to use the mathematical term, a parameter. A statistic would be to say “From our sample [of one game] we can conclude that Leicester control an average of 59% of the possession when they play rugby”, but this is quite evidently an unsafe conclusion, since we couldn’t extrapolate Leicester’s normal behaviour from a single match. It is for this reason that mathematical formulae are used to determine the uncertainty of a conclusion drawn from a statistical test, based on the size of the sample we are testing compared to the overall size of the population we are trying to model. These uncertainty levels are often brushed under the carpet when pseudoscientists try to make dramatic, sweeping claims about something, but they are possibly the most important feature of modern statistics.
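To make the sample-versus-population distinction concrete, here’s a quick Python sketch (it assumes numpy is available, and every number in it is invented): we treat a big pile of simulated matches as the ‘population’, compute the true parameter, and then see how a statistic from a small sample estimates it, with an uncertainty that shrinks as the sample grows.

```python
# A minimal sketch of statistic vs parameter, assuming numpy is installed.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 'population': possession % in every match ever played
population = rng.normal(loc=52, scale=8, size=10_000)
parameter = population.mean()          # the true, usually unknowable, value

# A statistic: the mean of a small sample of matches
sample = rng.choice(population, size=20, replace=False)
statistic = sample.mean()

# The uncertainty depends on sample size: standard error = s / sqrt(n)
se = sample.std(ddof=1) / np.sqrt(len(sample))
print(f"parameter (true mean):   {parameter:.1f}%")
print(f"statistic (sample mean): {statistic:.1f}% +/- {1.96 * se:.1f}%")
```

Run it with `size=20` and then with `size=500` and watch the ± figure collapse; that shrinking interval is exactly the uncertainty that dramatic, sweeping claims tend to leave out.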

Another weapon for the poor statistician can be the misapplication of the idea of correlation. Correlation is basically what you have when you take two variables, plot them against one another on a graph, and find a nice neat line running through the points, suggesting that the two are in some way related. Correlation tends to get scientists very excited, since if two things are linked then it suggests that you can make one thing happen by doing another, an often advantageous state of affairs known as a causal relationship. However, whilst correlation and causation are frequently intertwined, the first lesson every statistician learns is this: correlation DOES NOT imply causation.

Imagine, for instance, you have a cold. You feel like crap, your head is spinning, you’re dehydrated and you can’t breathe through your nose. If we were, during the period before, during and after your cold, to plot a graph of your relative ability to breathe through your nose against the severity of your headache (yeah, not very scientific I know), these two variables would correlate, since they occur at the same time due to the cold. However, if I were to decide that this correlation implies causation, then I would draw the conclusion that all I need to do to give you a terrible headache is to plug your nose with tissue paper so you can’t breathe through it. In this case, I have ignored the possibility (and, as it transpires, the eventuality) of there being a third variable (the cold virus) that causes both of the other two variables, and this is very hard to investigate without poking our head out of the numbers and looking at the real world. There are statistical techniques that enable us to do this, but they are for another time.
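Just to show how easily the numbers can fool you, here’s a little Python simulation (numpy assumed; all variables hypothetical) of exactly this situation: a hidden ‘cold’ variable drives both symptoms, the two symptoms correlate beautifully, and yet meddling with one does nothing to the other.

```python
# A minimal sketch of a confounding variable, assuming numpy is installed.
import numpy as np

rng = np.random.default_rng(0)
n = 500

cold = rng.uniform(0, 1, n)                  # hidden cause: severity of the cold
blocked_nose = cold + rng.normal(0, 0.1, n)  # symptom 1 tracks the cold
headache = cold + rng.normal(0, 0.1, n)      # symptom 2 tracks the cold too

# The symptoms correlate strongly, despite neither causing the other:
print(f"correlation: {np.corrcoef(blocked_nose, headache)[0, 1]:.2f}")  # ~0.9

# The tissue-paper 'intervention': force every nose shut. The headache
# doesn't budge, because it only ever depended on the hidden cold variable.
blocked_nose[:] = 1.0
print(f"mean headache, same as before: {headache.mean():.2f}")
```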

Whilst this example was more childish than anything, mis-extrapolation of a correlation can have deadly consequences. One example, explored in Ben Goldacre’s Bad Science, concerns beta-carotene, an antioxidant found in carrots. In 1981, an epidemiologist called Richard Peto published a meta-analysis (post for another time) of a series of scientific studies suggesting that people with high beta-carotene levels showed a reduced risk of cancer. At the time, antioxidants were considered the wonder-substance of the nutrition world, and everyone got on board with the idea that beta-carotene was awesome stuff. However, all of the studies examined were observational ones: taking a lot of different people, seeing what their beta-carotene levels were and then examining whether or not they had cancer or developed it in later life. None of the studies actually gave their subjects beta-carotene and then saw if that affected their cancer risk, and this prompted the editor of Nature (the scientific journal in which Peto’s paper was published) to include a footnote reading:

Unwary readers (if such there are) should not take the accompanying article as a sign that the consumption of large quantities of carrots (or other dietary sources of beta-carotene) is necessarily protective against cancer.

The editor’s footnote quickly proved a well-judged one; a study conducted in Finland some time afterwards actually gave beta-carotene to participants at high risk of lung cancer and found that their risk both of getting the cancer and of dying was higher than for the placebo control group. A later study, named CARET (Carotene And Retinol Efficacy Trial), also tested groups at a high risk of lung cancer, giving half of them a mixture of beta-carotene and vitamin A and the other half placebos. The idea was to run the trial for six years and see how many illnesses and deaths each group ended up with; but after preliminary data showed that those taking the antioxidant tablets were 46% more likely to die from lung cancer, the researchers decided it would be unethical to continue the trial and it was terminated early. Had the conclusions drawn from the Nature article been allowed to get out of hand before this research was done, it could have put at risk thousands of people who hadn’t read the article properly; and all because of the dangers of assuming correlation = causation.

This wasn’t really the gentle ramble through statistics I originally intended it to be, but there you go; stats. Next time, something a little less random. Maybe.


The Plight of Welsh Rugby

It being a rugby time of year, I thought I might once again cast my gaze over the world of rugby in general. Rugby is the sport I love, and the coming of professionalism has seen it become bigger, faster, and more of a spectacle than ever before. The game itself has, to my mind at least, greatly benefited from the coming of the professional age; but with professionalism comes money, and where there’s money there are problems.

Examples of how financial problems have ruined teams abound all over the world, from England (led by the financial powerhouse of the RFU) to New Zealand (where player salary caps are, if I remember correctly, set at around £50,000 to avoid clubs bankrupting themselves). But the worst examples are to be found in Britain, specifically in Wales (and, to a lesser extent, Scotland).

Back in the day, Wales was the powerhouse of northern hemisphere rugby. Clubs like Bridgend, Pontypool and Llanelli, among others, churned out international-level stars at a quite astounding rate for such relatively small clubs. Amidst the valleys, rugby was a way of life, something that united whole communities who would turn out to watch their local clubs in fierce local derbies. And the results followed: despite England and France enjoying the benefit of far superior playing numbers, Wales were among the most successful sides in the then Five Nations Championship, Welsh sides were considered the major challenge for touring southern hemisphere teams, and the names of such Welsh greats as JPR Williams, Barry John, Phil Bennett and, most famous of the lot, Gareth Edwards, have resonated down the ages. Or so the nostalgic rugby press tells me, since I wasn’t really in a position to notice at the time.

However, professionalism demands that clubs pay their players if they wish to keep hold of them, and that requires them to generate a not insignificant degree of income. Income requires fans, and more importantly a large number of fans who are willing and able to travel to games and pay good money for tickets and other paraphernalia, and this requires a team to be based in an area of sufficient population and wealth. This works best when clubs are based in and around large cities; but since rugby is a game centred around rolling around in a convenient acre of mud, it does not always translate well to a city population. As such, many rugby heartlands tend to be fairly rural, and thus present major issues when considering a professional approach to the game.

This was a major problem in Scotland; their greatest talent pool came from the Borders region, home of such famous clubs as Melrose and Galashiels, but when the game went pro in 1995 the area only had a population of around 100,000 and was declining economically. For the SRU to try and support all their famous clubs would have been nigh-on impossible, since there were only so many potential fans to go around the many clubs with proud rugby heritage in such a relatively small area, and to pick one club over another would have been a move far too dangerous to contemplate. So they opted for a regional model: the old clubs would form their own leagues to act as a talent pool for regional sides, which would operate as big, centrally contracted, professional outfits. The idea was that everyone, regardless of their club of origin, would come together to back their region, the proud sum of its many parts; but in reality many consider regional sides to be rather soulless outfits without the heritage or locality to drum up support. Scotland originally formed four regions, but the Caledonia Reds (covering the vast, sparsely populated area north of the major cities) were disbanded after just a season, and the Border Reivers, sprung from Scotland’s rugby heartland, went in 2005 after poor results and worse attendances. Now only Edinburgh and Glasgow are left, doing what they can in places with all the money and none of the heritage.

Ireland also adopted the regional model, but there it was far less of a problem. Ireland (which for rugby purposes incorporates Northern Ireland as well) is a larger, more densely populated country than Scotland, and actually has four major cities to base its four regional sides in: Limerick, Galway, Belfast and Dublin (whose potential to grow into a rugby powerhouse, as one of the largest conurbations in Europe without a major football side, is huge). Not only that, but relatively few Irish clubs had garnered the fame and prestige of their fellow Celts, so the regions didn’t have so many heritage problems. And it shows: Ireland is now the most successful country in the Celtic League (or RaboDirect Pro12, to satisfy the sponsors), Leinster have won three Heineken Cups in five years, and just four years ago the national side achieved their country’s second-ever Grand Slam.

But it was in Wales that rugby had the farthest to fall, and fall it did; without the financial, geographical and club-structure advantages of England or the virgin potential of Ireland, Welsh fortunes have been topsy-turvy. Initially five regions were set up, but the Celtic Warriors folded after just a few seasons, leaving only four, covering the four south coast cities of Llanelli (Scarlets), Swansea (Ospreys), Newport (Dragons) and Cardiff (Blues). Unfortunately, these cities are not huge and are all very close to one another, giving them a small catchment area and very little sense of regional rivalry, since they are all, apparently, part of the same region. Their low populations mean the clubs struggle to support themselves from the city population alone, but without any sense of historic or community identity they find it even harder to build a dedicated fan base; and in the current financial situation, with professional rugby living through its first depression as player wages continue to rise, those finances are getting stretched ever thinner.

Not only that, but all the old clubs, whilst they still exist, are losing out on the deal too. The prestige and heritage are still there, but with the WRU’s and the rugby world’s collective focus on the regional teams’ top-level performance, nobody cares about the clubs currently tussling it out in the Principality Premiership, and many of these communities have lost their connection with clubs that once very much belonged to them. This loss of passion for the game on a local level may be partly inspired by the success of football clubs such as Swansea, enjoying an impressive degree of Premier League success. Many of these local clubs have also overspent in pursuit of success in the professional era, and with dwindling crowds this has come back to bite them; some prestigious clubs have gone into administration and tumbled down the leagues, tarnishing a reputation and dignity that is, for some, the best thing they have left. Even the Welsh national team, so often a source of pride no matter what befalls the club game, has suffered over the last year, only recently breaking an eight-match losing streak that drew stark attention to the Welsh game’s ailing health.

The WRU can’t really win in this situation; it’s too invested in the regional model to scrap it without massive financial losses, and to try and invest in the club game would stretch the regions’ wallets even further than they are currently stretched. And yet the regional model isn’t working brilliantly either, failing to regularly produce either the top-quality games that such a proud rugby nation deserves or sufficient money to support the game. Wales’ economic situation, in terms of population and overall wealth, is simply not well suited to the excesses of professional sport, and the game is suffering as a result. And there’s just about nothing the WRU can do about it, except to keep on pushing and hoping that their regions will gather loyalty, prestige and (most importantly) cash in due time. Maybe the introduction of an IRB-enforced universal salary cap, an idea I have long supported, would help the Welsh, but it’s not a high-priority idea within the corridors of power. Let us just hope the situation somehow manages to resolve itself.

Connections

History is a funny old business; an endless mix of overlapping threads, intermingling stories and repeating patterns that makes for fascinating study for anyone who knows where to look. However, the part of it that I enjoy most involves taking the longitudinal view on things, linking two seemingly innocuous, or at least totally unrelated, events and following the trail of breadcrumbs that allows the two to connect. Things get even more interesting when the relationship is causal, so today I am going to follow the trail of one of my favourite little stories: how a single storm was, in the long run, responsible for the Industrial Revolution. Especially surprising given that the storm in question occurred in 1064.

This particular storm occurred in the English Channel, and doubtless blew many ships off course, including one that had left from the English port of Bosham (opposite the Isle of Wight). Records don’t say why the ship was making its journey, but what was definitely significant was its passenger: Harold Godwinson, Earl of Wessex and possibly the most powerful person in the country after King Edward the Confessor. He landed (although that might be overstating the dignity and intention of the process) at Ponthieu, in northern France, and was captured by the local count, who turned him over to his liege once word of the visitor reached him. The liege in question was Duke William of Normandy, a man of famed temper, or ‘William the Bastard’ as he was also known (he was the illegitimate son of the old duke and a tanner’s daughter). Harold’s next move was (apparently) to accompany his captor to a battle just up the road in Brittany. He then tried to negotiate his freedom, which William granted on the condition that he swear an oath that, were the childless King Edward to die, he would support William’s claim to the throne (England at the time operated a sort of elective monarchy, in which prospective candidates were chosen by a council of nobles known as the Witenagemot).

According to the Bayeux tapestry, Harold took this oath and left France; but two years later King Edward fell into a coma. With his last moment of consciousness before what was surely an unpleasant death, he apparently gestured to Harold, standing by his bedside. This was taken by Harold, and the Witenagemot, as the appointment of a successor, and Harold accepted the throne. This understandably infuriated William, who considered it a violation of the oath, and he subsequently invaded England. His timing coincided with another claimant, Harald Hardrada of Norway, deciding to push his own case for the throne, and in the resulting chaos William came to the fore. He became William the Conqueror, and the Normans controlled England for the next several hundred years.

One of the things that the Normans brought with them was a newfound view on religion; England was already Christian, but the two Churches’ views on certain subjects differed slightly. One such subject was serfdom, a form of slavery that was very popular among the feudal lords of the time. Serfs were basically slaves, in that they could be bought or sold as commodities; they were legally bound to the land they worked, and were thus traded and owned by the feudal lords who owned the land. In some countries, it was not unusual for one’s lord to change overnight after a drunken card game; Leo Tolstoy lost most of his land in just such an incident, but that’s another story. It was not a good existence for a serf, completely devoid of any form of freedom, but for a feudal lord it was great: cheap, guaranteed labour and thus income from one’s land, with no real risk involved. However, the Norman church’s interpretation of Christianity was morally opposed to the idea, and serfs began to be replaced by free peasants as a source of agricultural labour. A free peasant was not tied to the land but rented it from his liege, along with the right to use various pieces of land & equipment; the feudal lord still had his income, but if he wanted goods from his land he had to buy them from his peasants, and there were limits on the control he had over them. If a peasant so wished, he could pack up and move to London or wherever, or join a ship’s crew; whatever he wanted in his quest to make his fortune. The vast majority were never faced with this choice as a reasonable idea, but the principle was important; a later Norman king, Henry I, also reorganised the legal system and developed the role of sheriff, producing a society based around something almost resembling justice.

[It is worth noting that the very last serfs were not freed until the reign of Queen Elizabeth in the 1500s, and that subsequent British generations during the 18th century had absolutely no problem with trading in black slaves, but they justified that partly by never actually seeing the slaves and partly by taking the view that the black people weren’t proper humans anyway. We can be disgusting creatures]

A later king further enhanced this concept of justice, even if completely by accident. King John was the younger brother of inexplicable national hero King Richard I, aka Richard the Lionheart or Coeur-de-Lion (seriously, the dude was a Frenchman who visited England twice, both times to raise money for his military campaigns, and later levied one of the largest ransoms in history on his people when he had to be ransomed from the Holy Roman Emperor; how he came to national prominence I will never know), and John was unpopular. He levied heavy taxes on his people to pay for costly and invariably unsuccessful military campaigns, and whilst various incarnations of Robin Hood have made him seem a lot more malevolent than he probably was, he was not a good king. He was also harsh to his people, and successfully pissed off peasant and noble alike; eventually the barons presented John with an ultimatum to limit his power and restore some of theirs. However, the wording of the document also granted some basic and fundamental rights to the common people. This document was the Magna Carta: one of the most important legal documents in history, and arguably the cornerstone in the temple of western democracy.

The long-term ramifications of this were huge; numerous wars were fought over the power it gave the nobility in the coming centuries, and Henry III (nine years old when he took over from his father John) was eventually forced to call the first parliament, which, crucially, featured both barons (the noblemen, in what would soon become the House of Lords) and burghers (administrative leaders and representatives of the cities & commoners, in the House of Commons). The Black Death, by wiping out a huge proportion of the peasant population, greatly increased the value and importance of the peasants who were left across Europe for a few years, for purely economic reasons; but over the next few centuries multiple generations of kings in several countries would slowly return things to the old ways, with them on top and their nobles kept subservient.

In countries such as France, a nobleman got himself power, rank, influence and wealth by getting into bed with the king (in the cases of some ambitious noblewomen, quite literally); but in England the existence of a Parliament meant that no matter how much the king’s power increased through the reigns of Plantagenets, Tudors and Stuarts, the gentry had some form of national power and community, and the people were, to some nominal degree, represented as well. This in turn meant that it became not uncommon for the nobility and high-ranking (or at least rich) ordinary people to come into contact, and created a very fluid class system. Whilst in France a middle-class businessman was looked on with disdain by the lords, in Britain he would be far more likely to be offered a peerage; nowadays the practice is considered undemocratic, but this was the cutting edge of societal advancement several hundred years ago.

It was this ‘lower’ class of gentry, comprising the likes of John Hampden and Oliver Cromwell, who would precipitate the English Civil War as King Charles I tried to rule without Parliament altogether (as opposed to his predecessors, who merely chose not to listen to it a lot of the time). When the monarchy was restored (after several years of bloodshed and puritan brutality at the hands of Cromwell’s New Model Army, and a seemingly paradoxical few decades spent with Cromwell governing with only a token parliament, when he used one at all), parliament was the political force in Britain. When James II once again tried his dad’s tactic of proclaiming himself God-sent ruler whom all should respect unquestioningly, Parliament’s response was to invite the Dutch Prince William of Orange over to replace James and become William III, which he duly did. Throughout the reigns of the remaining Stuarts and the Hanoverian monarchs (George I to Queen Victoria), the power of the monarch became steadily more ceremonial as the two key political factions of the day, the Whigs (later to become the Liberal, and subsequently Liberal Democrat, Party) and the Tories (as today’s Conservative Party is still known), slugged it out for control of Parliament, the newly created role of ‘First Lord of the Treasury’ (or Prime Minister; the job wasn’t regularly filled from among the Commons for another century or so) and, eventually, the country. This brought political stability, and it brought about the foundations of modern democracy.

But I’m getting ahead of myself; what does this have to do with the Industrial Revolution? Well, we can partly blame the political and financial stability of the time, enabling corporations and big business to operate simply and effectively among ambitious individuals wishing to exploit potential; but I think that the key reason it occurred has to do with those ambitious people themselves. In Eastern Europe and Russia in particular, there were two classes of people: nobility who were simply content to scheme and enjoy their power, and the masses of illiterate serfs. In most of Western Europe there was a growing middle class, but the monarchy and nobility were united in keeping them under their thumb and preventing them from making any serious impact on the world. The French got a bloodthirsty revolution and political chaos as an added bonus, whilst the Russians waited another century to finally get sufficiently pissed off at the Czar to precipitate a communist revolution. In Britain, however, there were no serfs, and corporations were built from the middle classes. These people’s primary concern wasn’t rank or long-running feuds, disagreements over land or who was sleeping with the king; they wanted to make money, and would do so by every means at their disposal. This was an environment ripe for entrepreneurism, for an idea worth thousands to take the world by storm, and they seized it with relish. The likes of Arkwright, Stephenson and Watt came from the middle classes and were backed by middle-class industry, and the rest of Britain came along for the ride as the country’s coincidentally vast coal resources were put to good use in powering the change. Per capita income, population and living standards all soared, and despite the horrors that an age of unregulated industry certainly wrought on its populace, this period of unprecedented change was the vital step in the formation of the world as we know it today. And to think that all this can be traced, through centuries of political change, to the genes of uselessness that would later produce King John crossing the Channel in the wake of one unfortunate shipwreck…

And apologies; this post ended up being a lot longer than I intended it to be.

The Offensive Warfare Problem

If life has shown itself to be particularly proficient at anything, it is fighting. There is hardly a creature alive today that does not employ physical violence in some form to get what it wants (or defend what it has) and, despite a vast array of moral arguments that this is not a good idea (I must do a post on the prisoner’s dilemma some time…), humankind is, of course, no exception. Unfortunately, our innate inventiveness and imagination as a race mean that we have been able to let our brains take our fighting to the next level, with consequences that have grown ever more destructive as time has gone by. With the construction of the first atomic bombs, humankind finally acquired what it had threatened to for so long: the ability to all but wipe itself off the face of planet Earth.

This insane level of offensive firepower is not just restricted to large-scale big guns (the kind that have been used for political genital comparison ever since Napoleon revolutionised the use of artillery in warfare). Perhaps the most interesting and terrifying advancement in modern warfare and conflict has been the increased prevalence and distribution of powerful small arms, giving ‘the common man’ of the battlefield a level of destructive power that would be considered hideously overwrought in any other situation (or, indeed, on the battlefield of 100 years ago). The epitome of this effect is, of course, the Kalashnikov AK-47, whose cheapness and insane durability have rendered it invaluable to rebel groups and other hastily thrown-together armies, giving them an ability to kill stuff that makes them very, very dangerous to the population of wherever they’re fighting.

And this distribution of such awesomely dangerous firepower has begun to change warfare; to explain how, I need to go on a rather dramatic detour. The goal of warfare has always, basically, centred around the control of land and/or population, and as Frank Herbert makes so eminently clear in Dune, whoever has the power to destroy something controls it, at least in a military context. In his book Ender’s Shadow (I feel I should apologise for all these sci-fi references), Orson Scott Card makes the entirely separate point that defensive warfare in the context of space warfare makes no practical sense. For a ship and its weapons to work in space warfare, he rather convincingly argues, the level of destruction it must be able to deliver would have to be so large that, were it ever to get within striking distance of Earth, it would be able to wipe out literally billions; and, given the distances over which any space war must be conducted, mutually assured destruction simply wouldn’t work as a defensive strategy, as it would take far too long for any counterstrike attempt to happen. Therefore, any attempt to base one’s warfare effort around defence, in a space warfare context, is simply too risky, since one ship (or even a couple of stray missiles) slipping through in any of the infinite possible approach directions to a planet would be able to cause uncountable levels of damage, leaving the enemy with a demonstrable ability to destroy one’s home planet and, thus, control over it and the tactical initiative. Thus, it doesn’t make sense to focus on a strategy of defensive warfare, and any long-distance space war becomes a question of getting there first (plus a bit of luck).

This is all rather theoretical and, since we’re talking about a bunch of spaceships firing missiles at one another, not especially relevant when considering the realities of modern warfare; but it does illustrate a point, namely that as offensive capabilities increase, so do the stakes should defensive systems fail. This was spectacularly, and horrifyingly, demonstrated on 9/11, when a handful of fanatics armed with little more than box cutters were able to kill nearly 3,000 people, destroy the World Trade Center and irrevocably change the face of the world economy and the world in general. And that came from only one mode of attack; despite all the advances in airport security that have been made since then, there is still ample opportunity for an attack of similar magnitude to happen. A terrorist organisation, we must remember, only needs to get lucky once. This means that ‘normal’ defensive methods, especially since they would have to be enforced throughout our everyday lives (given the format that terrorist attacks typically take), cannot be applied to this problem, and we must rely almost solely on intelligence efforts to try and defend ourselves.

This business of defence and offence being in imbalance in some form or another is not a phenomenon solely confined to the modern age. Once, wars were fought solely with clubs and shields, creating a somewhat balanced case of attack and defence: attack with the club, defend with the shield. If you were good enough at defending, you could survive; simple as that. However, some bright spark then came up with the idea of the bow, and suddenly the world was in imbalance; even if an arrow couldn’t pierce an animal skin stretched over some sticks (which, most of the time, it could), it was fast enough to appear from nowhere before you had a chance to defend yourself. Thus, our defensive capabilities could not match our offensive ones. Fast forward a millennium or two, and we come to a similar situation: now we defended ourselves against arrows and such by hiding in castles behind giant stone walls and other fortifications that were near impossible to break down, until some smart alec realised the potential of this weird black powder invented in China. The cannons that were subsequently invented could bring down castle walls in a matter of hours or less, and once again they could not be matched from the defensive standpoint; our only options now lay in hiding somewhere the artillery couldn’t reach us, or staying out of the way of these lumbering beasts. As artillery technology advanced throughout the ensuing centuries, this latter option became less and less feasible, as the sheer volume of high-explosive weaponry trained on opposing armies made them next to impossible to fight in the field; but artillery was still difficult to aim accurately at well dug-in soldiers, and from these starting conditions we ended up with the First World War.

However, this is not a direct parallel of the situation we face now; today we deal with the simple and very real truth that a western power attempting to defend its borders (the situation is somewhat different when it is occupying somewhere like Afghanistan, but that can wait until another time) cannot rely on simple defensive methods alone. Even if every citizen were an army-trained veteran armed with a full complement of sub-machine guns (which they quite obviously aren’t), it wouldn’t be beyond the wit of a terrorist group to sneak a bomb in somewhere destructive. Right now, these methods may only be capable of killing or maiming hundreds or thousands at a time; tragic, but perhaps not capable of restructuring a society. But as our weapon systems get ever more advanced, and our more effective systems get ever cheaper and easier for fanatics to get hold of, the destructive power of lone murderers may increase dramatically, and with deadly consequences.

I’m not sure that counts as a coherent conclusion, or even if this counts as a coherent post, but it’s what y’got.

Why the chubs?

My last post dealt with the thorny issue of obesity: both its increasing presence in our everyday lives, and what for me is the underlying reason behind the stats that back up media scare stories concerning ‘the obesity epidemic’, namely the rise in size of the ‘average’ person over the last few decades. The precise causes of this trend can be put down to a whole host of societal factors within our modern age, but that story is boring as hell and has been repeated countless times by commentators far more adept in this field than me. Instead, today I wish to present the case for modern-day obesity as a problem concerning the fundamental biology of a human being.

We, and our dim and distant ancestors of the scaly/furry variety, have spent the last few million years living wild: hunting, fighting and generally acting much like any other product of evolution. Thus, we can learn a lot about our own inbuilt biology and instincts by studying the behaviour of animals alive today, and when we do so, several interesting animal eating habits become apparent. As anyone who has tried it as a child can attest (and I speak from personal experience), grass is not good stuff to eat. It’s tough, it takes a lot of chewing and processing (many herbivores have multiple stomachs to make sure they squeeze the maximum nutritional value out of their food), and there really isn’t much in it to power a fully functional being. As such, grazers on grass and other such tough plant matter (such as leaves) will spend most of their lives doing nothing but guzzle the stuff, trying to get as much as possible through their systems. Other animals favour food with a higher nutritional content, such as fruits, tubers or, in many cases, meat, but these frequently present issues. Fruits are highly seasonal and rarely available in a large enough volume to support a large population, as well as being quite hard to eat in bulk; plants try to ‘design’ fruits so that each visitor takes only a few at a time, the better to spread their seeds far and wide, and as such there are few animals that can sustain themselves on such a diet. Other foods such as tubers or nuts are hard to get at, needing to be dug up or broken open in highly energy-consuming activities, whilst meat has the annoying habit of running away or fighting back whenever you try to get at it. As anyone who watches nature documentaries will attest, most large predators will only eat once every few days (admittedly rather heavily).

The unifying factor in all of this is that food is, in the wild, highly energy- and time-consuming to get hold of and consume, since every source of it guards its prize jealously. Therefore, any animal that wants to survive in this tough world must be near-constantly in pursuit of food simply to fulfil all of its life functions, and this is characterised by being perpetually hungry. Hunger is the body’s way of telling us that we should get more food, and in the wild this constant desire for more is kept in check by the difficulty that getting hold of it entails. Similarly, animal bodies try to assuage this desire by being lazy; if something isn’t necessary, then there’s no point wasting valuable energy going after it (since this will mean spending more time going after food to replace the lost energy).

However, in recent history (and a spectacularly short period of time from evolution’s point of view), one particular species called Homo sapiens came up with this great idea called civilisation, which basically entailed the pooling and sharing of skill and resources in order to best benefit everyone as a whole. As an evolutionary success story, this is right up there with developing multicellular body structures in terms of being awesome, and it has enabled us humans to live far more comfortable lives than our ancestors did, with correspondingly far greater access to food. This has proved particularly true over the last two centuries, as technological advances in a more democratic society have improved the everyman’s access to food and comfortable living to a truly astounding degree. Unfortunately (from the point of view of our waistlines), the instincts of our bodies haven’t quite caught up with the idea that when we want or need food, we can just get food, without all that inconvenient running around after it getting in the way. Not only that, but a lack of pack hierarchy combined with this increased availability means that we can stock up on food until we have eaten our absolute fill if we so wish; the difference between ‘satiated’ and ‘stuffed’ can work out at well over 1,000 calories per meal, and over a long period of time it only takes a little more than we should be having every day to start packing on the pounds. Combine that with our natural predilection for laziness, meaning that we don’t naturally think of going out for some exercise as fun purely for its own sake, and the fact that we no longer burn calories chasing our food, or in the muscles we would build up from said chasing, and we find ourselves consuming a lot more calories than we really should be.
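To put a rough number on that ‘little more every day’, here’s a back-of-the-envelope sum in Python (the 3,500-calorie-per-pound figure is the usual rule of thumb rather than a precise biological constant, and the daily surplus is an invented example):

```python
# A rough sketch of how a small daily calorie surplus compounds.
daily_surplus_kcal = 200   # a biscuit or two beyond what we burn each day
kcal_per_pound = 3_500     # common rule-of-thumb energy content of 1 lb of fat

days_per_pound = kcal_per_pound / daily_surplus_kcal
print(f"~1 lb gained every {days_per_pound:.0f} days, "
      f"or ~{365 / days_per_pound:.0f} lb a year")
# ~1 lb gained every 18 days, or ~21 lb a year
```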

Not only that, but during this time we have also got into the habit of spending a lot of time worrying over the taste and texture of our food. This means that, unlike our ancestors, who were just fine with simply jumping on a squirrel and devouring the thing, we have to go through the whole rigmarole of getting stuff out of the fridge and spending two hours slaving away in a kitchen attempting to cook something vaguely tasty. This wait is not something our bodies enjoy very much, meaning we often turn to ‘quick fixes’ when in need of food: stuff like bread, pasta or ready meals. Whilst we all know how much crap goes into ready meals (which should, as a rule, never be bought by anyone who cares even in the slightest about their health; the salt content of those things is insane) and other such ‘quick fixes’, fewer people are aware of the impact a high intake of grains can have on our bodies. Stuff like bread and rice only started being eaten by humans a few thousand years ago, as we discovered the benefits of farming and cooking, and whilst they are undoubtedly a good food source (and are very, very difficult to cut from one’s diet whilst still remaining healthy), our bodies have simply not had enough time, evolutionarily speaking, to get used to them. This means they have a tendency not to make us feel as full as their calorie content would suggest, meaning that we eat more than our body in fact needs (if you want to feel full whilst not taking in so many calories, protein is the way to go; meat, fish and dairy are great for this).

This is all rather academic, but what does it mean for you if you want to lose a bit of weight? I am no expert on this, but then again neither are most of the people acting as self-proclaimed nutritionists in the general media; and anyway, I don’t have any better ideas for posts. So, look out for my next post for my, admittedly basic, advice for anyone trying to make themselves that little bit healthier, especially if you’re trying to work off a few of the pounds built up over this festive season.

NMEvolution

Music has been called by some the greatest thing the human race has ever done, and at its best it is undoubtedly a profound expression of emotion more poetic than anything Shakespeare ever wrote. True, done badly it can sound like a trapped cat in a box of staplers falling down a staircase, but let’s not get hung up on details here- music is awesome.

However, music as we know it has only really existed for around a century or so, and many of the developments in music’s history that have shaped it into the tour de force it is in modern culture run in direct parallel to human history. As such, the development of our race and the development of music run closely alongside one another, so I thought I might attempt a set of edited highlights of the former (well, of western history at least) by way of an exploration of the latter.

Exactly how and when the various instruments as we know them were invented and developed into what they currently are is largely irrelevant (mostly since I don’t actually know and don’t have the time to research all of them), but historically they fell into one of two classes. The first could be loosely dubbed ‘noble’ instruments: stuff like the piano, clarinet or cello, which were (and are) hugely expensive to make, requiring a significant level of skill, and were generally played for and by the rich upper classes in vast orchestras, performing centuries-old music written by the very few men with the riches, social status and talent to compose it. On the other hand, we have the less historically significant, but just as important, ‘common’ instruments, such as the recorder and the ancestors of the acoustic guitar. These were a lot cheaper to make and thus more available to (although certainly far from widespread among) the poorer echelons of society, and it was on these instruments that tunes were passed down from generation to generation, accompanying traditional folk dances and the like; the kind of people who played such instruments very rarely had the time to spare to really write anything new for them, and certainly stood no chance of making a living out of them. And, for many centuries, that was it: what you played and what you listened to, if you did so at all, depended on who you were born as.

However, during the great socioeconomic upheaval and levelling that accompanied the 19th century industrial revolution, music began to penetrate society in new ways. The growing middle and upper-middle classes quickly adopted the piano as a respectable ‘front room’ instrument for their daughters to learn, and sheet music was rapidly becoming both available and cheap for the masses. As such, music began to become an accessible activity for far larger swathes of the population and concert attendances swelled. This was the Romantic era of music composition, with the likes of Chopin, Mendelssohn and Brahms rising to prominence, and the size of an orchestra grew considerably to its modern size of four thousand violinists, two oboes and a bored drummer (I may be a little out in my numbers here) as they sought to add some new experimentation to their music. This experimentation with classical orchestral forms was continued through the turn of the century by a succession of orchestral composers, but this period also saw music head in a new and violently different direction; jazz.

Jazz was the quintessential product of the United States’ famous motto ‘E Pluribus Unum’ (From Many, One), being as it was the result of a mixing of immigrant US cultures. Jazz originated amongst America’s black community, many of whom were descendants of imported slaves or even former slaves themselves, and was the result of traditional African music blending with that of their forcibly adopted land. Whilst many black people were heavily discriminated against when it came to finding work, they found they could forge a living in the entertainment industry, in seedier venues like bars and brothels. First finding its feet in the irregular, flowing rhythms of ragtime music, the music of the deep south moved on to the more discordant patterns of blues in the early 20th century, before finally incorporating a swinging, syncopated rhythm and an innovative spirit of improvisation to invent jazz proper.

Jazz quickly spread like wildfire across the underground performing circuit, but it wouldn’t force its way into popular culture until the introduction of Prohibition in the USA. From 1920 all the way up until the presidency of Franklin D Roosevelt (whose repeal of the ban is a story in and of itself), the US government banned the manufacture and sale of alcohol, which (as was to be expected, in all honesty) simply forced the practice underground. Dozens of illegal speakeasies (venues of drinking, entertainment and prostitution, usually run by the mob) sprang up in every district of every major American city, and they were frequented by everyone from the poorest street sweeper to the police officers who were supposed to be closing them down. And in these venues, jazz flourished. Suddenly, everyone knew about jazz; it was a fresh, new sound to everyone’s ears, something that stuck in the head and, because of its ‘common’, underground connotations, quickly became the music of the people. Jazz musicians such as Louis Armstrong (a true pioneer of the genre) became the first celebrity musicians, and the way the music’s feel resonated with the happy, prosperous mood surrounding the economic good times of the 1920s led that decade to be dubbed ‘the Jazz Age’.

Countless things allowed jazz, and the successive genres it spawned, to spread around the world: the invention of the gramophone further enhanced public access to music, as did the new cultural phenomenon of the cinema and even the Second World War, which allowed for truly international spread. By the end of the war, jazz, soul, blues, R&B and all their other derivatives had spread from their mainly deep south origins across the globe, blazing a trail for all other forms of popular music to follow in their wake. And, come the 50s, they did so in truly spectacular style… but I think that’ll have to wait until next time.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this: there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given what ‘understand’ means in computer-based situations. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some research into the hows and whys of its installation, along with which came quite a lot of info about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform a range of jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t always as reliable as advertised. The human brain is not built for infallibility and, not infrequently, mistakes would be made. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and convert between units, tens, hundreds and so on (i.e. a turn of four spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances too. It could also subtract, multiply and divide (with some difficulty), and moreover it proved an important point: that a mechanical machine could cut out the human error factor and reduce any inaccuracy to one of simply entering the wrong number.
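The carrying trick at the heart of Pascal’s machine, where a cog turning past nine nudges its neighbour along by one, is simple enough to sketch in a few lines of Python (purely illustrative, of course; the real thing did this with gravity-assisted levers rather than lists):

```python
# A toy model of a Pascal-style counting machine: each 'cog' holds a
# digit 0-9, and turning one past 9 carries one into the next cog along.
def turn(cogs, column, spaces):
    """Advance the cog at `column` by `spaces`, carrying leftwards.

    cogs[0] is the units cog, cogs[1] the tens cog, and so on.
    """
    cogs[column] += spaces
    while column < len(cogs) and cogs[column] > 9:
        cogs[column] -= 10
        if column + 1 < len(cogs):
            cogs[column + 1] += 1   # the carry into the next cog
        column += 1
    return cogs

counter = [7, 0, 0]     # a seven already counted on the units cog
turn(counter, 0, 4)     # turn the units cog four spaces
print(counter)          # [1, 1, 0]: units=1, tens=1, i.e. eleven
```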

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several more, of a range of designs, were built during the 18th century as showpieces, but by the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates: a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine, his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to understand fully, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
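The core idea, the method of finite differences, is rather lovely: set the machine’s columns up with the right starting values, and every subsequent value of a polynomial drops out using nothing but addition, which cogs are very good at. Here is a quick Python sketch (with a made-up example polynomial) of what the columns are doing:

```python
# The method of finite differences, as mechanised by the difference engine.
# Example polynomial (hypothetical): f(x) = 2x^2 + 3x + 1
f = lambda x: 2 * x * x + 3 * x + 1

# Starting 'column positions': f(0), the first difference, and the second
# difference (which is constant for a quadratic).
diffs = [f(0), f(1) - f(0), (f(2) - f(1)) - (f(1) - f(0))]

values = []
for _ in range(6):
    values.append(diffs[0])
    diffs[0] += diffs[1]    # value + first difference -> next value
    diffs[1] += diffs[2]    # first difference grows by the constant 2nd diff

print(values)   # [1, 6, 15, 28, 45, 66] == [f(0), f(1), ..., f(5)]
```

Change the starting values and the same pure-addition crank tabulates a different polynomial; that is the sense in which the setup ‘programmed’ the machine.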

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but he was often brash (once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him) and prized academia above fiscal matters and practicality. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine; they pulled the plug, and the difference engine never got made. Neither did the analytical engine, which is a crying shame: this was the first true computer design, with two separate inputs for data and the required program (which could be a lot more complicated than just adding or subtracting) and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure the performing of programs occurred in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

The Land of the Red

Nowadays, the country to talk about if you want to be seen as politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls), with an economy and defence budget second only to the USA’s in size, it also features a gigantic manufacturing and raw materials extraction industry, the world’s largest standing army and one of only five remaining communist governments. In many ways, this is China’s second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world’s largest economy for most of the intervening time. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time 2,000 years of Imperial China were replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second biggest loss for any country after the Soviet Union, as a Japanese army fresh from its own revolution from imperial to modern systems went on a rampage of rape, murder and destruction through underdeveloped northern China, where some warlords still fought with swords. The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People’s Republic, then led by former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao’s reign, the Chinese population near-doubled in an effort to increase the available workforce, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao’s “Great Leap Forward”, in which he tried to move his country away from its previously agricultural economy and into a more manufacturing-centric system. However, whilst the Chinese government insists to this day that the three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and killed only 15 million people, most external commentators agree that the sudden change in the availability of food brought about by the Great Leap certainly contributed to a death toll estimated to be in the region of 20-40 million. Oh, and the whole business was an economic failure, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food with useless, low-quality pig iron.

This event in many ways typifies the Chinese way- that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it- an attitude partially reflecting the disregard for the value of life historically also common in Japan. This is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won; a country whose main justification for the Great Leap Forward was the attempt to create a social structure and culture upon which the government could effectively impose socialism, an aim it pursued again during the “Cultural Revolution” of the mid-sixties. All the Cultural Revolution served to do was get a lot of people killed, plunge the country into a decade of absolute chaos and destroy China’s education system. And whilst it reaffirmed Mao’s godlike status (partially thanks to an intensification of his personality cult), some of his actions rather shamed the governmental high-ups, forcing the party to take the line that, whilst his guiding thought was of course still the foundation of the People’s Republic and entirely correct in every regard, his actions were somehow separate from it, and so they were quietly brushed under the carpet. It helped that by this point Mao was dead, and thus unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, destruction and political upheaval (even now the government is liable to arrest anyone who suggests the Cultural Revolution was a good idea), these things shaped China into the powerhouse it is today. Mao’s focus on a manufacturing economy may have slaughtered millions of people and resolutely failed to work for twenty years, but it has now started to bear fruit, giving the Chinese economy the kind of stable footing many countries would dearly love in these days of economic instability. Chinese communism may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but it has allowed the government to control its labour force and industry effectively, letting it escape the worst ravages of the last few economic downturns and preventing internal instability. And the iron fist with which the party forced itself upon the people of China for decades has allowed its controls to be gently relaxed in the modern era whilst keeping the government’s position secure, to an extent satisfying the criticisms of Western commentators. Now China is rich enough, and positioned solidly enough, to placate its people, keep up its education system and build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas are not making their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

Icky stuff

OK guys, time for another multi-part series (always a good fallback when I’m short of ideas). Actually, this one started out as just an idea for a single post about homosexuality, but when thinking about how much background stuff I’d have to stick in for the argument to make sense, I thought I might as well dedicate an entire post to background and see what I could do with it from there. So, here comes said background: an entire post on the subject of sex.

The biological history of sex must really start by considering the history of biological reproduction. Reproduction is a vital part of the experience of life for all species, a necessary feature for something to be classified as ‘life’, and, according to some thinkers, the only reason life exists in the first place. In order to be successful by any measure, a species must exist; in order to exist, those of the species who die must be replaced; and in order for this to occur, the species must reproduce. The earliest form of reproduction, found amongst the earliest single-celled life forms, was binary fission, a basic form of asexual reproduction whereby the internal structure of the organism is replicated before it splits in two, creating two organisms with identical genetic makeup. This is an efficient way of expanding a population very quickly, but it has its flaws. For one thing, it creates no variation in the genetics of a population, meaning that whatever kills one organism stands a very good chance of destroying the entire population; all genetic diversity depends on random mutations. For another, it is only really suitable for single-celled organisms such as bacteria, since splitting up a multi-celled organism once all the genetic data has been replicated is a complicated geometric task. Other organisms have tried other methods of reproducing asexually, such as budding in yeast, but around 1 billion years ago an incredibly strange genetic mutation must have taken place, possibly among several different organisms at once. Nobody knows exactly what happened, but one type of organism began requiring genetic data from two different creatures rather than one, and thus was sexual reproduction, both metaphorically and literally, born.
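
To put a number on ‘very quickly’: binary fission doubles the population every generation, so n generations turn one cell into 2^n. A bacterium dividing every twenty minutes or so (a rough textbook figure for E. coli under ideal conditions, used here purely for illustration) would rack up over a billion descendants within ten hours, as this little sketch shows:

# Unchecked binary fission: one cell, doubling once per generation.
GENERATION_MINUTES = 20  # rough figure for E. coli in ideal conditions

for hour in range(0, 11, 2):
    generations = hour * 60 // GENERATION_MINUTES
    print(f"after {hour:2d} hours: {2 ** generations:,} cells")

In reality food and space run out long before then, of course- the point is simply that the raw growth rate of fission is explosive, which makes its genetic fragility all the more striking a trade-off.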

Just about every complex organism alive on Earth today uses this system in one form or another (although some can also reproduce asexually, or self-fertilise), and it’s easy to see why. It may be a more complicated system, far harder to execute, but by naturally varying the genetic makeup of a species it makes that species as a whole far more resistant to external factors such as disease- natural selection demonstrated at its finest. Perhaps its most basic form is that adopted by aquatic animals such as most fish and lobsters: both sexes simply spray their eggs and sperm into the water (usually as a group, at roughly the same time and place, to increase the chance of fertilisation) and leave them to mix and fertilise one another. The zygotes are then left to grow into adults of their own accord- a lot are of course lost to predators, representing a huge loss in terms of inputted energy, but the sheer number of fertilised eggs still produces a healthy population. It is interesting to note that this most basic of reproductive methods, performed in a similar manner by plants, is used by animals as complex as fish (although their place on the evolutionary ladder is both confusing and uncertain), whilst supposedly more ‘basic’ animals such as molluscs have some of the weirdest and most elaborate courtship and mating rituals on Earth (seriously, YouTube ‘snail mating’. That shit’s weird).

Over time, the process of mating and breeding in the animal kingdom has grown more and more complicated. Exactly why the male testes & penis and the female vagina developed the way they did is unclear from an evolutionary perspective, but since most animals use a broadly similar system (males have an appendage, females a receptacle), we can presume this is simply how it started and that things haven’t changed much since. Most vertebrates and insects have distinct sexes and mate via internal fertilisation of a female’s eggs, in many cases by several different males to enhance genetic diversity. However, many species also take the view that caring for their offspring during some portion of their development is a worthwhile energy trade-off for the advantage of giving them the best possible chance in life. This care generally (but not always- seahorses being perhaps the most notable exception) falls to the mother, males having usually buggered off after mating to leave mother & baby well alone, and such an approach gives a species, especially its females, a vested interest in ensuring each baby is as well prepared as possible.

This manifests itself in the process of a female choosing her partner prior to mating. Natural selection dictates that females who are drawn to characteristics in males that produce successful, survival-adept offspring are more likely to pass on their genes, along with that same attraction, so over time these traits become ‘attractive’ to all females of a species. Such traits tend to be strength-related, since strong creatures are generally better at competing for food and the like- hence the fact that most pre-mating procedures involve a fight or physical contest of some sort between males, allowing the winners to take their pick of the available females. This is also why strong, muscular men are considered attractive to women among the human race, even though such men may not always be the most suitable to father their children for various reasons (although one could counter this by saying they are more likely to produce children capable of surviving the coming zombie apocalypse). Selection is likewise to blame for the fact that sex is so enjoyable: members of a species who enjoy sex are likely to perform it more often, making them more likely to conceive and thus pass on their genes- hence the massive hit of endorphins our bodies experience both during and after sexual activity.
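
That feedback loop- a trait that helps offspring survive becoming steadily more common, dragging the preference for it along- is easier to see in a toy model than in prose. The sketch below is a deliberately crude illustration, not real biology: a single heritable trait gives its carriers an invented 10% survival advantage, and we simply watch its frequency climb (the co-inheritance of the female preference is left out for simplicity):

# Toy one-locus selection model: a trait with relative fitness 1 + s
# spreads through a population over the generations.
s = 0.10  # invented 10% survival advantage for carriers
p = 0.01  # the trait starts rare: 1% of the population

for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: trait frequency {p:.3f}")
    # Standard replicator step: carriers are over-represented in the next
    # generation in proportion to their fitness advantage.
    p = p * (1 + s) / (p * (1 + s) + (1 - p))

Even that modest advantage carries the trait from 1% of the population to near-universality within a hundred generations- which is the sense in which certain traits come to be ‘attractive’ across a whole species.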

Broadly speaking, then, we come to the ‘sex situation’ we have now: we mate by sticking penises into vaginas to allow sperm and egg to meet, and women generally tend to pick men they find ‘attractive’ because doing so has traditionally been an evolutionary advantage, just as it has been advantageous for us to find sex as a whole fun. Clearly, however, the whole situation is a good deal more complicated than this… but what is a multi-parter for otherwise?