“Lies, damn lies, and statistics”

Ours is the age of statistics; of number-crunching, of quantifying, of defining everything by what it means in terms of percentages and comparisons. Statistics crop up in every walk of life, to some extent or other, in fields as widespread as advertising and sport. Many people’s livelihoods now depend on their ability to crunch the numbers, to come up with data and patterns, and much of our society’s increasing ability to do awesome things can be traced back to someone making the numbers dance.

In fact, most of what we think of as ‘statistics’ are not really statistics at all, but merely numbers; to a pedantic mathematician, a statistic is defined as a mathematical function of a sample of data, not of the whole ‘population’ we are considering. We use statistics when it would be impractical to measure the whole population, usually because it’s too large, and we are instead trying to mathematically model that population from a small sample of it. Thus, next to no sporting ‘statistics’ are in fact true statistics, as they tend to cover the whole game; if I heard during a rugby match that “Leicester had 59% of the possession”, that is nothing more than a number; or, to use the mathematical term, a parameter. A statistic would be to say “From our sample [of one game] we can conclude that Leicester control an average of 59% of the possession when they play rugby”, but this is quite evidently not a safe conclusion, since we can’t extrapolate Leicester’s normal behaviour from a single match. It is for this reason that mathematical formulae are used to determine the uncertainty of a conclusion drawn from a statistical test, based on the size of the sample being tested relative to the overall size of the population being modelled. These uncertainty levels are often brushed under the carpet when pseudoscientists try to make dramatic, sweeping claims about something, but they are possibly the most important feature of modern statistics.
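To make the sample-versus-population distinction concrete, here’s a rough Python sketch (the possession figures are invented purely for illustration) of how a statistic comes with an uncertainty that shrinks as the sample grows:

```python
import math
import statistics

# Hypothetical possession percentages from a sample of five Leicester matches
sample = [59, 48, 53, 61, 44]

n = len(sample)
mean = statistics.mean(sample)   # the statistic: an estimate of the 'true' average
sd = statistics.stdev(sample)    # sample standard deviation
stderr = sd / math.sqrt(n)       # standard error: shrinks as the sample grows

# A rough 95% confidence interval (normal approximation for brevity; with only
# five matches a t-distribution would strictly be more appropriate)
low, high = mean - 1.96 * stderr, mean + 1.96 * stderr
print(f"Estimated average possession: {mean:.1f}% "
      f"(95% CI roughly {low:.1f}%-{high:.1f}%)")
```

A single match gives no spread at all to estimate from, which is exactly why one game’s 59% is a parameter of that game, not a statistic about Leicester in general.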

Another weapon for the poor statistician can be the mis-application of the idea of correlation. Correlation is basically what you get when you take two variables, plot them against one another on a graph, and find a nice neat line joining them, suggesting that the two are in some way related. Correlation tends to get scientists very excited, since if two things are linked it suggests that you can make one thing happen by doing another- an often advantageous state of affairs known as a causal relationship. However, whilst correlation and causation are frequently intertwined, the first lesson every statistician learns is this: correlation DOES NOT imply causation.

Imagine, for instance, you have a cold. You feel like crap, your head is spinning, you’re dehydrated and you can’t breathe through your nose. If we were, during the period before, during and after your cold, to plot a graph of your relative ability to breathe through the nose against the severity of your headache (yeah, not very scientific I know), these two variables would correlate, since they happen at the same time due to the cold. However, if I were to decide that this correlation implies causation, then I would draw the conclusion that all I need to do to give you a terrible headache is to plug your nose with tissue paper so you can’t breathe through it. In this case, I have ignored the possibility (and, as it transpires, the eventuality) of there being a third variable (the cold virus) that causes both of the other two, and this is very hard to spot without poking our head out of the numbers and looking at the real world. There are statistical techniques that enable us to do this, but they are for another time.
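The cold example can be sketched in code; here’s a toy Python simulation (all the numbers are made up) in which a hidden third variable drives both symptoms, producing a strong correlation with no causation between the symptoms themselves:

```python
import random

random.seed(0)

# Toy model: 'cold severity' is the hidden common cause of both symptoms.
# Neither symptom causes the other; they merely share a cause.
days = []
for _ in range(100):
    cold = random.uniform(0, 10)                 # severity of the cold
    blocked_nose = cold + random.uniform(-1, 1)  # worse cold -> more blocked nose
    headache = cold + random.uniform(-1, 1)      # worse cold -> worse headache
    days.append((blocked_nose, headache))

# Pearson correlation between the two symptoms, computed by hand
n = len(days)
mx = sum(x for x, _ in days) / n
my = sum(y for _, y in days) / n
cov = sum((x - mx) * (y - my) for x, y in days) / n
sx = (sum((x - mx) ** 2 for x, _ in days) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in days) / n) ** 0.5
r = cov / (sx * sy)
print(f"Correlation between blocked nose and headache: r = {r:.2f}")
# r comes out high, even though plugging your nose would do nothing to your head
```

The correlation is entirely real; the mistake would be reading a causal arrow between the two symptoms instead of up to the virus causing both.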

Whilst this example was more childish than anything, mis-extrapolation of a correlation can have deadly consequences. One example, explored in Ben Goldacre’s Bad Science, concerns beta-carotene, an antioxidant found in carrots. In 1981 an epidemiologist called Richard Peto published a meta-analysis (post for another time) of a series of scientific studies suggesting that people with high beta-carotene levels showed a reduced risk of cancer. At the time, antioxidants were considered the wonder-substance of the nutrition world, and everyone got on board with the idea that beta-carotene was awesome stuff. However, all of the studies examined were observational ones: taking a lot of different people, seeing what their beta-carotene levels were and then examining whether or not they had cancer or developed it in later life. None of the studies actually gave their subjects beta-carotene and then saw whether that affected their cancer risk, and this prompted the editor of Nature (the scientific journal in which Peto’s paper was published) to include a footnote reading:

Unwary readers (if such there are) should not take the accompanying article as a sign that the consumption of large quantities of carrots (or other dietary sources of beta-carotene) is necessarily protective against cancer.

The editor’s footnote quickly proved a well-judged one; a study conducted in Finland some time afterwards actually gave participants at high risk of lung cancer beta-carotene, and found that their risks of both getting the cancer and of death were higher than for the ‘placebo’ control group. A later study, named CARET (Carotene And Retinol Efficacy Trial), also tested groups at a high risk of lung cancer, giving half of them a mixture of beta-carotene and vitamin A and the other half placebos. The idea was to run the trial for six years and see how many illnesses/deaths each group ended up with; but after preliminary data found that those taking the antioxidant tablets were 46% more likely to die from lung cancer, it was decided that continuing the trial would be unethical and it was terminated early. Had the conclusions drawn from the Nature article been allowed to get out of hand before this research was done, it could have put thousands of people who hadn’t read the article properly at risk; and all because of the dangers of assuming correlation = causation.

This wasn’t really the gentle ramble through statistics I originally intended it to be, but there you go; stats. Next time, something a little less random. Maybe.

“The most honest three and a half minutes in television history”

OK, I know this should have been put up on Wednesday, but I wanted to get this one right. Anyway…

This video appeared on my Facebook feed a few days ago, and I have been unable to get it out of my head since. It is, I am told, the opening scene of a new HBO series (The Newsroom), and since HBO’s most famous product, Game of Thrones, is famously the most pirated TV show on earth, I hope they won’t mind me borrowing another three minute snippet too much.

OK, watched it? Good, now I can begin to get my thoughts off my chest.

This video is many things; to me, it is quite possibly one of the most poignant and beautiful, and in many ways is the best summary of greatness ever put to film. It is inspiring, it is blunt, it is great television. It is not, however, “The most honest three and a half minutes of television, EVER…” as claimed in its title; there are a lot of things I disagree with in it. For one thing, I’m not entirely sure on our protagonist’s reasons for saying ‘liberals lose’. If anything, the last century of our existence can be viewed as one long series of victories for liberal ideology; women have been given the vote, homosexuality has been decriminalised, racism has steadily been dying out, gender equality is advancing year by year and only the other day the British government legalised gay marriage. His viewpoint may have something to do with features of American politics that I’m missing, particularly his reference to the NEA (an organisation which I do not really understand), but even so. I’m basically happy with the next few seconds; I’ll agree that claiming to be the best country in the world based solely on rights and freedoms is not something that holds water in our modern, highly democratic world. Freedom of speech, information, press and so on are, to most eyes, prerequisites to any country wishing to have any claim to true greatness these days, rather than the scale against which such activities are judged. Not entirely sure why he’s putting so much emphasis on the idea of a free Australia and Belgium, but hey ho.

Now, blatant insults of intelligence directed towards the questioner aside, we then start to quote statistics- always a good foundation point to start from in any political discussion. I’ll presume all his statistics are correct, so plus points there, but I’m surprised that he apparently didn’t notice that one key area America does lead the world in is size of economy; China is still, much to its chagrin, in second place on that front. However, I will always stand up for the viewpoint that economy does not equal greatness, so I reckon his point still stands.

Next, we move on to insulting 20 year old college students, not too far off my own personal social demographic; as such, this is a generation I feel I can speak on with some confidence. This is probably the biggest problem I have with anything said during this little clip; no justification is offered as to why this group is the “WORST PERIOD GENERATION PERIOD EVER PERIOD”. Plenty of reasons for this opinion have been suggested in the past by other commentators, and these may or may not be true; but making assumptions and insults about a person based solely on their date of manufacture is hardly the most noble of activities. In any case, in the age of the internet and mass media, a lot of the world’s problems, with the younger generation in particular, get somewhat exaggerated… but no Views here, bad Ix.

And here we come to the meat of the video, the long, passionate soliloquy containing all the message and poignancy of the video with suitably beautiful backing music. But what he comes out with could still be argued back against by an equally vitriolic critic; no time frame of when America genuinely was ‘the greatest country in the world’ is ever given. Earlier, he attempted to justify non-greatness by way of statistics, but his choice of language in his ‘we sure as hell used to be great’ passage appears to hark back to the days of Revolutionary-era and Lincoln-era America, when America was led by the ‘great men’ he refers to. But if we look at these periods of time, the statistics don’t add up anywhere near as well; America didn’t become the world-dominating superpower with the stated ‘world’s greatest economy’ it is today until after making a bucketload of money from the two World Wars (America only became, in the words of then-President Calvin Coolidge, ‘the richest country in the history of the world’, during the 1920s). Back in the periods where American heroes were born, America was a relatively poor country, consisting of vast expanses of wilderness, hardline Christian motivation, an unflinching belief in democracy, and an obsession with the American spirit of ‘rugged individualism’ that never really manifested itself into any super-economy until America became able to loan everyone vast sums of money to pay off war debts. And that’s not all; he makes mention of ‘making war for moral reasons’, but of the dozens of wars America has fought only two are popularly thought of as being morally motivated. These were the American War of Independence, which was declared less for moral reasons and more because the Americans didn’t like being taxed, and the American Civil War, which ended with the southern states being legally allowed to pass the ‘Jim Crow laws’ that limited black rights until the 1960s; hardly a case of ‘passed laws, struck down laws for moral reasons’.
Basically, there is no period of history in which his justifications for why America was once ‘the greatest country in the world’ all actually stand up at once.

But this, to me, is the point of what he’s getting at; during his soliloquy, a historical period of greatness is never defined so much as a model and hope for greatness is presented. Despite all his earlier quoting of statistics and ‘evidence’, they are not what makes a country great. Money, and the power that comes with it, are not defining features of greatness, but just stuff that makes doing great things possible. The soliloquy, intentionally or not, aligns itself with the Socratic idea of justice; that a just society is one in which every person concerns themselves with doing their own, ideally suited, work, and does not try to be a busybody doing someone else’s job for them. Exactly how he arrives at this conclusion is somewhat complex; Plato’s Republic gives the full discourse. This idea is applied to political parties during the soliloquy; defining ourselves by our political stance is a self-destructive idea, meaning all our political system ever does is bicker at itself rather than just concentrating on making the country a better place. Also mentioned is the idea of ‘beating our chest’, the kind of arrogant self-importance that further prevents us from seeking to do good in this world, and the equally destructive habit of belittling intelligence, which holds us back from making the world a better, more righteous place, full of the artistic and technological breakthroughs that make our world so awesome. For, as he says so eloquently, what really makes a country great is to be right. To be just, to be fair, to mean something and above all to stand for something. To not be obsessed with ourselves, or with other people’s business; to have rightness and morality as the priority for the country as a whole. To be willing to sacrifice ourselves for the greater good, to back our promises and ideals and to care, above all else, simply for what is right.

You know what, he put it better than I ever could analyse. I’m just going to straight up quote him:

“We stood up for what was right. We fought for moral reasons, we passed laws, struck down laws for moral reasons, we waged wars on poverty not poor people. We sacrificed, we cared about our neighbours, we put our money where our mouths were and we never beat our chest. We built great big things, made ungodly technological advances, explored the universe, cured diseases and we cultivated the world’s greatest artists and the world’s greatest economy. We reached for the stars, acted like men- we aspired to intelligence, we didn’t belittle it, it didn’t make us feel inferior. We didn’t identify ourselves by who we voted for in the last election and we didn’t scare so easy.”

Maybe his words don’t quite match the history; it honestly doesn’t matter. The message of that passage embodies everything that defines greatness, ideas of morality and justice and doing good by the world. That statement is not harking back to some mythical past, but a statement of hope and ambition for the future. That is beauty embodied. That is greatness.

3500 calories per pound

This looks set to be the concluding post in this particular little series on the subject of obesity and overweightness. So, to summarise where we’ve been so far- post 1: that there are a lot of slightly chubby people in the western world, driving the statistics behind a massive obesity problem, and that even this mediocre degree of fatness can be seriously damaging to your health. Post 2: why we have spent recent history getting slightly chubby. And for today, post 3: how you can try to do your bit, especially following the Christmas excesses and the soon-broken promises of New Year, to lose some of that excess poundage.

It was Albert Einstein who first demonstrated that mass is nothing more than stored energy, and although the theory behind that precise idea doesn’t really apply to biology, the principle still stands; fat is your body’s way of storing energy. It’s also a vital body tissue, and not a 100% bad and evil thing to ingest, but if you want to lose it then the aim should simply be to ensure that your energy output, in the form of exercise, exceeds your energy input, in the form of food. The body’s response to this is to use up some of its fat stores to replace the lost energy (although this process can take up to a week to run its full course; the body is a complicated thing), meaning that the amount of fat in/on your body will gradually decrease over time. Therefore, slimming down is a process best approached from two directions: restricting what’s going in, and increasing what’s going out (doing both at the same time is far more effective than an either/or approach). I’ll deal with what’s going in first.

The most important point to make about improving one’s diet, and when considering weight loss generally, is that there are no cheats. There are no wonder pills that will shed 20lb of body fat in a week, and no super-foods or nutritional supplements that will slim you down in a matter of months. Losing weight is always going to be a messy business that will take several months at a minimum (the title of this post refers to the calorie content of body fat, meaning that to lose one pound you must expend 3500 more calories than you ingest over a given period of time), and unfortunately prevention is better than cure; but moping won’t help anyone, so let’s just gather our resolve and move on.
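The title’s arithmetic can be sketched in a few lines of Python; the 3500-calorie figure is the one quoted above, and the deficit and target numbers are purely illustrative, not a recommendation:

```python
# Rough weight-loss arithmetic using the post's 3500-calorie-per-pound figure.
CALORIES_PER_POUND = 3500

def weeks_to_lose(pounds, daily_deficit):
    """Weeks needed to lose `pounds` of fat at a steady daily calorie deficit."""
    total_deficit = pounds * CALORIES_PER_POUND
    return total_deficit / (daily_deficit * 7)

# e.g. a sustained 500-calorie daily deficit works out at about a pound a week
print(f"{weeks_to_lose(10, 500):.0f} weeks to lose 10 lb at 500 kcal/day")
```

Even on fairly aggressive assumptions the timescale comes out in months, which is exactly the “no cheats” point: the numbers simply don’t allow for a quick fix.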

There is currently a huge debate going on concerning the nation’s diet problems of amount versus content; whether people are eating too much, or just the wrong stuff. In most cases it’s probably going to be a mixture of the two, but I tend to favour the latter answer; and in any case, there’s not much I can say about the former beyond ‘eat less stuff’. I am not a good enough cook to offer any great advice on what foods you should or shouldn’t be avoiding, particularly since the consensus appears to change every fortnight, so instead I will concentrate on the one solid piece of advice that I can champion; cook your own stuff.

This is a piece of advice that many people find hard to cope with- as I said in my last post, our body doesn’t want to waste time cooking when it could be eating. When faced with the unknown product of one’s efforts in an hour’s time, and the surety of a ready meal or fast food within five minutes, the latter option and all the crap that goes into it starts to seem a lot more attractive. The trick, therefore, is to learn how to cook quickly- the best meals should either take less than 10-15 minutes of actual effort to prepare and make, or be able to be made in large amounts and last for a week or more. Or, even better, both. Skilled chefs achieve this by having their skills honed to a fine art and working at a furious rate, but then again they’re getting paid for it; for the layman, a better solution is to know the right dishes. I’m not going to include a full recipe list, but there are thousands online, and there is a skill to reading recipes; it is easy to get lost between a long list of quantities and a complicated ordering system, but reading between the lines one can often identify which recipes mean ‘chop it all up and chuck it in some water for half an hour’.

That’s a very brief touch on the issue, but now I want to move on and look at energy going out; exercise. I personally would recommend sport, particularly team sport, as the most reliably fun way to get fit and enjoy oneself on a weekend- rugby has always done me right. If you’re looking in the right place, age shouldn’t be an issue (I’ve seen a 50 year old play alongside a 19 year old student at a club rugby match near me), and neither should skill so long as you are willing to give it a decent go; but, sport’s not for everyone and can present injury issues so I’ll also look elsewhere.

The traditional form of fat-burning exercise is jogging, but that’s an idea to be taken with a large pinch of salt and caution. Regular joggers will lose weight it’s true, but jogging places an awful lot of stress on one’s joints (swimming, cycling and rowing are all good forms of ‘low-impact exercise’ that avoid this issue), and suffers the crowning flaw of being boring as hell. To me, anyway- it takes up a good chunk of time, during which one’s mind is so filled with the thump of footfalls and aching limbs that one is forced to endure the experience rather than enjoy it. I’ll put up with that for strength exercises, but not for weight loss when two far better techniques present themselves; intensity sessions and walking.

Intensity sessions is just a posh name for doing very, very tiring exercise for a short period of time; they’re great for burning fat & building fitness, but I’ll warn you now that they are not pleasant. As the name suggests, these involve very high-intensity exercise (as a general rule, you should not be able to talk throughout high-intensity work) performed either continuously or next to continuously for relatively short periods of time- an 8 minute session a few times a week should be plenty. This exercise can take many forms; shuttle runs (sprinting back and forth as fast as possible between two marked points or lines), suicides (doing shuttle runs between one ‘base’ line and a number of different lines at different distances from the base, such that one’s runs change in length after each set) and tabata sets (picking an easily repeatable exercise, such as squats, performing them as fast as possible for 20 seconds, followed by 10 seconds of rest, then another 20 seconds of exercise, and so on for 4-8 minutes) are just three examples. Effective though these are, it’s difficult to find an area of empty space to perform them without getting awkward looks and the odd spot of abuse from passers-by or neighbours, so they may not be ideal for many people (tabata sets or other exercises such as press-ups are an exception, and can generally be done in a bedroom; Mark Lauren’s excellent ‘You Are Your Own Gym’ is a great place to start for anyone interested in pursuing this route to lose weight & build muscle). This leaves us with one more option; walking.
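The tabata timing described above can be sketched as a simple schedule generator- a toy Python illustration of the 20-seconds-on, 10-seconds-off pattern, not training advice:

```python
# Sketch of tabata timing: 20s of work, 10s of rest, repeated to fill a session.
def tabata_schedule(total_minutes=4, work=20, rest=10):
    """Return a list of (phase, seconds) tuples filling the session."""
    schedule = []
    elapsed = 0
    total = total_minutes * 60
    while elapsed + work + rest <= total:
        schedule.append(("work", work))
        schedule.append(("rest", rest))
        elapsed += work + rest
    return schedule

sets = tabata_schedule(4)
print(f"A 4-minute session = {len(sets) // 2} rounds of 20s on / 10s off")
```

Four minutes works out at eight rounds, which is why the classic tabata protocol is usually described as “8 × 20/10”.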

To my mind, if everyone ate properly and walked 10,000 steps per day, the scare stats behind the media’s obesity fixation would disappear within a matter of months. 10,000 steps may seem a lot, and for many holding office jobs it may seem impossible, but walking is a wonderful form of exercise since it allows you to lose yourself in thought or music, whichever takes your fancy. Even if you don’t have time for a separate walk, with a pedometer in hand (they are built into many modern iPods, and free pedometer apps are available for both iPhone and Android) and a target in mind (10k is the standard), after a couple of weeks it’s not unusual to find yourself subtly changing the tiny aspects of your day (stairs instead of the lift, that sort of thing) to try and hit your target; and the results will follow. As car ownership, an office economy and lack of free time have all grown in the last few decades, we as a nation do not walk as much as we used to. It’s high time that changed.

The Slightly Chubby Brigade

As the news will tell you at every single available opportunity, we are living through an obesity crisis. Across the western world (the USA being the worst affected and Britain coming in second) the average national BMI is increasing, and the number of obese and overweight people, and children especially, looks to be soaring across the board. Only the other day I saw a statistic saying that nearly a third of children are now leaving primary school (i.e. one third of eleven-year-olds) overweight, and such solemn numbers frequently make headlines.

This is a huge issue, encompassing several different issues and topics that I will attempt to consider over my next few posts (yeah, ‘nother multi-parter coming up), but for many of us it seems hideously exaggerated. I mean yes, we’ve all seen the kind of super-flabby people, the kind the news footage always cuts to when we hear some obesity health scare, the kind who are wider than they are tall and need a mobility scooter just to get around most of the time. We look at these pictures and we tut, and we might consider our own shape- but we’re basically fine, aren’t we. Sure, there’s a bit of a belly showing, but that’s normal- a good energy store and piece of insulation, in fact, and we would like to have a life beyond the weight-obsessed calorie counters that hardcore slimmers all seem to be. We don’t need to worry, do we?

Well, according to the numbers, actually we do. The average height of a Briton… actually, if you’re stumbling across this at home and you consider yourself normal, go and weigh yourself and, if you can, measure your height as well. Write those numbers down, and now continue reading. The average height of a Briton at the moment is 1.75m, or around 5’9″ in old money, and we might consider a normal weight for that height to be around 80 kilos, or 170 pounds. That might seem normal enough; a bit of a paunch, but able to get around and walk, and certainly no one would call you fat. Except perhaps your doctor, because according to the BMI chart I’ve got pulled up a 5 foot 9, 80 kilo human is deemed clinically overweight. Not by much, but you’d still weigh more than is healthy- in fact, one stat I heard a while ago puts the average Briton at this BMI. Try it with your measurements; BMI charts are freely available over the web.
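If you want to try it with your own measurements, the sum itself is simple enough- BMI is just weight in kilos divided by height in metres squared. A quick Python sketch, using the standard clinical bands that chart is presumably based on:

```python
# BMI = weight (kg) / height (m)^2, banded using the standard clinical cut-offs.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def category(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "healthy"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(80, 1.75)  # the 'average Briton' figures from above
print(f"BMI {b:.1f}: {category(b)}")
```

The 5’9″, 80-kilo figure lands at a BMI of just over 26- only a little past the cut-off of 25, but clinically overweight nonetheless.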

This, to me, is one of the real underlying causes of ‘the obesity epidemic’- a fundamental misunderstanding of what ‘overweight’ consists of. Whenever our hideously awful everyone-dead-from-McDonalds-overdose etc. etc. diet is brought up on the news, it is always annotated by pictures of hanging bellies and bouncing flab, the kind of bodies that make one almost physically sick to look at. But, whilst these people certainly exist, there are not enough of them to account for the statistics; morbid obesity is a very serious matter for those it affects, but it is too rare to be what’s dragging up the national averages.

No, the real cause of all the chilling statistics we hear on the news is all the people who don’t look to be overweight. The kind whose diet isn’t appalling (no 24/7 McDonaldses), who are quite capable of exercise when it suits them, and who might take a rough glance at the dietary information on the stuff they buy in the supermarket. But these people are nonetheless hovering on the overweight borderline, pulling up the national average, despite the fact that they don’t consider anything to be wrong; in fact, some women who are, according to the evil numbers, overweight may consider it almost dutiful not to become obsessed over shedding every pound, and to maintain their curves. Having a bit of excess weight is, after all, still better than being underweight and anorexic, and the body-image pressures some young women are coming under are just as much of an issue as national obesity. Even those who don’t hold such opinions often feel that they don’t have any weight issues and that there’s surely no significant health risk associated with a ‘bit of meat on your bones’ (it’s actually muscle, rather than fat, that technically forms meat, but ho hum); as such, they have absolutely no motivation to get their weight down, as they don’t think they need to.

I won’t waste much of my time on all the reasons for this statement, but unfortunately even this slight degree of overweight-ness will significantly increase your risk of major health problems somewhere down the line, particularly that of heart disease (which is going through the roof at the moment); diabetes isn’t likely to be a risk for the overweight unless they’re really overdoing things, but that’s also a potential, and very serious, health hazard. The trouble is that many of us find it hard to make this connection if we basically feel healthy. Despite what the doctor says and no matter how much we trust them, if we are capable of going for a nice walk and generally getting about without getting out of breath or feeling bad then we probably feel justified in thinking of ourselves as healthy. Our heart doesn’t seem about to give out, so why worry about it.

The thing to remember is that the heart is just a muscle, so if it isn’t stressed it will degrade just like any other. You know those triceps that haven’t done a press up in five years? Feel how small and weak they are? Yeah, that kind of thing can quite easily happen to the muscles that are responsible for keeping you alive. Your heart might be pumping all day long and be a different type of muscle, so the process will be slower, but give it twenty years and you might start to see the effects.

But anyway, I’m not here to lecture you about your health; that’s far too depressing and dull for my liking- the only point I was trying to make is that many of the accidental contributors to ‘the obesity epidemic’ are probably unaware that their health is in any way a problem, and not really through fault of their own. So whose fault is it then? Well, that one can wait until next time…

…but some are more equal than others

Seemingly the key default belief of any modern, respectable government and, indeed, a well brought-up child of the modern age, is that of egalitarianism- that all men are born equal. Numerous documents, from the US Declaration of Independence to the UN’s Universal Declaration of Human Rights, have proclaimed this as a ‘self-evident truth’, and anyone who still blatantly clings to the idea that some people are born ‘better’ than others by virtue of their family having more money is dubbed out of touch at best, and (bizarrely) a Nazi at worst. And this might be considered surprising given the extent to which we still set store by a person’s rank or status.

I mean, think about it. A child from a well-respected, middle-class family with two professional parents will invariably get more opportunities in life, and will frequently be considered more ‘trustworthy’, than a kid born into a broken home with a mother on benefits and a father in jail, particularly if his accent (especially) or skin colour (possibly to a slightly lesser extent in Europe than in the US) gives this away. Someone with an expensive, tailored suit stands a better chance at a job interview than a candidate with an old, fading jacket and worn knees on his trousers that he has never been rich enough to replace, and I haven’t even started on the wage and job-availability gap between men and women, despite the fact that there are nowadays more female university graduates than male ones. You get the general idea. We might think that all are born equal, but that doesn’t mean we treat them like that.

Some have said that this, particularly in the world of work, is to do with the background and age of the people concerned. Particularly in large, old and incredibly valuable corporate enterprises such as banks, the average age of senior staff and shareholders tends to be on the grey end of things, the majority of them are male and many of them will have had the top-quality private education that allowed them to get there, so the argument put forward is that these men were brought up surrounded by this sort of ‘public schoolers are fantastic and everyone else is a pleb’ mentality. And it is without doubt true that very few companies have an average board-member age below 50, and many are above 65; in fact the average age of a CEO in the UK has recently gone up from a decade-long value of 51 to nearly 53. However, the evidence suggests that the inclusion of younger board members and CEOs generally benefits a company by providing a fresher understanding of the modern world- data that could only be gathered because there are a large number of young, high-ranking businesspeople to evaluate. And anyway, in most job interviews it’s less likely to be the board asking the questions than a recruiting officer of medium business experience- this may be an issue, but I don’t think it’s the key thing here.

It could well be possible that the true answer is that there is no cause at all, and the whole business is nothing more than a statistical blip. In Freakonomics, an analysis was done to find the twenty ‘blackest’ and ‘whitest’ boys’ names in the US (I seem to remember DeShawn was the ‘blackest’ and Jake the ‘whitest’), and then to compare the job prospects of people with names on either of those two lists. The results suggested that people with one of the ‘white’ names did better in the job market than those with ‘black’ names, perhaps suggesting that interviewers were being, subconsciously or not, racist. But a statistical analysis revealed this not, in fact, to be the case; we must remember that black Americans are, on average, less well off than their white countrymen, meaning they are more likely to go to a dodgy school, have problems at home or hang around with the wrong friends. Therefore, black people do worse, on average, in the job market because they are more likely to be less well-qualified than white equivalents, making them, from a purely analytical standpoint, often weaker candidates. This meant that Jake was more likely to get a job than DeShawn simply because Jake was more likely to be a better-educated guy, so any racism on the part of job interviewers is not prevalent enough to be statistically significant. To some extent, we may be looking at the same thing here- people who turn up to an interview in cheap or hand-me-down clothes are likely to have come from a poorer background than someone with a tailored Armani suit, and are therefore likely to have had a lower standard of education, making them less attractive candidates to an interviewing panel. Similarly, women tend to drop their careers earlier in life if they want to start a family, since the traditional family model puts the man as chief breadwinner, meaning they are less likely to advance up the ladder and earn the high wages that could even out the difference in male/female pay.

But statistics cannot quite cover everything- to use another slightly tangential bit of research, a study done some years ago found that teachers gave higher marks to essays written in neat handwriting than they did to identical essays written in a messier hand. The neat handwriting suggested a diligent approach to learning and a good education in the writer’s formative years, making the teacher think the child was cleverer, and thus deserving of more marks, than one with a scruffier, less orderly hand. Once again, we can draw parallels to our two guys in their different suits. Mr Faded may have good qualifications and present himself well, but his attire suggests to his interviewers that he is from a poorer background. We have a subconscious understanding of the link between poorer backgrounds and the increased risk of poor education and other compromising factors, and so the interviewers unconsciously link our man to the idea that he has been less well educated than Mr Armani, even if the evidence presented before them suggests otherwise. They are not trying to be prejudiced; they just think the other guy looks more likely to be as good as his paperwork suggests. Some of it isn’t even linked to such logical connections; research suggests that interviewers, just like people in everyday life, are drawn to those they feel are similar to them, and they might also make the subconscious link that ‘my wife stays at home and looks after the kids, there aren’t that many women in the office, so what’s this one doing here?’- again, not deliberate discrimination, but it happens.

In many ways this is an unfortunate state of affairs, and one that we should attempt to remedy in everyday life whenever and wherever we can. But a lot of the stuff that to a casual observer might look prejudiced, might be violating our egalitarian creed, we do without thinking, letting our brains make connections that logic says they should not. The trick is not to ‘not judge a book by its cover’, but not to let your brain register that there’s a cover at all.

Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest industry on the planet, worth over 600 billion dollars in sales every year and acting as the force behind the cutting edge of science that continues to push medicine onwards as a field- and while we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. In fact the pharmaceutical industry is in quite an unusual position in this regard, forming the only part of the medicinal public service, and indeed of any major public service, that is privatised the world over.

The reason for this is quite simply one of practicality; the sheer amount of startup capital required to develop even one new drug, let alone form a public service around this R&D, runs to hundreds of millions of dollars- something no government would be willing to set aside for so small an immediate gain. All modern companies in the ‘big pharma’ demographic were formed many decades ago on the basis of a surprise cheap discovery or suchlike, and are now so big that they are the only people capable of fronting such a large initial investment. There are a few organisations (the National Institutes of Health, the Royal Society, universities) who conduct such research away from the private sector, but they are small in number and are also very old institutions.

Many people, in a slightly different field, have voiced the opinion that people whose primary concern is profit are those we should least be putting in charge of our healthcare and wellbeing (although I’m not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. However, that is not to say that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the ‘minor’ drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of ‘solving’ life-threatening illnesses, a potentially futile task anyway, this drug would never have been developed and I would be left even more at the mercy of my fragile digestive system. In the western world, motivated-by-profit makes a lot of sense when trying to make life just that bit more comfortable. Oh, and they also make the drugs that, y’know, save your life every time you’re in hospital.

Now, normally at this point in any ‘balanced argument/opinion piece’ thing on this blog, I try to come up with another point to keep each side of the argument at a roughly equal 500 words. However, this time I’m going to break that rule and jump into the reverse argument straight away. Why? Because I can genuinely think of no more good stuff to say about big pharma.

If I may just digress a little; in the UK & USA a patent lasts for 20 years, but since clinical trials eat up much of that time, a new drug typically enjoys only around 10 years of protected sales- these little capsules can be very valuable things, and it wouldn’t do to let people hang onto the sole rights to make them for ages. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to prevent it from having to import from Afghanistan or whatever), so these costs are kept relatively low by market forces. This therefore means that during their grace period, drugs companies will do absolutely everything they can to extract cash from their product; when the antihistamine loratadine (another drug I use relatively regularly, for hay fever and other allergies) was passing through the last two years of its patent, its market price was quadrupled by the company making it; they had been trying to get the market hooked on using it before jacking up the prices in order to wring out as much cash as possible. This behaviour is not untypical for a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the snuffles.

So far, so much normal corporate behaviour. Reaching this point, we must now turn to some practices of the big pharma industry that would make Rupert Murdoch think twice. Drugs companies, for example, have a reputation for setting up price-fixing networks, many of which have been worth several hundred million dollars. One, featuring what were technically food-supplement businesses, subsidiaries of the pharmaceutical industry, later set the world record for the largest fine levied in criminal history- a record that still stands. And all this in a business where the cost of physically producing the drugs themselves rarely exceeds a couple of pence per capsule, hundreds of times less than their asking price.

“Oh, but they need to make heavy profits because of the cost of R&D to make all their new drugs”. Good point, well made, and it would even be a valid one if the numbers behind it stacked up. In the USA, the National Institutes of Health last year had a total budget of $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. This might seem at first glance like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in 2006, and accounting for inflation (and the fact that pharmaceutical profits tend to stay high despite the current economic situation affecting other industries) we can approximate that only around 10% of company turnover is, on average, spent on R&D. Even accounting for manufacturing costs, salaries and such, a substantial chunk of that turnover goes into profit, making the pharmaceutical industry the most profitable on the planet.
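The arithmetic behind that ‘around 10%’ figure is quick to reproduce, using only the numbers quoted above (the rounding, and the disregard for inflation, are mine):

```python
# Back-of-envelope check of the R&D share of turnover, using the figures
# from the text: $32bn collective R&D spend against $289bn revenue (2006).
rnd_spend = 32e9
revenue = 289e9
share = rnd_spend / revenue
print(f"R&D share of turnover: {share:.1%}")  # a little over 11%
```

Adjusting for inflation, as in the text, nudges the effective share down towards the 10% mark.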

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.

Numbers

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, before just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than for the sake of its own existence.

This is not to say that mathematics is not a useful field, far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy), and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (ie square rooting the square of a number will yield the original number). However, since any real number squared comes out positive, no real number can square to make a negative one, and hence there is no such thing as the square root of a negative number such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers, but maths decided that it didn’t want to be unable to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
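For anyone who wants to play with i rather than take it on trust, most programming languages build it straight in; a minimal sketch in Python, which writes the imaginary unit as 1j:

```python
import cmath  # the complex-number counterpart of the math module

i = 1j                 # Python's notation for the imaginary unit
print(i ** 2)          # squaring i gives -1, exactly as defined
print(cmath.sqrt(-1))  # the 'impossible' square root of -1 is simply i
z = 3 + 4j             # a general complex number: real part 3, imaginary part 4
print(abs(z))          # its distance from zero on the complex plane: 5.0
```

The last line hints at the ‘totally arbitrary’ geometry mentioned above: imaginary numbers sit on an axis at right angles to the ordinary number line, turning the line into a plane.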

Admittedly, i has turned out to be useful. When considering electromagnetic forces, quantum physicists generally assign the electrical and magnetic components real and imaginary quantities in order to identify said different components, but its main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s arsenal, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world- Fermat’s last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same was true for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of this fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, the best part of a decade to complete. His proof involved showing that the terms involved in the theorem could be expressed in the form of an incredibly weird equation that doesn’t exist in the real world, and that all equations of this type had a counterpart equation of an equally irrelevant type. However, since the ‘Fermat equation’ would be too weird to exist in the other format, no solution to it could logically exist.
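The statement itself is easy to probe by brute force, even though (as Wiles’s years of work show) actually proving it is another matter entirely; a toy search for the cube case, purely as an illustration:

```python
# Naive search for whole-number solutions to a^3 + b^3 = c^3, the first
# case of Fermat's last theorem. We check every pair up to a small bound;
# the theorem guarantees the list always comes back empty.
LIMIT = 200
cubes = {c ** 3 for c in range(1, 2 * LIMIT)}  # covers every possible c
solutions = [(a, b) for a in range(1, LIMIT)
             for b in range(a, LIMIT)
             if a ** 3 + b ** 3 in cubes]
print(solutions)  # [] -- no counterexamples in this range
```

Of course, no amount of checking finite ranges constitutes a proof, which is rather the point: settling the infinite case is what took three and a half centuries.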

To a mathematician, this was the holy grail; not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise made no contribution to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things; among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where major prizes are won at a younger age than in any other branch of STEM, apparently full of those likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

The Scrum Problem

My apologies for deviating back to a personal favourite- I try to keep rugby out of these posts on the grounds that, in real life, it tends to make things kind of exclusive for people who aren’t into it, but I thought that I might be allowed one small deviation from this guideline. Today, I wish to talk about probably the single most contentious issue in the game today, one that divides, confuses and angers just about everyone involved in it: the scrum.

The scrum has always been a defining feature of the game of rugby- perhaps a historic callback to the old ‘scrums’ of viciously fighting players that formed the origins of the game of football- and it has proved contentious since the very first international ever played. England and Scotland were playing one another and, at the time, the two nations played under different rules, so it was agreed that they would play under English rules for the first half and Scottish ones in the second. The game was around an hour old, tied at 0-0 (yeah, it was a bit rubbish in those days), when the Scots won a scrum on the English five metre line. Rather than feed the ball into the scrum, the Scots instead began to push. The unsuspecting English forwards were caught off guard and forced back over their own line, whereupon the Scottish scrum-half grounded the ball. Whilst totally illegal under English rules, and thus generating a barrage of complaints, the Scots had won fair and square, starting off a bitter rivalry against ‘the Auld Enemy’ that continues to this day.

The scrum has developed a lot since those days (everyone now plays under the same rules, for one thing), but perhaps the most important development for the modern game came in the 1990s, specifically within the New Zealand team of the time. The All Blacks were a talented side, but their major disadvantage came up front, for whilst their front row players were skilled, Sean Fitzpatrick and company were not the biggest or heaviest front row around. Whilst not a disadvantage in open play, at scrum time it was feared that they would crumble under their opponents’ superior weight, so they had to find a way round that. In the end, they resorted to a bit of trickery. The structure adopted at scrum time by most sides of the age was to come together gently, get settled, then let the scrum half put the ball in and start to push, twist, and cheat in all the million ways discovered by front rowers over the years. However, what the Kiwis decided to do was hit the engagement hard, smashing their opponents back to get a good body position early. Then, the scrum half would feed the ball in almost immediately, allowing them to start pushing straight away and keep their opponents on the back foot, not allowing them time to get settled and start to push back. It worked like a charm, aside from one small drawback. Everyone else started to copy them.

Even with trained wrestlers, there is only so much damage that sixteen men can do to one another when simply trying to push one another back. However, when not much below a tonne of meat slams as hard as it can into another tonne smashing back the other way, the forces involved in the impact are truly huge, and suddenly the human spine doesn’t seem all that strong. Not only that, but with the slightest misalignment of the impact, that amount of force means there is simply no way for it all to settle down nicely. Combine this with the immense muscle-building and weight-gain programmes now demanded by the modern, professional game, and the attention to detail of modern coaches seeking that extra edge in the impact, and we reach the inescapable and chaotic conclusion that is the modern scrum. In the last World Cup in 2011, in matches between top-tier countries 50 scrums out of every 100 collapsed, and there were 31 resets and 41 free-kicks or penalties per 100. The stats were virtually the same during this year’s Six Nations, in which nearly half of all scrums resulted in the ball not coming back, and one match (Ireland v Scotland) spent over a quarter of its playing time scrummaging, resetting or collapsing.

This is despite the fact that the face of the game has changed very much against the set piece in the modern era. In the early 1970s, analysis suggests that the average number of set-pieces (scrums and lineouts) in a match was nearly triple its current value (mid-thirties), whilst the number of rucks/mauls has gone up sixfold since then. Even since the game first turned pro in the mid-nineties, the number of set pieces has dropped by a third and the number of successful breakdowns tripled. The amount of time the ball spends in play has also risen hugely, and some are even arguing that the scrum as we know it is under threat. Indeed, in last year’s Six Nations the scrum was the deciding factor in only one game (England v Ireland), and as Paul Wallace astutely pointed out at the time, Ireland getting pushed about for the entire match was their reward for playing by the rules and not sending a front rower off ‘injured’.

Then there are the myriad intrigues and techniques that have led to the scrum becoming the unstable affair it is today. Many argue that modern skintight shirts don’t allow players to grip properly, forcing them to either slip or grab hold of easier and possibly illegal positions that make the scrum decidedly wobbly. Others blame foot positioning, arguing that the modern way of setting up one’s feet, in which the hooker demands the majority of the space, forces the backs of his props to angle inwards, making the whole business more dangerous and less stable. Some blame poor refereeing for letting scrummagers get away with things that are now becoming dangerous, destabilising habits among front rowers, whilst others counter this by pointing to the myriad of confusing signals a referee has to try and keep track of at scrum time- two offside lines, straightness of feed, hooker’s feet up early, incorrect back row binding, illegal front row binding, whether his line judge is signalling him and whether anyone’s just broken their neck. This is clearly a mighty confusing situation, and one I’d love to be able to start suggesting solutions for- but I think I’ll leave that until Saturday…

Willkommen, 2012…

Hello and happy New Year to whoever may or may not be reading this- for those who are not, please consult reality and try again. I was considering taking this opportunity to look forward and pontificate on what the new year may bring, but I eventually decided that since I don’t have a sodding clue what interesting stuff’s going to happen (bar the Olympics, which everyone knows about already), I would instead give you a list of random facts- some new stuff to confuse people with in 2012 conversations*. Read and enjoy:

The only sound Seahorses make is a small clicking or popping sound during feeding or courtship

Krispy Kreme make five million doughnuts a day

There were no red-coloured M&Ms from 1976 to 1987

In Belgium, there is a museum that is just for strawberries

Tomatoes were once referred to as “love apples.” This is because there was a superstition that people would fall in love by eating them

Over 90% of diseases are caused or complicated by stress

An average person uses the toilet 2500 times a year

Approximately 97.35618329% of all statistics are made up

Michael Jordan makes more money from Nike annually than all of the Nike factory workers in Malaysia combined

The Pentagon estimates its computer network is hacked about 250,000 times annually

Marilyn Monroe had six toes

On a Canadian two dollar bill, the flag flying over the Parliament building is an American flag

Most heart attacks occur between the hours of 8 and 9 am

There is a town in Norway called “Hell”

The electric chair was invented by a dentist

The word “nerd” was first coined by Dr. Seuss in the book “If I Ran the Zoo.”

For every human in the world there are one million ants

After being picked an orange cannot ripen

There are more pigs than humans in Denmark

Hockey pucks were originally made from frozen cow dung

Karate actually originated in India, but was developed further in China

A group of tigers is called a streak

The average ear grows 0.01 inches in length every year

The same careers advisor dismissed the ambitions of both Mark Knopfler and Alan Shearer (to be a musician and footballer respectively), saying to Knopfler “you’ll never get anywhere playing that kind of stuff”. Shearer broke the world record in transfer fees when he signed for Newcastle, and Knopfler went on to make over £50 million and play at Live Aid

The most exclusive aftershave in the world is named after a Welsh winger and rugby captain

A bank in Paraguay was once held up by two sets of bank robbers simultaneously

A South Korean woman failed her driving test 959 times, and when she finally passed was given a car worth nearly \$17,000 by Hyundai, as well as an advertising deal

The biggest defeat in a game of football is held by a team from Madagascar, who lost 149-0 in a match in October 2002

In a 2008 council election in North Dakota, absolutely nobody voted, not even the candidates

A news reporter in Swaziland once spent a month delivering reports from a broom cupboard whilst pretending to be in Baghdad

Elvis Presley once came third in an Elvis Presley impersonator contest in Tennessee

A South African effort to promote condom usage, that included the distribution of a free government condom, ended in failure when it was noticed that the condoms had been stapled to the packaging, puncturing two holes in each of them in the process

*I make no claim to have sourced any of these- the first half come from a friend who used to post these things on Facebook, and the second half are from one of my favourite books- The Ultimate Book of Heroic Failures by Stephen Pile. The ones I have done are just the easiest to paraphrase from the first two chapters- if you want a good source of laughs for the upcoming year, buy yourself a copy and enjoy the rest