The Myth of Popularity

WARNING: Everything I say from here on is purely speculative, based on a rough approximation of one presented view of how a part of our world works, plus some vaguely related stuff I happen to know. It is very likely to differ from your own personal view of things, so please don't get angry with me if it does.

Bad TV and cinema are a great source of inspiration; not because there's much in them that's interesting, but because there's just so much of them that, even without watching any, it's possible to pick up enough information to diagnose trends, which are generally interesting to analyse. In this case, I refer to the picture of American schools so often portrayed by iteration after iteration of generic teenage romance/romcom/'drama', and more specifically to the people in it.

One of the classic plot lines of these types of things involves the 'hopelessly lonely/unpopular nerd who has a crush on Miss Popular de Cheerleader and must prove himself by [insert totally ridiculous idea]'. Needless to say, these plot lines are more unintentionally hilarious and excruciating than anything else, but they work because they play on the one trope that so many of us are familiar with: that of the overbearing, idiotic, horrible people from the 'popular' social circle. Even for those of us not raised within a sitcom, it's a situation repeated in thousands of schools across the world: the popular kids are the arseholes at the top with inexplicable access to all the gadgets and girls, and the more normal, nice people sit lower down the social ladder.

The image persists in our consciousness long after we leave school, for a whole host of reasons; partly because major personal events during our formative years tend to have a greater impact on our psyche than those occurring later in life, but also because it is often our first major encounter with the harsh unfairness life is capable of throwing at us. The whole situation seems totally unfair and unjust: why should all these horrible people be the popular ones, and get all the social benefits that go with it? Why not me, a basically nice, humble person without a Ralph Lauren jacket or an iPad 3, but with a genuine personality? Why should they have all the luck?

However, upon analysing the issue, this object of hate begins to break down; not because the 'popular kids' are any less hateful, but because they are not genuinely popular. If we define popularity as a measure of how many people like you, and how much (because what the hell else is it?), then it becomes a lot easier to approach the question from a numerical, mathematical perspective. Those at the perceived top end of the social spectrum generally form themselves into a clique of superiority, where they all like one another (presumably- I've never been in that kind of group to find out), but their arrogance means that they receive a certain amount of dislike, and even some downright resentment, from the rest of their immediate social world. By contrast, members of other social groups (nerds, academics [often not the same people], those sportsmen not in the 'popular' sphere, and the myriad groups of undefinable 'normies' who just splinter off into their own little cliques) tend to be liked by members of their chosen group and treated with neutrality, or at most minor positive or negative feeling, by everyone else, leaving them with an overall 'popularity score', from an approximated mathematical point of view, roughly equal to or even greater than that of the 'popular' kids. Thus, the image of popularity is really something of a myth, as these people are not, technically speaking, any more popular than anyone else.
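To put a bit of flesh on that 'approximated mathematical point of view' (a toy formulation of my own, with made-up illustrative numbers, rather than anything rigorous):

$$P_i = \sum_{j \neq i} s_{ij}, \qquad s_{ij} \in [-1, 1],$$

where $s_{ij}$ is how much person $j$ likes person $i$: $+1$ for adoration, $0$ for indifference, $-1$ for loathing. A ten-strong clique whose members all rate each other at $+1$ but pick up $-0.2$ from the other ninety people in the year scores $9 - 18 = -9$ apiece; someone rated $+1$ by the other nine members of their own little group and ignored by everyone else scores $+9$. On this count, the 'normie' comfortably out-polls the cheerleader.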

So, then, how has this image come to present itself as one of popularity, of being the top of the social spectrum? Why are these guys on top, seemingly above group after group of normal, friendly people with a roughly level playing field when it comes to social standing?

If you were to ask George Orwell this question, he would present you with a very compelling argument concerning the tendency of any social structure to form a 'high' class of people (shortly after asking you how you had managed to communicate with him beyond the grave). He and other social commentators have frequently pointed out that a social system where all are genuinely treated as equals is unstable without some 'higher class' of people to look up to- even if only in hatred. It is humanity's natural tendency to try to better itself, to fight its way to the top of the pile, so if the 'high' group disappears temporarily it will quickly be replaced; hence the disparity between rich and poor even in a country such as the USA, founded on the principle that 'all men are created equal'. The same principle applies to social situations: if the 'popular' kids were to fall from grace, some other group would likely rise to fill the power vacuum at the top of the social spectrum. And, as we all know, power and influence are potent corrupting forces, so this position would be likely to transform the new 'popular' group into arrogant b*stards too, removing the niceness they had when they were just normal guys. This effect is also in evidence in the fact that many of the previously hateful people at the top of the spectrum become very normal and friendly when spoken to one-on-one, outside of their social group (in my experience, anyway; this does not apply to everyone in such groups).

However, another explanation is perhaps more believable: that arrogance is a cause rather than a symptom. By acting as though they are better than the rest of the world, the 'popular' kids get the rest of the world to subconsciously accept that, much though they are hated, they sit at the top of the social ladder, purely because they said so. And perhaps this idea is more comforting, because it takes us back to where we started: nobody is actually more popular than anyone else, and it doesn't really matter in the grand scheme of things. Regardless of where your group ranks on the social scale, if it's yours and you get along with the people in it, then it doesn't really matter what everyone else thinks, so long as you can get on, be happy, and enjoy yourself.

Footnote: I get most of these ideas from what the media paints as the norm in American schools and from what friends have told me, since I've been lucky enough that the social hierarchies I encountered in my own school experience basically left one another alone. Judging by the horror stories other people tell me, I presume that was just my school. Plus, even if it's total horseshit, it's enough of a trope that I can write a post about it.

Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much store people are able to set by a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we compare them to what we think of the brand and the label, what reputation the product has and what it is particularly good at, which we think best suits our social standing, and how others will judge our use of it. And a good thing too, from many companies' perspective; otherwise the amount of business they do would be slashed. There are many companies whose success can be put down almost entirely to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, to typecast Apple as a brand-built company is harsh; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to become a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that- I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores, anyway) that once the rest of the market gets hold of an idea, Apple's version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely because iTunes redefined the music industry before everyone else got there and remains competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products that do a few things really, really well and are basically capable of everything else. They should have a small market share, perhaps among the creative or the indie crowd, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them; a nice company with a good history that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the most valuable private-sector company in the world: bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company, worth more than every single one (excluding state-owned firms such as Saudi Aramco, estimated to be worth around 3 trillion dollars on the strength of Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first- the Apple II was the first real personal computer and provided the genes for Windows-powered PC’s to take the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first smartphone (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these products are usually rubbish and make the Apple version shine by comparison. This also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scance months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off it and have moved on to something else; innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From their earliest infancy Apple fans have been self-dubbed as the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that thousands of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and the longevity of its delivery. This is an image that has been present in their advertising campaign from its earliest infancy, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

The Red Flower

Fire is, without a doubt, humanity's oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you, dear readers, would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, along with a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (i.e. one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most of the ones we are familiar with tend to be oxidation reactions: oxygen from the air bonding chemically with the substance in question (although there are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode). The emission of heat causes a relatively gentle warming of the immediate area, but close to the site of the reaction itself a very large amount of heat is emitted in a small space. This excites the molecules of air close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so, in the form of heat and light (among other things). These photons cause the air to glow brightly, creating the visible flame we can see; the large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame carries a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning.

Most often the glowing flame has the characteristic ruddy orange colour of fire, but some reactions, such as the potassium one mentioned above, emit radiation at other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), giving flames of different colours, whilst a white-hot area of a fire is so hot that the molecules don't care what frequency the photons they're emitting are at, so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickering of a flame is generally caused by the excited hot air moving about rapidly until it gets far enough away from the source of heat to cool down and stop glowing; this process happens continuously with hundreds of packets of hot air, causing the flame to flicker back and forth.
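For the colour-temperature side of this, the relevant piece of standard physics (my addition; the post doesn't spell it out) is Wien's displacement law for a hot, roughly blackbody-like emitter:

$$\lambda_{\text{peak}} = \frac{b}{T}, \qquad b \approx 2.9 \times 10^{-3}\ \text{m K},$$

so a sooty flame region at around 1500 K peaks at roughly 1.9 μm, well into the infrared, and the visible tail we actually see is skewed towards red and orange; push the temperature up far enough and the emission spreads right across the visible band, which is the 'white-hot' case described above.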

However, we must remember that fires do not just give out heat; they must take some in too. This is down to the way the chemical reaction generating the heat works: the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you're trying to burn up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different 'ignition temperatures' for the materials involved. Paper, for example, famously has an ignition temperature of 451 degrees Fahrenheit (which means, incidentally, that you can cook with it if you're sufficiently careful and not in a hurry to eat), whilst wood's is only a little higher at around 300 degrees centigrade, both of which are below the temperature of a spark or flame. However, neither fuel will ignite if it is wet, since water is not a fuel that can be burnt; this means it often takes a while to dry wood out sufficiently for it to catch, and big, solid blocks of wood take quite a bit of energy to heat up.
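Since that sentence mixes its units, a quick conversion (plain arithmetic, nothing more) shows the two figures really are close:

$$T_C = (T_F - 32) \times \frac{5}{9}, \qquad (451 - 32) \times \frac{5}{9} \approx 233\,^{\circ}\text{C},$$

so paper's famous 451°F is about 233°C, and wood's roughly 300°C is indeed only a little higher.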

From all of this we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff for it to react with, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the others suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since that will snuff out its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing: matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Heat is also the key to all of our classical methods of putting a fire out. Covering a fire with cold water cuts it off from both heat and oxygen; and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (this is why a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are made of hot air, remember, to disappear, but the embers to glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself have once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

What the @*$!?

WARNING: Swearing will feature prominently in this post, as will a discussion of sexual material. Children, if your parents shout at you for reading this then it is officially YOUR PROBLEM. Okay?

I feel this may also be the place to apologise for missing a week of posts; I didn't stop writing them, I just stopped posting them. I don't know why.

Language enables us to do many things: articulate our ideas, express our sorrow, reveal our love, and tell somebody else's embarrassing stories, to name but a few. Every language has approached these and the other practicalities of the everyday life it is designed to assist in different ways (there is one language I have heard of with no words for left or right, meaning that its speakers refer to everything in terms of the points of the compass and thus have an inbuilt sense of where north is at all times), but there is one feature that every language from Japanese to Klingon has managed to incorporate, something without which a language would not be as complete and fully formed as it ought to be, and which is almost always the first thing learnt by a student of a new language: swearing.

(Aside note: English, partly due to its flexible nature and the fact that it didn't really develop as a language until everyone else had rather shown it the way, has always been a particularly good language for being thoroughly foul and dirty, and since it's the only language I have any reasonable degree of proficiency in, I think I'll stick to it for the time being. If anyone knows anything interesting about swearing in other languages, please feel free to share it in the comments.)

Swearing, swearwords and bad language generally have one of three sources. Many of the 'milder' swearwords have a religious origin, referring either to content considered evil by the Church or to some form of condemnation to evil (so 'damn', in reference to being 'damned' by Satan), or else to stuff considered in some way blasphemous and therefore wrong (the British idiom 'bloody' stems from the Tudor oath 'God's Blood', which, along with similar references such as 'Christ's Passion', suggested that the Holy Trinity was in some way fallible and human, and thus capable of human weakness and vice- this was blasphemy according to the Church, and therefore wrong). The place of 'mid-level' swearwords is generally taken by rather crude references to excrement, egestion and bodily emissions in general (piss, shit etc.). The 'worst' swearwords in modern society are, of course, sexual in nature, be they references to genitalia, prostitution or the act itself.

The route by which these ideas became sweary and inappropriate is a fairly simple, but nonetheless interesting, one to track. When the Church ruled the land, anything considered blasphemous or wrong according to its literature and world view was frowned upon at best and punished severely at worst, so words connected to these ideas were simply not broached in public. People knew what they meant, of course, and in seedy or otherwise 'underground' places, where the Church's reach was weak, these words found a home, instantly connecting them with this 'dirty' side of society. Poo and sex, of course, have always been considered 'dirty' by polite society, things always kept behind closed doors (I've done an entire post on the sex aspect of this before), and are thus equally shocking and ripe for sweary material when exposed to the real world.

A quick social analysis of these themes also reveals the reasons behind the 'hierarchy' of swearwords. In the past hundred years, the role of the church in everyday western society has dropped off dramatically, and offending one's local priest (or one's reputation with him) has become less of a social concern. Among the many consequences of this (and I'm sure an aggressive vicar could list a hundred more) has been the increased prevalence of swearing in normal society, and the fall of Church-related swearwords in terms of how severe they are; using a word once deemed blasphemous doesn't really seem that serious in a secular society, and what meaning it does retain is almost anachronistic in nature. It helps, of course, that these words are among the oldest swearwords in common use, meaning that as time has gone by their original context has been somewhat lost and they have grown steadily more and more tame. Perhaps in 200 years my future equivalent will be able to say dick in front of his dad for the same reason.

The place of excrement and sex in our society has, however, not changed much in the last millennium or two. Both are part of our everyday lives, things all of us experience, but they are not done in the front room or broached in polite company: rather ugly necessities and facts of life, still considered 'wrong' enough to become swearwords. However, whilst going to the loo is a rather inconvenient business that is only dirty because of the stuff it produces (literally), sex is something that we enjoy and often seek out. It is, therefore, a vice, something to which we can form an addiction, and addictions are something that non-addicts find slightly repulsive when observed in addicts or regular practitioners. The Church (yes, them again) has found sex particularly abhorrent when allowed to become rampant and undignified, historically favouring rather strict, Victorian positions and execution; all of which means that, unlike poo, sex has been actively clamped down on in one way or another at various points in history. This has, naturally, rarely done much to combat whatever was seen as the 'problem', merely forcing it underground in most cases; but what it has done is put across an image of sex as something not just rather dirty but actively naughty and 'wrong'. This is partly responsible for the thrill some people get from dirty talk before and during sex, and for the whole 'you've been a naughty girl' terminology that surrounds the concept; but it is also responsible for making sexually explicit references even more underhand, even more to be kept out of polite spheres, and thus making sexually-related swearwords the most 'extreme' of all those in our arsenal.

So… yeah, that’s what I got on the subject of swearing. Did anyone want a conclusion to this or something?

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were giving way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day when we might understand all of it. Yes, it would be a long way off; yes, there would be stumbling blocks; but maybe, just maybe, so long as we didn't discover anything inconvenient (like advanced cosmology), we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in 1814 Pierre-Simon Laplace, a brilliant mathematician and philosopher responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold essay that took the concept to its extreme. Laplace lived in the age of 'the clockwork universe', a theory that held Newton's laws of motion to be sacrosanct truths and claimed that these laws kept the universe ticking over just like the mechanical innards of a clock- and, just like a clock, the universe was predictable. Just as one hour after five o'clock will always be six, presuming a perfect clock, so every future state of the world can be predicted from its present one. Laplace took this theory to its logical conclusion: if some vast intellect were able to know the precise position of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
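In modern notation, the clockwork claim is easy to state (my gloss, not Laplace's own wording): Newton's second law fixes every particle's acceleration from the forces acting on it,

$$m_i \, \ddot{\mathbf{x}}_i = \mathbf{F}_i(\mathbf{x}_1, \ldots, \mathbf{x}_N),$$

so given every position $\mathbf{x}_i$ and every velocity $\dot{\mathbf{x}}_i$ at one instant, these equations can in principle be integrated forwards or backwards to any other instant; the vast intellect's job is 'merely' to do the integration.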

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (both issues are still considered somewhat up for debate to this day). Among the scientific community, however, Laplace's ideas conjured up a flurry of debate; some believed entirely in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any 'vast intellect' to fully comprehend so much as a heap of sand made Laplace's argument practically pointless. Other, far later observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton's laws of motion (which collapse when one does not take relativity into account at very high velocities); but the majority of the scientific community was rather taken with the idea that they could know everything about something, should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms, say, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, these scientists' work now had one overarching goal: to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, although the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to their heat energy, had been formulated, and fundamental to the second law of thermodynamics (which remains one of the fundamental principles of physics to this day) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of disorder or 'randomness' inherent in a system; or, for want of a clearer explanation, consider a sandy beach. The grains of sand in the beach can be arranged in a vast number of different ways that all form the shape of a disorganised heap, but if we make a giant, detailed sandcastle instead there are far fewer arrangements of the grains that will result in the same structure. Therefore, considering the two situations separately, it is far, far more likely that we will end up with a disorganised 'beach' structure than with a castle forming of its own accord (which is why sandcastles don't spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy arrangements, on an atomic scale, means that the universe tends to increase its overall level of entropy; if we attempt to impose order upon it (by making a sandcastle rather than waiting for one to be formed purely by chance), we must put in energy, which increases the entropy of the surrounding air and thus results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
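The counting argument in the beach example is exactly what Boltzmann's definition of entropy captures (standard physics, not spelled out in the post):

$$S = k_B \ln \Omega,$$

where $\Omega$ is the number of microscopic arrangements consistent with what you see. The heap is compatible with astronomically many arrangements of grains and the sandcastle with comparatively few, so $\Omega_{\text{beach}} \gg \Omega_{\text{castle}}$ and hence $S_{\text{beach}} > S_{\text{castle}}$.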

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point: a starting point of order from which things have grown steadily more chaotic, for order cannot increase indefinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under 'our' laws of physics, the ones we experience in the everyday universe, as they could not have kickstarted their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying its image as an eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of God's existence as the one who started everything off; it would be some years before Georges Lemaître would venture the idea that became the Big Bang theory, and even now we understand next to nothing about the moment of our creation.

However, this argument wasn’t exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenburg in 1927.

Heisenberg was a physicist, often described as the inventor of quantum mechanics (work that later won him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level: that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at the same time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something's position (apologies in advance to all the physics students I end up annoying). If we want to know where something is, the tried and tested method is to look at it; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle's energy that it changes its velocity and momentum in ways we cannot predict. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it's simply not possible.
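The quantitative form of the principle is compact (the standard statement, rather than anything from the post itself):

$$\Delta x \, \Delta p \geq \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum and $\hbar$ is the reduced Planck constant. Since the right-hand side is never zero, squeezing $\Delta x$ towards zero necessarily blows $\Delta p$ up, and vice versa; no cleverness of measurement gets around it.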

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the notion that they couldn't know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, dedicating much of his later work to attempts to poke holes in quantum mechanics and back up his famous statement that 'God does not play dice with the universe'. But eventually the scientific world came to accept the truth: determinism was dead. The universe would never seem so sure and predictable again.

The Offensive Warfare Problem

If life has shown itself to be particularly proficient at anything, it is fighting. There is hardly a creature alive today that does not employ physical violence in some form to get what it wants (or defend what it has) and, despite a vast array of moral arguments that this is not a good idea (I must do a post on the prisoner's dilemma some time…), humankind is, of course, no exception. Unfortunately, our innate inventiveness and imagination as a race mean that we have been able to let our brains take our fighting to the next level, with consequences that have grown ever more destructive as time has gone by. With the construction of the first atomic bombs, humankind finally got to where it had threatened to for so long: the ability to literally wipe out life on planet Earth.

This insane level of offensive firepower is not just restricted to large-scale big guns (the kind that have been used for political genital comparison ever since Napoleon revolutionised the use of artillery in warfare). Perhaps the most interesting and terrifying advancement in modern warfare and conflict has been the increased prevalence and distribution of powerful small arms, giving 'the common man' of the battlefield a level of destructive power that would be considered hideously overwrought in any other situation (or, indeed, on the battlefield of 100 years ago). The epitome of this effect is, of course, the Kalashnikov AK-47, whose cheapness and insane durability have rendered it invaluable to rebel groups and other hastily thrown-together armies, giving them an ability to kill that makes them very, very dangerous to the population of wherever they're fighting.

And this distribution of such awesomely dangerous firepower has begun to change warfare; to explain how, I need to go on a rather dramatic detour. The goal of warfare has always, basically, centred around the control of land and/or population, and as Frank Herbert makes so eminently clear in Dune, whoever has the power to destroy something controls it, at least in a military context. In his book Ender's Shadow (I feel I should apologise for all these sci-fi references), Orson Scott Card makes the entirely separate point that defensive warfare makes no practical sense in the context of space warfare. For a ship and its weapons to work in space warfare, he rather convincingly argues, the level of destruction it must be able to deliver would have to be so large that, were it ever to get within striking distance of Earth, it would be able to wipe out literally billions; and, given the distances over which any space war must be conducted, mutually assured destruction simply wouldn't work as a defensive strategy, as it would take far too long for any counterstrike to arrive. Therefore, any attempt to base one's war effort around defence, in a space warfare context, is simply too risky, since one ship (or even a couple of stray missiles) slipping through along any of the infinite possible approach directions to a planet would be able to cause uncountable levels of damage, leaving the enemy with a demonstrable ability to destroy one's home planet and, thus, control over it and the tactical initiative. It therefore doesn't make sense to focus on a strategy of defensive warfare, and any long-distance space war becomes a question of getting there first (plus a bit of luck).

This is all rather theoretical and, since we're talking about a bunch of spaceships firing missiles at one another, not especially relevant to the realities of modern warfare; but it does illustrate a point, namely that as offensive capabilities increase, so do the stakes should defensive systems fail. This was spectacularly, and horrifyingly, demonstrated on 9/11, when a handful of fanatics armed with little more than box cutters were able to kill nearly 3,000 people, destroy the World Trade Center and irrevocably change the face of the world economy and the world in general. And that came from only one mode of attack; despite all the advances in airport security made since then, there is still ample opportunity for an attack of similar magnitude to happen- a terrorist organisation, we must remember, only needs to get lucky once. This means that 'normal' defensive methods, especially since they would have to be enforced throughout our everyday lives (given the format that terrorist attacks typically take), cannot be applied to this problem, and we must rely almost solely on intelligence efforts to defend ourselves.

This business of defence and offence being out of balance in some form or another is not a phenomenon confined solely to the modern age. Once, wars were fought with clubs and shields, creating a somewhat balanced case of attack and defence: attack with the club, defend with the shield. If you were good enough at defending, you could survive; simple as that. However, some bright spark then came up with the idea of the bow, and suddenly the world was in imbalance- even if an arrow couldn't pierce an animal skin stretched over some sticks (and most of the time it could), it was fast enough to appear from nowhere before you had a chance to defend yourself. Thus, our defensive capabilities could not match our offensive ones. Fast forward a millennium or two and we come to a similar situation: now we defended ourselves against arrows and suchlike by hiding in castles, behind giant stone walls and other fortifications that were near impossible to break down, until some smart alec realised the use of that weird black powder invented in China. The cannon that were subsequently developed could bring down castle walls in a matter of hours or less, and once again offence could not be matched from the defensive standpoint; our only option now lay in hiding somewhere the artillery couldn't reach us, or staying out of the way of these lumbering beasts. As artillery technology advanced throughout the ensuing centuries, this latter option became less and less feasible, as the sheer volume of high-explosive weaponry trained on opposing armies made them next to impossible to fight in the field; but such weapons were still difficult to aim accurately at well-dug-in soldiers, and from these starting conditions we ended up with the First World War.

However, this is not a direct parallel of the situation we face now; today we deal with the simple and very real truth that a western power attempting to defend its borders (the situation is somewhat different when occupying somewhere like Afghanistan, but that can wait for another time) cannot rely on simple defensive methods alone. Even if every citizen were an army-trained veteran armed with a full complement of sub-machine guns (which they quite obviously aren't), it wouldn't be beyond the wit of a terrorist group to sneak a bomb in somewhere destructive. Right now, these methods may only be capable of killing or maiming hundreds or thousands at a time; tragic, but perhaps not capable of restructuring a society. But as our weapons systems get ever more advanced, and our more effective systems ever cheaper and easier for fanatics to get hold of, the destructive power of lone murderers may increase dramatically, with deadly consequences.

I’m not sure that counts as a coherent conclusion, or even if this counts as a coherent post, but it’s what y’got.

3500 calories per pound

This looks set to be the concluding post in this particular little series on the subject of obesity and overweightness. So, to summarise where we've been so far- post 1: there are a lot of slightly chubby people in the western world, leading to statistics supporting a massive obesity problem, and even this mediocre degree of fatness can be seriously damaging to your health. Post 2: why we have spent recent history getting slightly chubby. And for today, post 3: how you can try to do your bit, especially following the Christmas excesses and the soon-broken promises of New Year, to lose some of that excess poundage.

It was Albert Einstein who first demonstrated that mass is nothing more than stored energy, and although the theory behind that precise idea doesn't really carry over to biology, the principle still stands: fat is your body's way of storing energy. It's also a vital body tissue, and not a 100% bad and evil thing to ingest, but if you want to lose it then the aim should simply be to ensure that your energy output, in the form of exercise, exceeds your energy input, in the form of food. The body's response to this is to use up some of its fat stores to replace the lost energy (although this process can take up to a week to run its full course; the body is a complicated thing), meaning that the amount of fat in/on your body will gradually decrease over time. Therefore, slimming down is a process best approached from two directions: restricting what's going in, and increasing what's going out (doing both at the same time is infinitely more effective than an either/or approach). I'll deal with what's going in first.

The most important point to make about improving one's diet, and about weight loss generally, is that there are no cheats. There are no wonder pills that will shed 20lb of body fat in a week, and no superfoods or nutritional supplements that will slim you down in a matter of months. Losing weight is always going to be a messy business that will take several months at a minimum (the title of this post refers to the approximate calorie content of a pound of body fat, meaning that to lose one pound you must expend 3500 more calories than you ingest over a given period of time), and unfortunately prevention is better than cure; but moping won't help anyone, so let's just gather our resolve and move on.
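To put the title's figure into practical terms (the 500-calorie deficit is my illustrative number, not a prescription):

$$\frac{3500\ \text{kcal per pound}}{500\ \text{kcal per day}} = 7\ \text{days per pound},$$

so a sustained deficit of 500 calories a day works out at roughly a pound of fat a week, and a stone (14 lb) at that rate takes over three months; hence 'several months at a minimum'.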

There is currently a huge debate going on concerning the nation's diet problems: amount versus content, i.e. whether people are eating too much, or just the wrong stuff. In most cases it's probably a mixture of the two, but I tend to favour the latter answer; and in any case, there's not much I can say about the former beyond 'eat less stuff'. I am not a good enough cook to offer any great advice on which foods you should or shouldn't be avoiding, particularly since the consensus appears to change every fortnight, so instead I will concentrate on the one solid piece of advice I can champion: cook your own stuff.

This is a piece of advice that many people find hard to cope with; as I said in my last post, our body doesn't want to waste time cooking when it could be eating. When faced with the unknown product of one's efforts in an hour's time versus the surety of a ready meal or fast food within five minutes, the latter option, and all the crap that goes into it, starts to seem a lot more attractive. The trick, therefore, is to learn how to cook quickly: the best meals should either take less than 10-15 minutes of actual effort to prepare and make, or be able to be made in large amounts and last for a week or more. Or, even better, both. Skilled chefs achieve this by honing their skills to a fine art and working at a furious rate, but then again they're getting paid for it; for the layman, a better solution is to know the right dishes. I'm not going to include a full recipe list, but there are thousands online, and there is a skill to reading recipes: it can be easy to get lost between a long list of quantities and a complicated ordering system, but reading between the lines one can often identify which recipes really mean 'chop it all up and chuck it in some water for half an hour'.

That’s a very brief touch on the issue, but now I want to move on and look at energy going out; exercise. I personally would recommend sport, particularly team sport, as the most reliably fun way to get fit and enjoy oneself on a weekend- rugby has always done me right. If you’re looking in the right place, age shouldn’t be an issue (I’ve seen a 50 year old play alongside a 19 year old student at a club rugby match near me), and neither should skill so long as you are willing to give it a decent go; but, sport’s not for everyone and can present injury issues so I’ll also look elsewhere.

The traditional form of fat-burning exercise is jogging, but that's an idea to be taken with a large pinch of salt and caution. Regular joggers will lose weight, it's true, but jogging places an awful lot of stress on one's joints (swimming, cycling and rowing are all good forms of 'low-impact exercise' that avoid this issue), and it suffers the crowning flaw of being boring as hell. To me, anyway- it takes up a good chunk of time, during which one's mind is so filled with the thump of footfalls and aching limbs that one is forced to endure the experience rather than enjoy it. I'll put up with that for strength exercises, but not for weight loss when two far better techniques present themselves: intensity sessions and walking.

'Intensity sessions' is just a posh name for doing very, very tiring exercise for a short period of time; they're great for burning fat and building fitness, but I'll warn you now that they are not pleasant. As the name suggests, these involve very high-intensity exercise (as a general rule, you should not be able to talk throughout high-intensity work) performed either continuously or next to continuously for relatively short periods; an 8-minute session a few times a week should be plenty. This exercise can take many forms: shuttle runs (sprinting back and forth as fast as possible between two marked points or lines), suicides (doing shuttle runs between one 'base' line and a number of different lines at different distances from the base, such that one's runs change in length after each set) and tabata sets (picking an easily repeatable exercise, such as squats, performing it as fast as possible for 20 seconds, followed by 10 seconds of rest, then another 20 seconds of exercise, and so on for 4-8 minutes) are just three examples. Effective though these are, it's difficult to find an area of empty space to perform them in without getting awkward looks and the odd spot of abuse from passers-by or neighbours, so they may not be ideal for many people (tabata sets and other exercises such as press-ups are an exception, and can generally be done in a bedroom; Mark Lauren's excellent 'You Are Your Own Gym' is a great place to start for anyone interested in pursuing this route to lose weight and build muscle). This leaves us with one more option: walking.

To my mind, if everyone ate properly and walked 10,000 steps per day, the scare stats behind the media's obesity fixation would disappear within a matter of months. 10,000 steps may seem a lot, and for many holding office jobs it may seem impossible, but walking is a wonderful form of exercise, since it allows you to lose yourself in thought or music, whichever takes your fancy. Even if you don't have time for a separate walk, with a pedometer in hand (they are built into many modern iPods, and free pedometer apps are available for both iPhone and Android) and a target in mind (10k is the standard), after a couple of weeks it's not unusual to find yourself subtly changing the tiny aspects of your day (stairs instead of the lift, that sort of thing) to try and hit your target; and the results will follow. As car ownership, an office economy and a lack of free time have all grown over the last few decades, we as a nation do not walk as much as we used to. It's high time that changed.
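As a rough sanity check on that 10,000-step figure (assuming a typical stride of about 0.75 m, which is my number rather than the post's):

$$10\,000 \times 0.75\ \text{m} \approx 7.5\ \text{km} \approx 4.7\ \text{miles},$$

a serious but entirely walkable daily distance once commuting, stairs and errands are all counted in.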

Why the chubs?

My last post dealt with the thorny issue of obesity: both its increasing presence in our everyday lives, and what is for me the underlying reason behind the stats that back up media scare stories concerning 'the obesity epidemic'- the rise in size of the 'average' person over the last few decades. The precise causes of this trend can be put down to a whole host of societal factors within our modern age, but that story is boring as hell and has been told countless times by commentators far more adept in this field than me. Instead, today I wish to present the case for modern-day obesity as a problem rooted in the fundamental biology of a human being.

We, and our dim and distant ancestors of the scaly/furry variety, have spent the last few million years living wild: hunting, fighting and generally acting much like any other evolutionary pathway. Thus, we can learn a lot about our own inbuilt biology and instincts by studying the behaviour of animals alive today, and when we do so, several interesting eating habits become apparent. As anyone who has tried it as a child can attest (and I speak from personal experience), grass is not good stuff to eat. It's tough, it takes a lot of chewing and processing (many herbivores have multiple stomachs to make sure they squeeze the maximum nutritional value out of their food), and there really isn't much in it to power a fully-functional being. As such, grazers on grass and other tough plant matter (such as leaves) will spend most of their lives doing nothing but guzzle the stuff, trying to get as much as possible through their systems. Other animals favour food with a higher nutritional content, such as fruits, tubers or, in many cases, meat, but these frequently present issues. Fruits are highly seasonal and rarely available in large enough volumes to support a big population, as well as being quite hard to eat in bulk; plants 'design' fruits so that each visitor takes only a few at a time, the better to spread their seeds far and wide, and as such there are few animals that can sustain themselves on such a diet. Other foods such as tubers or nuts are hard to get at, needing to be dug up or broken open in highly energy-consuming activities, whilst meat has the annoying habit of running away or fighting back whenever you try to get at it. As anyone who watches nature documentaries will attest, most large predators only eat once every few days (admittedly rather heavily).

The unifying factor in all of this is that food in the wild is highly energy- and time-consuming to get hold of and consume, since every source of it guards its prize jealously. Therefore, any animal that wants to survive in this tough world must be near-constantly in pursuit of food simply to fulfil all of its life functions, a state characterised by being perpetually hungry. Hunger is the body's way of telling us to get more food, and in the wild this constant desire for more is kept in check by the difficulty of getting hold of it. Similarly, animal bodies try to balance this desire by being lazy; if something isn't necessary, there's no point wasting valuable energy going after it (since doing so means spending yet more time pursuing food to replace the energy lost).

However, in recent history (a spectacularly short period of time from evolution's point of view), one particular species called Homo sapiens came up with this great idea called civilisation, which basically entailed the pooling and sharing of skills and resources in order to best benefit everyone as a whole. As an evolutionary success story, this is right up there with developing multicellular body structures in terms of being awesome, and it has enabled us humans to live far more comfortable lives than our ancestors did, with correspondingly far greater access to food. This has proved particularly true over the last two centuries, as technological advances in a more democratic society have improved the everyman's access to food and comfortable living to a truly astounding degree. Unfortunately (from the point of view of our waistlines), the instincts of our bodies haven't quite caught up with the idea that when we want or need food, we can simply get food, without all that inconvenient running around after it. Not only that, but a lack of pack hierarchy combined with this increased availability means that we can stock up on food until we have eaten our absolute fill if we so wish; the difference between 'satiated' and 'stuffed' can work out at well over 1000 calories per meal, and over a long period it takes only a little more than we should be having every day to start packing on the pounds. Combine that with our natural predilection for laziness, meaning that we don't naturally think of going out for exercise as fun purely for its own sake, and the fact that we no longer burn calories chasing our food (or carrying the muscles we would build up from said chasing), and we find ourselves consuming a lot more calories than we really should be.

Not only that, but during this time we have also got into the habit of spending a lot of time worrying about the taste and texture of our food. This means that, unlike our ancestors, who were just fine with simply jumping on a squirrel and devouring the thing, we have to go through the whole rigmarole of getting stuff out of the fridge and spending two hours slaving away in a kitchen, attempting to cook something vaguely resembling tasty food. This wait is not something our bodies enjoy very much, meaning we often turn to 'quick fixes' when in need of food: stuff like bread, pasta or ready meals. Whilst we all know how much crap goes into ready meals (which should, as a rule, never be bought by anyone who cares even slightly about their health; the salt content of those things is insane) and other such 'quick fixes', fewer people are aware of the impact a high intake of grains can have on our bodies. Stuff like bread and rice only started being eaten by humans a few thousand years ago, as we discovered the benefits of farming and cooking, and whilst they are undoubtedly a good food source (and very, very difficult to cut from one's diet whilst still remaining healthy), our bodies have simply not had enough time, evolutionarily speaking, to get used to them. This means they tend not to make us feel as full as their calorie content suggests they should, and thus we eat more than our bodies actually need (if you want to feel full whilst not taking in so many calories, protein is the way to go; meat, fish and dairy are great for this).

This is all rather academic, but what does it mean for you if you want to lose a bit of weight? I am no expert, but then again neither are most of the people acting as self-proclaimed nutritionists in the general media, and anyway, I don't have any better ideas for posts. So look out for my next post for my admittedly basic advice for anyone trying to make themselves that little bit healthier, especially if you're trying to work off a few of the pounds built up over this festive season.

The Slightly Chubby Brigade

As the news will tell you at every single available opportunity, we are living through an obesity crisis. Across the western world (the USA being the worst affected and Britain coming in second), the average national BMI is increasing, and the number of obese and overweight people, children especially, looks to be soaring across the board. Only the other day I saw a statistic saying that nearly a third of children are now leaving primary school (i.e. one third of eleven-year-olds) overweight, and such solemn numbers frequently make headlines.

This is a huge issue, encompassing several different topics that I will attempt to consider over my next few posts (yeah, 'nother multi-parter coming up), but to many of us it seems hideously exaggerated. I mean, yes, we've all seen the kind of super-flabby people the news footage always cuts to when we hear some obesity health scare: the kind who are wider than they are tall and need a mobility scooter just to get around most of the time. We look at these pictures and we tut, and we might consider our own shape- but we're basically fine, aren't we? Sure, there's a bit of a belly showing, but that's normal- a good energy store and piece of insulation, in fact- and we would like to have a life beyond the weight-obsessed calorie counting that hardcore slimmers all seem to go in for. We don't need to worry, do we?

Well, according to the numbers, actually we do. The average height of a Briton… actually, if you're stumbling across this at home and you consider yourself normal, go and weigh yourself and, if you can, measure your height as well. Write those numbers down, and now continue reading. The average height of a Briton at the moment is 1.75m, or around 5'9" in old money, and we might consider a normal weight for that height to be around 80 kilos, or 176 pounds. That might seem normal enough: a bit of a paunch, but able to get around and walk, and certainly nobody would call you fat. Except perhaps your doctor, because according to the BMI chart I've got pulled up, a 5-foot-9, 80-kilo human is deemed clinically overweight. Not by much, but you'd still weigh more than is healthy- in fact, one stat I heard a while ago puts the average Briton at exactly this BMI. Try it with your own measurements; BMI charts are freely available over the web.
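The arithmetic behind that verdict uses the standard BMI formula:

$$\text{BMI} = \frac{\text{mass in kg}}{(\text{height in m})^2} = \frac{80}{1.75^2} \approx 26.1,$$

and since the conventional healthy band runs from 18.5 to 25, with 25-30 classed as overweight, our average Briton sits just over the line.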

This, to me, is one of the real underlying causes of 'the obesity epidemic': a fundamental misunderstanding of what 'overweight' means. Whenever our hideously awful everyone-dead-from-McDonalds-overdose diet is brought up on the news, it is always illustrated by pictures of hanging bellies and bouncing flab, the kind of bodies that make one almost physically sick to look at. But whilst these people certainly exist, there are not enough of them to account, on their own, for the scale of the reported problem; morbid obesity is a serious matter for those it affects, but for most of us it is not the part of the issue worth thinking about.

No, the real cause of all the chilling statistics we hear on the news is all the people who don't look overweight. The kind whose diet isn't appalling (no 24/7 McDonald's), who are quite capable of exercise when it suits them, and who might take a rough glance at the dietary information on the stuff they buy in the supermarket. These people are nonetheless hovering on the overweight borderline, pulling up the national average, despite the fact that they don't consider anything to be wrong; in fact, some women who are, according to the evil numbers, overweight may consider it almost dutiful not to become obsessed with shedding every pound, and to maintain their curves. Having a bit of excess weight is, after all, still better than being underweight and anorexic, and the body-image pressures some young women come under are just as much of an issue as national obesity. Even for those who don't hold such opinions, many of the slightly overweight feel that they don't have any weight issues and that there's surely no significant health risk associated with a 'bit of meat on your bones' (it's actually muscle, rather than fat, that technically forms meat, but ho hum); as such, they have absolutely no motivation to get their weight down, because they don't think they need to.

I won’t waste much of my time on all the reasons for this statement, but unfortunately even this slight degree of overweight-ness will significantly increase your risk of major health problems somewhere down the line, particularly that of heart disease (which is going through the roof at the moment); diabetes isn’t likely to be a risk for the overweight unless they’re really overdoing things, but that’s also a potential, and very serious, health hazard. The trouble is that many of us find it hard to make this connection if we basically feel healthy. Despite what the doctor says and no matter how much we trust them, if we are capable of going for a nice walk and generally getting about without getting out of breath or feeling bad then we probably feel justified in thinking of ourselves as healthy. Our heart doesn’t seem about to give out, so why worry about it.

The thing to remember is that the heart is just a muscle, and if it isn't stressed it will degrade just like any other. You know those triceps that haven't done a press-up in five years? Feel how small and weak they are? Yeah, that kind of thing can quite easily happen to the muscle responsible for keeping you alive. Your heart might be pumping all day long and be a different type of muscle, so the process will be slower, but give it twenty years and you might start to see the effects.

But anyway, I’m not here to lecture you about your health; that’s far too depressing and dull for my liking- the only point I was trying to make is that many of the accidental contributors to ‘the obesity epidemic’ are probably unaware that their health is in any way a problem, and not really through fault of their own. So whose fault is it then? Well, that one can wait until next time…