The Myth of Popularity

WARNING: Everything I say from here on is purely speculative, based on a rough approximation of how one part of our world is presented as working, plus some vaguely related stuff I happen to know. It is very likely to differ from your own personal view of things, so please don’t get angry with me if it does.

Bad TV and cinema are a great source of inspiration; not because there’s much in them that’s interesting, but because there’s just so much of it that, even without watching any, it is possible to pick up enough information to diagnose trends, which are generally interesting to analyse. In this case, I refer to the picture of American schools so often portrayed by iteration after iteration of generic teenage romance/romcom/’drama’, and more specifically the people in it.

One of the classic plot lines of these types of things involves the ‘hopelessly lonely/unpopular nerd who has a crush on Miss Popular de Cheerleader and must prove himself by [insert totally ridiculous idea]’. Needless to say these plot lines are more unintentionally hilarious and excruciating than anything else, but they work because they play on the one trope that so many of us are familiar with: that of the overbearing, idiotic, horrible people from the ‘popular’ social circle. Even if we were not raised within a sitcom, it’s a situation repeated in thousands of schools across the world- the popular kids are the arseholes at the top with inexplicable access to all the gadgets and girls, while the more normal, nice people sit lower down the social ladder.

The image exists in our consciousness long after leaving school for a whole host of reasons; partly because major personal events during our formative years tend to have a greater impact on our psyche than those occurring later in life, but also because it is often our first major encounter with the harsh unfairness life is capable of throwing at us. The whole situation seems totally unfair and unjust; why should all these horrible people be the popular ones, and get all the social benefits associated with that? Why not me, a basically nice, humble person without a Ralph Lauren jacket or an iPad 3, but with a genuine personality? Why should they have all the luck?

However, upon analysing the issue, this object of hate begins to break down; not because the ‘popular kids’ are any less hateful, but because they are not genuinely popular. If we define popularity as a scale representing how many people like you, and how much (because what the hell else is it?), then it becomes a lot easier to approach the question from a numerical, mathematical perspective. Those at the perceived top end of the social spectrum generally form themselves into a clique of superiority, where they all like one another (presumably- I’ve never been part of that kind of group, so can’t confirm), but their arrogance means that they receive a certain amount of dislike, and even some downright resentment, from the rest of their immediate social world. By contrast, members of other social groups (nerds, academics [often not the same people], those sportsmen not in the ‘popular’ sphere, and the myriad groups of undefinable ‘normies’ who just splinter off into their own little cliques) tend to be liked by members of their chosen group and treated with neutrality, or at most minor positive or negative feeling, by everyone else, leaving them with an overall ‘popularity score’, from an approximated mathematical point of view, roughly equal to or even greater than that of the ‘popular’ kids. Thus, the image of popularity is really something of a myth, as these people are not, technically speaking, any more popular than anyone else.
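
In fact, the arithmetic is simple enough to sketch. Here’s a minimal, entirely made-up illustration (the group sizes and feelings are invented for illustration, not data): score each person’s standing by summing how everyone else feels about them, and the supposedly unpopular normie can come out ahead:

```python
# A toy 'popularity score': the sum of how everyone feels about a person,
# from +2 (genuinely liked) through 0 (neutral) to -2 (resented).
# Group sizes and feelings below are invented purely for illustration.

def popularity_score(liked_by, neutral, disliked_by, resented_by):
    """Net popularity: likes count for, dislikes and resentment against."""
    return 2 * liked_by + 0 * neutral - 1 * disliked_by - 2 * resented_by

# A 'popular' kid: adored by a clique of 10, resented by much of the school.
print(popularity_score(liked_by=10, neutral=40, disliked_by=30, resented_by=20))  # -50
# A 'normie': liked by their own group of 10, everyone else roughly neutral.
print(popularity_score(liked_by=10, neutral=85, disliked_by=5, resented_by=0))    # 15
```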

So, then, how has this image come to present itself as one of popularity, of being the top of the social spectrum? Why are these guys on top, seemingly above group after group of normal, friendly people with a roughly level playing field when it comes to social standing?

If you were to ask George Orwell this question, he would present you with a very compelling argument concerning the tendency of any social structure to form a ‘high’ class of people (shortly after asking you how you managed to communicate with him beyond the grave). He and other social commentators have frequently pointed out that a social system in which all are genuinely treated as equals is unstable; people seem to need some ‘higher class’ to look up to, even if only in hatred. It is humanity’s natural tendency to try to better itself, to fight its way to the top of the pile, so if the ‘high’ group disappears it will quickly be replaced; hence the disparity between rich and poor even in a country such as the USA, founded on the principle that ‘all men are created equal’. This principle applies to social situations too: if the ‘popular’ kids were to fall from grace, some other group would likely rise to fill the power vacuum at the top of the social spectrum. And, as we all know, power and influence are powerful corrupting forces, so this position would be likely to transform the new ‘popular’ group into arrogant b*stards too, stripping away the niceness they had when they were just normal guys. This effect is also in evidence in the way that many of the previously hateful people at the top of the spectrum become very normal and friendly when spoken to one-on-one, outside of their social group (in my experience, anyway; this does not apply to all people in such groups).

However, another explanation is perhaps more believable: that arrogance is a cause rather than a symptom. By acting like they are better than the rest of the world, the ‘popular’ kids get it into everyone else’s heads, subconsciously, that, much though they are hated, they are the top of the social ladder purely because they said so. And perhaps this idea is more comforting, because it takes us back to the idea we started with: that nobody is actually any more popular than anyone else, and that it doesn’t really matter in the grand scheme of things. Regardless of where your group ranks on the social scale, if it’s yours and you get along with the people in it, then it doesn’t really matter what everyone else thinks, so long as you can get on, be happy, and enjoy yourself.

Footnote: I get most of these ideas from what the media paints as the norm in American schools and from what friends have told me, since I’ve been lucky enough that the social hierarchies I encountered in my own school experience basically left one another alone. Judging by the horror stories other people tell me, I presume that was just my school. Plus, even if it’s total horseshit, it’s enough of a trope that I can write a post about it.

Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much store people are able to set by a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we must compare them to what we think of the brand and the label, what reputation this product has and what it is particularly good at, which we think most suits our social standing and how others will judge our use of it. And a good thing too, from many companies’ perspective, as otherwise the amount of business they do would be slashed. There are many companies whose success can be almost entirely put down to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, typecasting Apple as a brand-built company is a harsh judgement; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to be a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that- I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores anyway) that once the rest of the market gets hold of an idea, Apple’s version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely because iTunes redefined the music industry before everyone else and remains competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products that do a few things really, really well and are basically capable of doing everything else. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them, a nice company with a good history that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the most valuable non-state-owned company in the world: bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company (excluding state-owned outfits such as Saudi Aramco, which is estimated to be worth around 3 trillion dollars thanks to its control of Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first: the Apple II was the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first smartphone as we would now recognise one (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these products are usually rubbish and make the Apple version shine by comparison. This also means that when Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off it and moved on to something else. Innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From the company’s earliest days, Apple fans have dubbed themselves the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that thousands of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and the longevity of its delivery. This image has been present in their advertising from the very beginning, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops in the first place. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you, dear readers, would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, plus a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (ie one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most of the ones we are familiar with tend to be oxidation reactions: oxygen from the air bonding chemically with the substance in question (although there are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode). The emission of heat causes a relatively gentle warming effect in the immediate area, but close to the site of the reaction itself a very large amount of heat is emitted in a small space. This excites the molecules of air close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so in the form of heat and light (among other things). These photons cause the air to glow brightly, creating the visible flame we can see; the large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame has a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning. Most often the glowing flame results in the characteristic reddish/orange colour of fire, but some reactions, such as the potassium one mentioned, emit radiation of other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), causing the flames to be of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly until it gets far enough away from the source of heat to cool down and stop glowing; this process happens all the time with hundreds of packets of hot air, causing them to flicker back and forth.
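
Incidentally, the link between flame temperature and colour can be pinned down with a standard result (not spelled out in the original post): Wien’s displacement law, which gives the wavelength at which a hot body radiates most strongly,

```latex
\lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}
```

so a hotter region of the fire peaks at shorter, bluer wavelengths, and a white-hot region is simply radiating strongly right across the visible band.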

However, we must remember that fires do not just give out heat, but must take some in too. This is to do with the way the chemical reaction that generates the heat works: the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to burn up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit (which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is only a little higher at around 300 degrees centigrade, both of which are less than the temperature of a spark or flame. However, we must also remember that neither fuel will ignite if it is wet, since the water must first be boiled off, absorbing a great deal of heat in the process; this means it often takes a while to dry wood out sufficiently for it to catch, and that big, solid blocks of wood take quite a bit of energy to heat up.
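
Since paper’s figure above is quoted in Fahrenheit and wood’s in centigrade, a quick conversion puts them on the same scale (a minimal Python sketch; the ignition figures are just the ones quoted above):

```python
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Paper's oft-quoted ignition temperature, as given above
print(f"Paper: 451 F = {fahrenheit_to_celsius(451):.0f} C")  # ~233 C
# versus wood's ~300 C - so wood does indeed need the higher temperature
```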

From all of this information we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff that reacts with it, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the other ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since this will cut off its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

The need for heat is also the principle behind all of our classical methods of putting a fire out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are caused by hot air remember, to disappear but the embers to glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

What the @*$!?

WARNING: Swearing will feature prominently in this post, as will a discussion of sexual material. Children, if your parents shout at you for reading this then it is officially YOUR PROBLEM. Okay?

I feel this may also be the place to apologise for missing a week of posts; I didn’t stop writing them, I just stopped posting them. Don’t know why.

Language enables us to do many things: articulate our ideas, express our sorrow, reveal our love, and tell somebody else’s embarrassing stories, to name but a few. Every language approaches these and the other practicalities of everyday life in its own way (there is one language I have heard of with no words for left or right, meaning that its speakers refer to everything in terms of points of the compass and all members of the tribe thus have an inbuilt sense of where north is at all times), but there is one feature that every language from Japanese to Klingon has managed to incorporate, something without which a language would not be as complete and fully-formed as it ought to be, and which is almost always the first thing learnt by a student of a new language: swearing.

(Aside Note: English, partly due to its flexible nature and the fact that it didn’t really develop as a language until everyone else had rather shown it the way, has always been a particularly good language for being thoroughly foul and dirty, and since it’s the only language I have any reasonable degree of proficiency in, I think I’ll stick to it for the time being. If anyone knows anything interesting about swearing in other languages, please feel free to leave it in the comments.)

Swearing, swearwords and bad language in general spring from one of three sources. Many of the ‘milder’ swearwords have a religious origin, and more specifically refer either to concepts considered evil by the Church or some form of condemnation to evil (so ‘damn’, in reference to being ‘damned’), or to things considered in some way blasphemous and therefore wrong (the British idiom ‘bloody’ stems from the Tudor expression ‘God’s Blood’, which along with similar references such as ‘Christ’s Passion’ suggested that the Holy Trinity was in some way fallible and human, and thus capable of human weakness and vice- this was blasphemy according to the Church and therefore wrong). The place of ‘mid-level’ swearwords is generally taken by rather crude references to excrement, egestion and bodily emissions in general (piss, shit etc.). The ‘worst’ swearwords in modern society are of course sexual in nature, be they references to genitalia, prostitution or the act itself.

The route by which these ideas became sweary and inappropriate is fairly simple, but nonetheless interesting, to trace. When the Church ruled the land, anything considered blasphemous or wrong according to its literature and world view was frowned upon at best and punished severely at worst, so words connected to these ideas were simply not broached in public. People knew what they meant, of course, and in seedy or otherwise ‘underground’ places, where the Church’s reach was weak, these words found a home, instantly connecting them with this ‘dirty’ side of society. Poo and sex, of course, have always been considered ‘dirty’ by polite society, things always kept behind closed doors (I’ve done an entire post on the sex aspect of this before), and are thus equally shocking and ripe for sweary material when exposed to the real world.

A quick social analysis of these themes also reveals the reasons behind the ‘hierarchy’ of swearwords. In the past hundred years, the role of the church in everyday western society has dropped off dramatically, and offending one’s local priest (or damaging one’s reputation with him) has become less of a social concern. Among the many consequences of this (and I’m sure an aggrieved vicar could list a hundred more) has been the increased prevalence of swearing in normal society, and the decline of Church-related swearwords in terms of how severe they are; using a word once deemed blasphemous doesn’t really seem that serious in a secular society, and the meaning it does have is almost anachronistic in nature. It helps, of course, that these words are among the oldest swearwords in common use, meaning that as time has gone by their original context has been somewhat lost and they have grown steadily more and more tame. Perhaps in 200 years my future equivalent will be able to say dick in front of his dad for this reason.

The place of excrement and sex in our society has, however, not changed much in the last millennium or two. Both are parts of our everyday lives that all of us experience, but that are not done in the front room or broached in polite company- rather ugly necessities and facts of life still considered ‘wrong’ enough to become swearwords. However, whilst going to the loo is a rather inconvenient business that is only dirty because the stuff it produces is (literally), sex is something that we enjoy and often seek out. It is, therefore, a vice, something to which we can form an addiction, and addictions are something that non-addicts find slightly repulsive when observed in addicts or regular practitioners. The Church (yes, them again) has in particular found sex abhorrent when allowed to become rampant and undignified, historically favouring rather strict, Victorian positions and execution- all of which means that, unlike poo, sex has been actively clamped down on in one way or another at various points in history. This has, naturally, rarely done much to combat whatever was seen as the ‘problem’, merely forcing it underground in most cases, but what it has done is put across an image of sex as something that is not just rather dirty but actively naughty and ‘wrong’. This is partly responsible for the thrill some people get from talking dirty about and during sex, and for the whole ‘you’ve been a naughty girl’ terminology that surrounds the concept- but it is also responsible for making sexually explicit references even more underhand, even more to be kept out of polite spheres, and thus making sexually-related swearwords the most ‘extreme’ of all those in our arsenal.

So… yeah, that’s what I got on the subject of swearing. Did anyone want a conclusion to this or something?

Determinism

In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day where we might understand all of it. Yes, it would be a long way off, yes there would be stumbling blocks, but maybe, just maybe, so long as we don’t discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in the year 1814 a brilliant mathematician and philosopher called Pierre-Simon Laplace, responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold new essay that took this concept to extremes. Laplace lived in the age of ‘the clockwork universe’, a theory that held Newton’s laws of motion to be sacrosanct truths and claimed that these laws of physics caused the universe to just keep on ticking over, like the mechanical innards of a clock- and, just like a clock, the universe was predictable. Just as one hour after five o’clock will always be six, presuming a perfect clock, so every event in the world should be predictable from the conditions that preceded it. Laplace’s arguments took this theory to its logical conclusion: if some vast intellect were able to know the precise position of every particle in the universe, and all the forces and motions acting on them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
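
The clockwork picture is easy to make concrete. Below is a minimal sketch (my own toy example, not Laplace’s) of the claim: feed in exact initial conditions and the governing law, and every future state follows by mere computation:

```python
def simulate(pos, vel, accel, dt, steps):
    """Step Newton's laws forward from exactly-known initial conditions."""
    for _ in range(steps):
        vel += accel * dt  # the 'law of physics': constant acceleration
        pos += vel * dt
    return pos, vel

# A ball thrown straight up at 20 m/s under gravity; given perfect knowledge
# of the starting state, its state two seconds later is fully determined.
pos, vel = simulate(pos=0.0, vel=20.0, accel=-9.81, dt=0.01, steps=200)
print(f"After 2 s: height {pos:.1f} m, velocity {vel:.1f} m/s")
```

Laplace’s ‘vast intellect’ is just this loop scaled up to every particle in the universe, run forwards for prediction or backwards for history.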

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (both issues are still considered somewhat up for debate to this day). However, among the scientific community Laplace’s ideas conjured up a flurry of debate; some entirely believed in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others pointed out that the sheer difficulty of getting any ‘vast intellect’ to fully comprehend so much as a heap of sand made Laplace’s arguments completely pointless. Other, far later, observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton’s laws of motion (which break down at very high velocities, where relativity must be taken into account); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms, say, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, to these scientists their work now had one overarching goal: to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, and the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (which is, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of uncertainty or ‘randomness’ inherent in a system; or, for want of a clearer explanation, consider a sandy beach. All of the grains of sand in the beach can be arranged in a vast number of different ways that form the shape of a disorganised heap, but far fewer arrangements of the same sand will result in a giant, detailed sandcastle. Therefore, considering the two situations separately, it is far, far more likely that we will end up with a disorganised ‘beach’ structure than with a castle forming of its own accord (which is why sandcastles don’t spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy situations, on an atomic scale, means that the universe tends to increase its overall level of entropy; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must input energy, which increases the entropy of the surrounding air and results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.
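
The beach argument is really just the statistical definition of entropy in disguise; for the record (a standard formula, not one the original post quotes), Boltzmann expressed it as

```latex
S = k_B \ln W
```

where W is the number of microscopic arrangements consistent with what we see on the large scale and k_B is Boltzmann’s constant; the heap, with its vastly greater W, has the higher entropy.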

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point: a starting point of order from which things get steadily more chaotic, for order cannot increase infinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under ‘our’ laws of physics, the ones we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to set the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of the existence of God to start everything off; it would be some years before Georges Lemaître would venture what became the Big Bang theory (with Edwin Hubble providing the observational evidence for it), and even now we understand next to nothing about the moment of our creation.

However, this argument wasn’t exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a physicist, often described as one of the inventors of quantum mechanics (work which won him a Nobel prize). The key feature of his work here was the concept of uncertainty on a subatomic level: certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at any one time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something’s position (apologies in advance to all physics students I end up annoying). If we want to know where something is, the tried and tested method is to look at the thing; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle’s energy that it changes its velocity and momentum, and we cannot predict how. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on the point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it’s simply not possible.
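
Stated formally (again, the standard form rather than anything spelled out in the original post), the uncertainty principle puts a hard lower bound on the product of the two uncertainties:

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```

where Δx and Δp are the uncertainties in position and momentum and ħ is the reduced Planck constant; squeeze Δx towards zero and Δp must blow up, and vice versa.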

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn’t know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, dedicating the rest of his life’s work to attempting to disprove quantum mechanics and back up his famous statement that ‘God does not play dice with the universe’. But eventually the scientific world came to accept the truth; that determinism was dead. The universe would never seem so sure and predictable again.

The Offensive Warfare Problem

If life has shown itself to be particularly proficient at anything, it is fighting. There is hardly a creature alive today that does not employ physical violence in some form to get what it wants (or defend what it has) and, despite a vast array of moral arguments that this is not a good idea (I must do a post on the prisoner’s dilemma some time…), humankind is, of course, no exception. Unfortunately, our innate inventiveness and imagination as a race means that we have been able to let our brains take our fighting to the next level, with consequences that have grown ever more destructive as time has gone by. With the construction of the first atomic bombs, humankind had finally got to where it had threatened to for so long: the ability to literally wipe out planet earth.

This insane level of offensive firepower is not just restricted to large-scale big guns (the kind that have been used for political genital comparison since Napoleon revolutionised the use of artillery in warfare)- perhaps the most interesting and terrifying advancement in modern warfare and conflict has been the increased prevalence and distribution of powerful small arms, giving ‘the common man’ of the battlefield a level of destructive power that would be considered hideously overwrought in any other situation (or, indeed, on the battlefield of 100 years ago). The epitome of this effect is, of course, the Kalashnikov AK-47, whose cheapness and insane durability have rendered it invaluable to rebel groups and other hastily thrown-together armies, giving them an ability to kill stuff that makes them very, very dangerous to the population of wherever they’re fighting.

And this distribution of such awesomely dangerous firepower has begun to change warfare; to explain how, I need to go on a rather dramatic detour. The goal of warfare has always, basically, centred around the control of land and/or population, and as Frank Herbert makes so eminently clear in Dune, whoever has the power to destroy something controls it, at least in a military context. In his book Ender’s Shadow (I feel I should apologise for all these sci-fi references), Orson Scott Card makes the entirely separate point that defensive warfare makes no practical sense in the context of space warfare. For a ship and its weapons to work in space warfare, he rather convincingly argues, the level of destruction they must be able to deliver would have to be so large that, were the ship ever to get within striking distance of earth, it would be able to wipe out literally billions- and, given the distances over which any space war must be conducted, mutually assured destruction simply wouldn’t work as a defensive strategy, as it would take far too long for any counterstrike attempt to happen. Therefore, any attempt to base one’s war effort around defence, in a space warfare context, is simply too risky, since one ship (or even a couple of stray missiles) slipping through in any of the infinite possible approach directions to a planet would be able to cause uncountable levels of damage, leaving the enemy with a demonstrable ability to destroy one’s home planet and, thus, control over it and the tactical initiative. It therefore doesn’t make sense to focus on a strategy of defensive warfare, and any long-distance space war becomes a question of getting there first (plus a bit of luck).

This is all rather theoretical and, since we’re talking about a bunch of spaceships firing missiles at one another, not especially relevant when considering the realities of modern warfare- but it does illustrate a point, namely that as offensive capabilities increase, so do the stakes attached to the prospect of defensive systems failing. This was spectacularly, and horrifyingly, demonstrated on 9/11, when a handful of fanatics armed with box cutters were able to kill close to 3,000 people, destroy the World Trade Center and irrevocably change the face of the world economy and the world in general. And that came from only one mode of attack; despite all the advances in airport security that have been made since then, there is still ample opportunity for an attack of similar magnitude to happen- a terrorist organisation, we must remember, only needs to get lucky once. This means that ‘normal’ defensive methods, especially since they would have to be enforced throughout our everyday lives (given the format that terrorist attacks typically take), cannot be applied to this problem, and we must rely almost solely on intelligence efforts to try and defend ourselves.

This business of defence and offence being in imbalance in some form or another is not a phenomenon solely confined to the modern age. Once, wars were fought solely with clubs and shields, creating a somewhat balanced case of attack and defence: attack with the club, defend with the shield. If you were good enough at defending, you could survive; simple as that. However, some bright spark then came up with the idea of the bow, and suddenly the world was in imbalance- even if an arrow couldn’t pierce an animal skin stretched over some sticks (and, most of the time, it could), it was fast enough to appear from nowhere before you had a chance to defend yourself. Thus, our defensive capabilities could not match our offensive ones. Fast forward a millennium or two, and we come to a similar situation; now we defended ourselves against arrows and such by hiding in castles, behind giant stone walls and other fortifications that were near-impossible to break down, until some smart alec realised a use for this weird black powder invented in China. The cannons that were subsequently invented could bring down castle walls in a matter of hours or less, and once again the attack could not be matched from the defensive standpoint- our only option now lay in hiding somewhere the artillery couldn’t reach us, or running out of the way of these lumbering beasts. As artillery technology advanced throughout the ensuing centuries, this latter option became less and less feasible, as the sheer quantity of high-explosive weaponry trained on opposition armies made them next to impossible to fight in the field; but artillery was still difficult to aim accurately at well dug-in soldiers, and from these starting conditions we ended up with the First World War.

However, this is not a direct parallel of the situation we face now; today we deal with the simple and very real truth that a western power attempting to defend its borders (the situation is somewhat different when occupying somewhere like Afghanistan, but that can wait until another time) cannot rely on simple defensive methods alone- even if every citizen were an army-trained veteran armed with a full complement of sub-machine guns (which they quite obviously aren’t), it wouldn’t be beyond the wit of a terrorist group to sneak a bomb in somewhere destructive. Right now, these methods may only be capable of killing or maiming hundreds or thousands at a time; tragic, but perhaps not capable of restructuring a society- but as our weapons systems get ever more advanced, and our more effective systems get ever cheaper and easier for fanatics to get hold of, the destructive power of lone murderers may increase dramatically, and with deadly consequences.

I’m not sure that counts as a coherent conclusion, or even if this counts as a coherent post, but it’s what y’got.

3500 calories per pound

This looks set to be the concluding post in this particular little series on the subject of obesity and overweightness. So, to summarise where we’ve been so far- post 1: that there are a lot of slightly chubby people in the western world, leading to statistics supporting a massive obesity problem, and that even this mediocre degree of fatness can be seriously damaging to your health. Post 2: why we have spent recent history getting slightly chubby. And for today, post 3: how you can try to do your bit, especially following the Christmas excesses and the soon-broken promises of New Year, to lose some of that excess poundage.

It was Albert Einstein who first demonstrated that mass is nothing more than stored energy, and although the theory behind that precise idea doesn’t really correlate with biology, the principle still stands: fat is your body’s way of storing energy. It’s also a vital body tissue, and not a 100% bad and evil thing to ingest, but if you want to lose it then the aim should simply be to ensure that your energy output, in the form of exercise, exceeds your energy input, in the form of food. The body’s response to this is to use up some of its fat stores to replace the lost energy (although this process can take up to a week to run its full course; the body is a complicated thing), meaning that the amount of fat in/on your body will gradually decrease over time. Therefore, slimming down is a process best approached from two directions: restricting what’s going in, and increasing what’s going out (doing both at the same time is far more effective than an either/or approach). I’ll deal with what’s going in first.

The most important point to make about improving one’s diet, and when considering weight loss generally, is that there are no cheats. There are no wonder pills that will shed 20lb of body fat in a week, and no super-foods or nutritional supplements that will slim you down in a matter of months. Losing weight is always going to be a messy business that will take several months at a minimum (the title of this post refers to the calorie content of body fat, meaning that to lose one pound you must expend 3500 more calories than you ingest over a given period of time), and unfortunately prevention is better than cure; but moping won’t help anyone, so let’s just gather our resolve and move on.
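
The title figure makes the timescale easy to estimate. Here’s a minimal sketch of the arithmetic (the deficit and target below are hypothetical numbers, purely for illustration):

```python
KCAL_PER_POUND_OF_FAT = 3500  # the figure from the post's title

def weeks_to_lose(pounds, daily_deficit_kcal):
    """Weeks a steady daily calorie deficit takes to burn off the given fat."""
    return pounds * KCAL_PER_POUND_OF_FAT / (daily_deficit_kcal * 7)

# Hypothetical example: 10 lb to lose on a 500 kcal/day deficit
print(f"{weeks_to_lose(10, 500):.0f} weeks")  # 500 kcal/day ~ 1 lb/week -> 10 weeks
```

which is why ‘several months at a minimum’ is the honest answer for any serious amount of weight.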

There is currently a huge debate going on concerning the nature of the nation’s diet problems: amount versus content; whether people are eating too much, or just the wrong stuff. In most cases it’s probably going to be a mixture of the two, but I tend to favour the latter answer; and in any case, there’s not much I can say about the former beyond ‘eat less stuff’. I am not a good enough cook to offer any great advice on what foods you should or shouldn’t be eating, particularly since the consensus appears to change every fortnight, so instead I will concentrate on the one solid piece of advice I can champion: cook your own stuff.

This is a piece of advice that many people find hard to cope with- as I said in my last post, our body doesn’t want to waste time cooking when it could be eating. When faced with the unknown product of one’s efforts in an hour’s time, versus the surety of a ready meal or fast food within five minutes, the latter option and all the crap that goes into it starts to seem a lot more attractive. The trick, therefore, is to learn how to cook quickly- the best meals should either take less than 10-15 minutes of actual effort to prepare and make, or be able to be made in large amounts and last for a week or more. Or, even better, both. Skilled chefs achieve this by having their skills honed to a fine art and working at a furious rate, but then again they’re getting paid for it; for the layman, a better solution is to know the right dishes. I’m not going to include a full recipe list, but there are thousands online, and there is a skill to reading recipes; it can be easy to get lost between a long list of numbers and a complicated ordering system, but reading between the lines one can often identify which recipes mean ‘chop it all up and chuck it in some water for half an hour’.

That’s a very brief touch on the issue, but now I want to move on and look at energy going out: exercise. I personally would recommend sport, particularly team sport, as the most reliably fun way to get fit and enjoy oneself on a weekend- rugby has always done me right. If you’re looking in the right place, age shouldn’t be an issue (I’ve seen a 50-year-old play alongside a 19-year-old student at a club rugby match near me), and neither should skill, so long as you are willing to give it a decent go; but sport’s not for everyone and can present injury issues, so I’ll also look elsewhere.

The traditional form of fat-burning exercise is jogging, but that’s an idea to be taken with a large pinch of salt and caution. Regular joggers will lose weight, it’s true, but jogging places an awful lot of stress on one’s joints (swimming, cycling and rowing are all good forms of ‘low-impact exercise’ that avoid this issue), and suffers the crowning flaw of being boring as hell. To me, anyway- it takes up a good chunk of time, during which one’s mind is so filled with the thump of footfalls and aching limbs that one is forced to endure the experience rather than enjoy it. I’ll put up with that for strength exercises, but not for weight loss when two far better techniques present themselves: intensity sessions and walking.

‘Intensity sessions’ is just a posh name for doing very, very tiring exercise for a short period of time; they’re great for burning fat and building fitness, but I’ll warn you now that they are not pleasant. As the name suggests, these involve very high-intensity exercise (as a general rule, you should not be able to talk throughout high-intensity work) performed either continuously or next to continuously for relatively short periods of time- an 8-minute session a few times a week should be plenty. This exercise can take many forms: shuttle runs (sprinting back and forth as fast as possible between two marked points or lines), suicides (doing shuttle runs between one ‘base’ line and a number of different lines at different distances from the base, such that one’s runs change in length after each set) and tabata sets (picking an easily repeatable exercise, such as squats, performing it as fast as possible for 20 seconds, followed by 10 seconds of rest, then another 20 seconds of exercise, and so on for 4-8 minutes; see the sketch below) are just three examples. Effective though these are, it’s difficult to find an area of empty space to perform them in without getting awkward looks and the odd spot of abuse from passers-by or neighbours, so they may not be ideal for many people (tabata sets or other exercises such as press-ups are an exception, and can generally be done in a bedroom; Mark Lauren’s excellent ‘You Are Your Own Gym’ is a great place to start for anyone interested in pursuing this route to lose weight and build muscle). This leaves us with one more option: walking.
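
For the tabata sets just described, the schedule is nothing more than alternating fixed work and rest intervals; a minimal sketch (the exercise and session length are just example parameters):

```python
def tabata_schedule(exercise, work_s=20, rest_s=10, total_minutes=4):
    """Print the alternating work/rest intervals of a tabata session."""
    elapsed, total = 0, total_minutes * 60
    while elapsed < total:
        print(f"{elapsed:>3}s: {exercise}, flat out, for {work_s}s")
        elapsed += work_s
        print(f"{elapsed:>3}s: rest for {rest_s}s")
        elapsed += rest_s

tabata_schedule("squats")  # 4 minutes: 8 rounds of 20s on / 10s off
```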

To my mind, if everyone ate properly and walked 10,000 steps per day, the scare stats behind the media’s obesity fixation would disappear within a matter of months. 10,000 steps may seem a lot, and for many holding office jobs it may seem impossible, but walking is a wonderful form of exercise, since it allows you to lose yourself in thought or music, whichever takes your fancy. Even if you don’t have time for a separate walk, with a pedometer in hand (they are built into many modern iPods, and free pedometer apps are available for both iPhone and Android) and a target in mind (10k is the standard), after a couple of weeks it’s not unusual to find yourself subtly changing the tiny aspects of your day (stairs instead of lift, that sort of thing) to try and hit your target; and the results will follow. As car ownership, an office economy and lack of free time have all grown in the last few decades, we as a nation do not walk as much as we used to. It’s high time that changed.