Once Were Hairy

Aesthetically, humans stand out somewhat from the rest of natural creation. We are multicellular organisms, instantly making us completely different from the vast majority of the species on Earth today, and warm-blooded, differentiating us from every plant, fungus, invertebrate, fish, amphibian and reptile. We stand on two legs, but at the same time cannot fly, differentiating us from almost every species of bird and mammal. But this is only so much basic classification; the one trait that aesthetically differentiates us from nearly all of these is our hairlessness. Brian May excepted.

Technically, there are other members of the class Mammalia who go without fur; the simultaneously cute and horrifying naked mole rat is but one land-borne example, and no fully aquatic mammal (whales, dolphins and the like) has fur. And yes, we technically do have a full layer of fur covering us, meaning that you have more hairs (in terms of number, rather than volume) than a chimpanzee- but our hairy covering is so sparse as to be practically nonexistent, and the amount of insulation and protection that it provides is minimal. Across most of our body, the only natural protection we have against our enemies and the elements is our bare skin.

Exactly why this is the case is somewhat unclear, because fur is very useful stuff. It offers a surprising degree of protection against cuts and attacks, and is just as effective at keeping out the elements, be they cold, wind or rain. Length can and does vary widely depending on location and need, and many species (including some humans) have incorporated their fur as a form of bodily decoration to attract mates and intimidate rivals; the lion’s mane is the most obvious example.

Also confusing is why we have hair where we do; upon our heads and around the pubic regions. Hair on the head may be an almost vestigial thing, left over from the days when our ancestors did have fur, although this theory doesn’t explain why it remained on our heads in particular. Better explanations include the slight degree of extra shielding it provides to our brain, our greatest evolutionary advantage, or the fact that the obviousness of the head makes it a natural point for us to rig our hair into elaborate, ceremonial styles and headdresses, raising the social standing of those who are able to get away with such things and helping them attract a mate and further their genes. However, the pubic region is of particular interest to evolutionary biologists, in part because hair there seems counter-productive; the body keeps the testicles outside the body because they need to be kept slightly cooler than the body’s interior temperature in order to keep sperm count high and ensure fertility (an interesting side effect of which is that people who take regular hot baths tend to have a lower sperm count). Surrounding such an area with hair seems evolutionarily dumb, reducing our fertility and our chances of passing our genes on to the next generation. It is however thought that hair around these regions may aid the release of sexual pheromones, helping us to attract a mate, or that it may have helped to reduce chafing during sex, women tending to choose men with pubic hair (and vice versa) to make sex comfortable. This is an example of sexual selection, where evolution is powered by our sexual preferences rather than environmental necessity, and this itself has been suggested as a theory as to why we humans lost our hair in the first place, or at least stayed that way once we lost it; we just found it more attractive that way. This theory was proposed by Charles Darwin, which seems odd given the truly magnificent beard he wore.
However, the ‘chafing’ theory regarding pubic hair is rather heavily disputed from a number of angles, among them the fact that many couples choose to shave their pubic region in order to enhance sexual satisfaction. Our preferences could, of course, have changed over time.

One of the more bizarre theories concerning human hairlessness is the ‘aquatic apes’ theory; it is well known that all swimming mammals, from river dolphins to sea lions, favour fat (or ‘blubber’) in place of fur as it is more streamlined and efficient for swimming and is better for warmth underwater. Therefore, some scientists have suggested that humans went through a period of evolution where we adopted a semi-aquatic lifestyle, fishing in shallow waters and making our homes in and around the water. They also point to the slight webbing effect between our fingers as evidence of a change that was just starting to happen before we left our waterborne lifestyle, and to humanity’s ability to swim (I am told that if a newborn baby falls into water he will not sink but will instinctively ‘swim’, an ability we lose once we become toddlers and must re-learn later, but I feel it may be inappropriate to test this theory out). However, there is no evidence for these aquatic apes, so most scientists feel we should look elsewhere.

Others have suggested that the reason may have been lice; one only needs to hear the horror stories of the First World War to know the horribleness of a lice infestation, and such parasites are frequently the vectors for virulent diseases that can wipe out a population with ease. Many animals spend the majority of their time picking through their fur to remove them (in other apes this is a crucial part of social bonding), but if we have no fur then the business becomes infinitely simpler because we can actually see the lice. Once again, adherents point to sexual selection- without hair we can display our untarnished, healthy, parasite-free skin to the world and our prospective mates (along with any impressive scars we want to show off), allowing them to know they are choosing a healthy partner, and this may go some way to explaining why the ultimate expression of male bodily beauty is considered a strong, hairless chest and six-pack, symbolising both strength and health. Ironically, a loss of fur and our subsequent use of clothes gave rise to an entirely new species; the body louse lives only within the folds of our clothes, and is thought to have evolved from head lice some 50,000 years ago (interestingly, over a million years passed between our African ancestors passing through the hairless phase and our use of clothes, during which time we diverged as a species from Neanderthals, discovered tools and lived through an Ice Age. Must have been chilly, even in Africa). It’s a nice theory, but one considered redundant by some in the face of another; homeostasis.

Apart from our brainpower, thermoregulation (the homeostatic ability to regulate our body temperature) is humanity’s greatest evolutionary advantage; warm-blooded mammals are naturally adept at it anyway, giving us the ability to hunt & forage in all weathers, times and climates, and in cold weather fur provides a natural advantage in this regard. However, without fur to slow the process of heat regulation (sweating, dilation of blood vessels and such all become less effective when insulated by fur) human beings are able to maintain an ambient bodily temperature almost regardless of the weather or climate. African tribesmen have been known to run through the bush for an hour straight and raise their body temperature by less than a degree, whilst our ability to regulate heat in colder climates was sufficient for humans to survive across then-freezing Europe, as the scores of Ice Age-era bones found there attest. Our ability to regulate temperature surpasses even that of those other ‘naked’ land mammals, the elephant and rhinoceros, thanks to our prominent nose and extremities that allow us to control heat even more precisely. In short, we’re not 100% sure exactly why we humans evolved to be hairless, but it has proved a surprisingly useful trait.

The Alternative Oven

During the Second World War, the RAF pioneered the use of radar to detect incoming Luftwaffe raids. One of the key pieces of equipment used in the construction of the radars was called a magnetron, which uses a magnetic field to propel high-speed electrons and generate the kind of high-powered radio waves needed for such a technology to be successful over long distances. After the war was over, the British government felt it could share such technology with its American allies, and so granted permission for Raytheon, a private American enterprise, to produce them. Whilst experimenting with such a radar set in 1945, a Raytheon engineer called Percy Spencer reached for the chocolate bar in his pocket and discovered it had melted. He later realised that the electromagnetic radiation generated by the radar set had been the cause of this heating effect, and thought that such technology could be put to a different, non-military use- and so the microwave oven was born.

Since then, the microwave has become the epitome of western capitalism’s golden age; the near-ubiquitous kitchen gadget, usually in the traditional white plastic casing, designed to make certain specific aspects of a process already technically performed by another appliance (the oven) that bit faster and more convenient. As such, it has garnered its fair share of hate over the years, shunned by serious foodies as a taste-ruining harbinger of doom to one’s gastric juices that wouldn’t be seen dead in any serious kitchen. The simplicity of the microwaving process (especially given that there is frequently no need for a pot or container) has also led to the rise of microwavable meals, designed to take the concept of ultra-simple cooking to its extreme by creating an entire meal from a few minutes in the microwave. However, as everyone who’s ever attempted a bit of home cooking will know, such a process does not naturally occur quite so easily, and thus these ready meals generally require large quantities of what is technically known as ‘crap’ for them to function as meals. This low-quality food has become distinctly associated with the microwave itself, further enhancing its image as a tool for the lazy and the kind of societal dregs that the media like to portray in scare statistics.

In fairness, this is hardly the device’s fault, and it is a pretty awesome one. Microwave ovens work thanks to the polarity of water molecules; they consist of one positively charged end (where the hydrogen part of H2O is) and a negatively charged end (where the electron-rich oxygen bit is). Electromagnetic waves, such as the microwaves after which the oven takes its name, carry an electric field that (being as they are, y’know, waves) oscillates (aka ‘wobbles’) back and forth. This field wobbling back and forth causes the water molecules (technically it works with other polarised molecules too, but there are very few other liquids consisting of polarised molecules that one encounters in cookery; this is why microwaves can heat up stuff without water in, but don’t do it very well) to oscillate too. This oscillation means that they gain kinetic energy from the microwave radiation; it just so happens that the frequency of the microwave radiation is chosen so that it closely matches the resonant frequency of the oscillation of the water molecules, meaning this energy transfer is very efficient*; a microwave works out as a bit over 60% efficient (most of the energy being lost in the aforementioned magnetron used to generate the microwaves), which is exceptional compared to a stove-top kettle’s level of around 10%. The efficiency of an oven really depends on the meal and how it’s being used, but for small meals or for reheating cold (although not frozen, since ice molecules aren’t free to vibrate as much as liquid water) food the microwave is definitely the better choice. It helps even more that microwaves are really bad at penetrating the oven’s metal walls (and the mesh across its glass door), meaning they tend to bounce around until they hit the food, and very little of the energy gets lost to the surroundings once it’s been emitted.
However, if nothing is placed in the microwave then these waves are not ‘used up’ in heating food and tend to end up back in the microwave emitter, causing it to burn out and doing the device some serious damage.

*I have heard it said that this is in fact a myth, and that microwaves are in fact selected to be slightly off the resonant frequency range so that they don’t end up heating the food too violently. I can’t really cite my sources on this one nor explain why it makes sense.
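As an aside, the efficiency figure quoted above lends itself to a back-of-the-envelope estimate of heating times. The sketch below is purely illustrative: the 800 W rating, the 60% efficiency and the assumption that food is basically water are all rough assumptions rather than measurements.

```python
# Rough estimate of microwave heating time from power and efficiency.
# All figures (800 W oven, 60% efficiency, food treated as pure water)
# are illustrative assumptions, not measurements.

SPECIFIC_HEAT_WATER = 4186  # joules per kg per degree Celsius

def heating_time(mass_kg, temp_rise_c, rated_power_w=800, efficiency=0.6):
    """Seconds needed to heat mass_kg of (mostly-water) food by temp_rise_c degrees."""
    energy_needed = mass_kg * SPECIFIC_HEAT_WATER * temp_rise_c  # joules
    effective_power = rated_power_w * efficiency                 # watts actually reaching the food
    return energy_needed / effective_power

# e.g. a 300 g bowl of soup, fridge-cold (5 C) to piping hot (70 C):
t = heating_time(0.3, 65)
print(f"{t:.0f} seconds (about {t / 60:.1f} minutes)")
```

which comes out at a little under three minutes; reassuringly close to what the buttons on the front of the oven suggest.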

This use of microwave radiation to heat food incurs some rather interesting side-effects; up first is the oft-cited myth that microwaves cook food ‘from the inside out’. This isn’t actually true, for although the inside of a piece of food may be slightly more insulated than the outside, the microwaves should transfer energy to all of the food at a roughly equal rate; if anything the outside will get more heating since it is hit first by the microwaves. The effect is observed thanks to the chemical makeup of a lot of the food put in a microwave, which generally has the majority of its water content beneath the surface; this makes the surface relatively cool and crusty, with little water to heat it up, and the inside scaldingly hot. The use of high-power microwaves also means that just about everyone in the country has in their home a death ray capable of quite literally boiling someone’s brain if the rays were directed towards them (hence why dismantling a microwave is semi-illegal, as I understand it), but it also means that everyone has ample opportunity to, so long as they don’t intend to use the microwave again afterwards and have access to a fire extinguisher, do some seriously cool stuff with it. Whilst this is dangerous, rather stupid and liable to get you into some quite weird stuff, nothing is a more surefire indicator of a scientific mind than an instinct to go ‘what happens when…’ whilst looking at the powerful EM radiation emitter sitting in your kitchen. For the record, I did not say that this was a good idea…

The Price of Freedom

First of all, apologies for missing my post on Wednesday, and apologies in advance for missing one on Wednesday; I’ve had a lot of stuff to do over the past week and will be away during the next one. Ah well, on with the post…

We in the west set a lot of store by democracy; in America especially you will have no trouble finding someone willing to defend their ‘rights’ and freedom to the hilt, regardless of how dumb you think that particular right is. Every time a government attempts to ban or restrict some substance or activity, vast waves of protesters will take to the streets/TV/internet to complain that their right or ability to do X or Y is being restricted in direct contradiction to every document from the Magna Carta to the UN Declaration of Human Rights.

However, if we were permitted to be totally free (the ‘Altair’ end of the Order of Assassins/Knights Templar spectrum), with no laws restricting our activity, then we would quickly descend into an anarchic society. Murder, rape and theft would go unpunished as the minority of the evil-minded quickly became the majority by simple need. Various models of a system of anarchy, including mine, predict an eventual return to an ordered society of laws and structure, and we can all agree that serious crimes are Bad Things that are probably worth restricting, even if it requires us to restrict our ‘freedom’ to a certain degree. Clearly, freedom is not worth such crimes, and thus we have laws.

In fact, most of our legal system can be counted as a direct result of the law-setter in question asking ‘what is freedom worth?’. If a law is in place to restrict an activity, then freedom has been judged not to be worth that activity, for moral, financial or practical reasons (or a combination of the three); for other, unrestricted activities, freedom is considered worth the cost of allowing them. And, perhaps more interestingly, a vast majority of political debate can be essentially boiled down to two people’s different opinions concerning what price we are prepared to pay for freedom.

Take, as a simple example, the British government’s recent ‘pastie tax’, levied on hot baked goods. This was partly an attempt to bring in some much-needed cash for the government in their efforts to cut the deficit, but also had some degree of health motivation. Such food is frequently sold cheaply from fast food retailers and the like, meaning it is an easy source of hot, tasty food for the poorer or lazier sections of society; but its fat content is not kind to the waistline, and an overconsumption of such foods has been linked to ‘the national obesity epidemic’ that everyone gets so worked up about. This obesity problem is a major source of concern to the NHS, and thus to the government who pay for it, since in the long term it causes a dramatic upsurge in the number of diabetes cases. This is an expensive problem to combat and presents a major health hazard for the country as a whole, and the government (or at least George Osborne, in whose annual statement the tax first appeared) decided that this dual cost was not worth the freedom to enjoy such a snack so cheaply. This, as with all vaguely new and interesting decisions in a rather dull report concerning how poor the country is, was debated aggressively in the media, with the healthy eating people and economists broadly speaking backing the idea (or complaining that there was not enough done/government is stifling growth/insert predictable complaint about economy here) whilst others criticised the plan as just another example of the Tories targeting the lower rungs of society who most frequently enjoy a cheap meal from these sources. To these people, today’s world is an expensive and difficult one to live in, and the ability to have a hot, greasy, tasty meal for a price that they could easily budget for in the long run is a freedom well worth whatever obesity problems it is causing.
Such fundamental differences of opinion, particularly concerning taxation policy, are the irreconcilable forces that mean two political opponents will frequently find it impossible to back down.

In some other cases, the two participants of an argument will agree that freedom isn’t worth cost X, but will disagree on the mechanism for restricting said cost. The debate concerning the legalisation of drugs is one such example, for whilst part of the debate centres around a difference of opinion as to whether the freedom to get stoned is worth the cost of a country full of stoners and the consequences thereof (don’t believe anyone who tells you marijuana is a harmless drug; it isn’t, although the degree of harm it causes is generally what such debate turns on), another cause of disagreement concerns the problems of the drugs war. Opium is the biggest source of income for the Taliban (and a very large one for Afghanistan as a whole), whilst the gangs and cartels who operate the Latin American drugs trade have been directly linked to human trafficking, prostitution and other atrocities during the ongoing drugs wars with their local governments. This is a particular problem in Mexico, where since the government’s announcement of the ‘war on drugs’ there have been over 47,000 drugs-related murders. Everyone agrees that this is a Bad Thing, but a difference of opinion arises when considering which course of action would prove the most successful at combating the problem; the ‘legalise’ faction say that to legalise drugs would be to force the small-time criminals out of business as the well-policed official channels of trade took over, where sourcing and supply is performed by businessmen held accountable for their actions. At the very least, they suggest, it could do us good to lessen the sentencing of drug offenders and try to encourage quitting rather than just clapping people in jail, as this allows us to discourage people more easily and get to know more about the problem.
This approach is implemented to an extent in Europe (especially the Netherlands), whilst the more stringent laws of the United States (states such as Colorado excepted) take the opposite line; they say that to relax drug restrictions simply encourages use, gives more trade to the cartels and only increases their power. Whether they are right or not is very much up for debate since the alternative hasn’t really been tried on a large scale, particularly in America; but the growing movement to look for an alternative solution to the problem, combined with the statement from former presidents of Guatemala, Mexico and Colombia that ‘the war on drugs has failed’ means that we may soon see how the other approach ends up. For the record, I remain undecided on the issue- the stats from the Netherlands tell me that drug use will increase with decriminalisation, which I don’t especially like the prospect of (that stuff’s not for me, and I’m not entirely sure why it should be for anyone else either), but it strikes me that this approach may reap dividends when it comes to combating the secondary problems caused by the drug wars. A friend who is kind of into this business (and, incidentally, comes firmly down on the ‘legalise’ side) recommends the YouTube film ‘Breaking The Taboo’, which you may want to watch if this kind of thing interests you.

…OK, that one slightly got away from me, but the discussion got kind of interesting. The key message here, really, is one of self-examination. Take a look at your political views, your outlook on life in general, and then ask yourself: to me, what is freedom worth?

So. It is done…

Yes, the party’s finally over; the Six Nations done and dusted for another year. Saturday’s matches were a mixed bunch, yet most definitely not as dull as in previous rounds. This week’s awards ceremony will be undergoing something of a reshuffle; rather than doing the matches in chronological order, losers first (as usual), I’m going to leave England-Wales until last. Anyone who saw, or even heard about, the match will probably be able to work out why.

But we must begin somewhere; IRELAND, to be precise, whose award for both this match and, arguably, their championship as a whole is the Another One Bites The Dust Award for Highest Attrition Rate. I talked in a previous post about Ireland’s depressingly high injury rate against England, and there was more of the same today; promising young centre Luke Marshall and winger Keith Earls were off within 25 minutes, and no sooner had Earls’ replacement Luke Fitzgerald entered the fray than he was limping off with a leg injury. With barely half an hour of the match played and all but one of their backs substitutes used, Ireland flanker Peter O’Mahony was forced to spend the remainder of the match out on the wing, and given O’Mahony’s efforts at the breakdown in recent matches it was no wonder Ireland lost momentum without him in the thick of things. However, Ireland’s injury woes were compounded by three yellow cards; firstly to Brian O’Driscoll after a stamp that really should have warranted red (although that would have been something of an ignominious end (if so it proves) to the international career of the greatest centre of all time), and later to Donnacha Ryan and Conor Murray. I felt rather sorry for them; trying to keep any form of structure through all that is nigh-on impossible.

ITALY also picked up a yellow card, this time to captain Sergio Parisse, but they were not hamstrung by injuries or errors in the same way as the Irish, and took home not only the win but also the Maori Sidestep Award for Most Exciting Use of The Crash Ball. There were many impressive facets of Italy’s game on Saturday; their handling was superb (Parisse producing another exquisite underhand flick in the same fashion as last week), Luciano Orquera once again ran the show and some of the running rugby put on display was quite superb to watch. However, what entertained me most of all was Italy’s use of their forwards; whilst sending the big man through on a collision course with some poor defender is hardly a new strategy, rarely is it executed with quite the same excitement, speed and aggression that the Italians managed. No taking the ball standing still for them, no slowing down before the hit; every crash ball came at sprinting pace, and much credit is due to the Irish defence for their ability to counter the Italian efforts. All in all, a very entertaining match, a well-deserved win, and a fitting end to the career of 104-cap veteran prop Andrea Lo Cicero.

SCOTLAND’s match against France was slightly less exciting, and a 9-9 half-time scoreline was rather more reflective of the game than similar results in the weekend’s other two matches. However, things picked up (at least for the French) in the second half and Scotland were, eventually, able to get a try- in doing so taking the …Is That Legal? Award for Most Dubious Try-Scoring Tactic. With 75 minutes on the clock and 14 points down, the Scots could be somewhat forgiven for a slightly frayed temper, but Sean Lamont’s bit of very subtly executed and rather impressive cheating was perhaps a shade too far to be really fair. Scotland had won a lineout near halfway and were putting the ball through the hands, Lamont running the dummy line- so far, so normal. What was less normal was Lamont’s subsequent decision to ‘accidentally’ finish his dummy line by running straight into Gael Fickou, knocking the unsuspecting youngster to the ground and leaving a nice hole for centre partner Matt Scott to break through, before offloading to Tim Visser for the try. The French crowd at the time appeared to express their disapproval, but referee Nigel Owens apparently didn’t see it and the try stood. If the scores had been closer at the time, I think the French would have been somewhat angrier.

As for FRANCE themselves, coach Philippe Saint-André could easily have won Best Half-Time Team Talk, such was the transformation in his team when they ran out for the second 40; but I think it is perhaps more reflective of their championship for Vincent Debaty to take the Swing And A Miss Award for Most Fluffed Opportunity. The move had started brightly enough, Debaty taking the ball on the run and using all of his considerable bulk to smash two desperate Scotsmen out of the way. The big prop rumbled off down the wing, and the try seemed fairly certain; Stuart Hogg remained as Scotland’s last line of defence, and France’s flying winger Vincent Clerc was jogging up on Debaty’s outside just waiting to receive the winning pass. However, so engrossed was Debaty with the prospect of only the lithe, skinny Hogg standing between him and the try line that he never even looked at Clerc, and was arguably totally unaware of his team-mate’s existence. Rather than give the pass that would surely have made the five points a formality, Debaty went on his own, was (somehow) taken down by Hogg, and France gave away a penalty at the resulting ruck. It was the perfect metaphor for France’s tournament; plenty of promise, an opportunity ripe for the taking, but it all amounted to nothing.

However, by far the best match of the weekend, and arguably the championship, had taken place a couple of hours earlier, where ENGLAND, who had travelled over the Severn in search of a Grand Slam, were soundly thwacked by a rampant Welsh side. I could think of half a dozen awards England could have won; Most Passionate Singing of The Anthems, Worst Rucking, Worst Scrummaging, Biggest Pissing-Off Of A Referee, but in the end I couldn’t look beyond the At Least You Didn’t Give Up Award for Most Optimistic Way to End A Game. As the game entered its final couple of minutes, England were well beaten; 27 points down, decidedly on the back foot and looking like they just wanted to leave all thoughts of rugby behind for a day or two. This is the time when you just wind down the clock, boot the ball out and walk off disgusted- but apparently nobody had told them that. When awarded a penalty just a few seconds from time, Danny Care (winner of the Least Necessary And Appropriate Chip Kick award ten minutes previously) decided to take the tap penalty and run for it, and his team joined in with gusto. For a minute, the England side managed to muster great energy and desire to play, showing a bit of much-needed character. It might have ended with a dropped ball, but I will always take my hat off to a team prepared to have a go even when all else is lost. Or I might just be getting overly patriotic.

Also deserving of a whole host of awards were WALES; their rucking game was superb, man of the match Justin Tipuric matched only by his blindside flanker partner Sam Warburton, and even Dan Biggar managed to break free of his more customary ‘meh, he’s alright’-ness (my apologies if he ever ends up reading this; just not my type of player I guess) to operate the Welsh back line effectively and slot a cheeky drop-goal. However, the man I want to single out is tighthead prop Adam Jones, my pick for the MOTM award and worthy recipient of the Understated Lynchpin Award for Most Significant Contribution from a Single Player. Of the several areas where Wales controlled the game, the scrum was perhaps the most spectacular; England can’t have won more than two all match and their front row was getting ripped to shreds. Every scrum, the procedure was the same; the experienced scrummaging master that is Adam Jones completely nullified Joe Marler, who should have had the advantage from loosehead, before driving between him and hooker Tom Youngs to split the English scrum and force the penalty. Penalties came for collapsing, missing binds, standing up and just about every other clause of Law 20, not only turning referee Steve Walsh in Wales’ favour (I am not going to say he was biased as some others on the web have done, merely that Wales played him far better than the English) but setting England on the back foot for the rest of the game. Every time a scrum went down, we might as well have saved time by awarding Wales a penalty then and there, allowing England to build no attacking momentum. Combine that with the fact that Wales were competing properly in the rucks, slowing down ball in precisely the way that England weren’t, and all the momentum went the way of the home side. After that, victory was not long in coming.

As an Englishman, I don’t like admitting that Wales were the better side, and I certainly don’t like losing the match, the tournament, the Grand Slam and (potentially, although I hope for the sake of victory that it doesn’t happen) Lions places to them. But, as I said elsewhere before this weekend: “I’d be fine with Wales winning so long as they actually decided to play some damn rugby for a change”. I will quite happily accept that as them “playing some damn rugby”. Well played Wales. Well bloody played ye bastads.

Final Scores: Italy 22-15 Ireland
Wales 30-3 England
France 23-16 Scotland

Components of components of components…

By the end of my last post, science had reached the level of GCSE physics/chemistry; the world is made of atoms, atoms consist of electrons orbiting a nucleus, and a nucleus consists of a mixture of positively charged protons and neutrally charged neutrons. Some thought that this was the deepest level things could go; that everything was made simply of these three things and that they were the fundamental particles of the universe. However, others pointed out the enormous size difference between an electron and proton, suggesting that the proton and neutron were not as fundamental as the electron, and that we could look even deeper.

In any case, by this point our model of the inside of a nucleus was incomplete anyway; in 1932 James Chadwick had discovered the neutron, first theorised about by Ernest Rutherford to act as a ‘glue’ preventing the protons of a nucleus from repelling one another and causing the whole thing to break into pieces. However, nobody actually had any idea exactly how this worked, so in 1934 a concept known as the nuclear force was suggested. This theory, proposed by Hideki Yukawa, held that nucleons (then still considered fundamental particles) emitted particles he called mesons; smaller than nucleons, they acted as carriers of the nuclear force. The physics behind this is almost unintelligible to anyone who isn’t a career academic (as I am not), but this is because there is no equivalent to the nuclear force that we encounter during the day-to-day. We find it very easy to understand electromagnetism because we have all seen magnets attracting and repelling one another and see the effects of electricity every day, but the nuclear force was something more fundamental; a side effect of the constant exchange of mesons between nucleons*. The meson was finally found (proving Yukawa’s theory) in 1947, and Yukawa won the 1949 Nobel Prize for it. Mesons are now understood to be composite particles rather than fundamental ones, whilst the fundamental carrier of the strong force is a particle called the gluon; its name hints at this purpose, coming from the word ‘glue’.

*This, I am told, becomes a lot easier to understand once electromagnetism has been studied from the point of view of two particles exchanging photons, but I’m getting out of my depth here; time to move on.

At this point, the physics world decided to take stock; the list of all the different subatomic particles that had been discovered became known as 'the particle zoo', but our understanding of them was still patchy. We knew nothing of what the various nucleons and mesons consisted of, how they were joined together, or what allowed the strong nuclear force to even exist; where did mesons come from? How could these particles, carrying a sizeable fraction of a proton's mass, be emitted from one without tearing the thing to pieces?

Nobody really had the answers to these, but when investigating them people began to discover other new particles, of a similar size and mass to the nucleons. Most of these particles were unstable and extremely short-lived, decaying into the undetectable in trillionths of trillionths of a second, but whilst they did exist they could be detected using incredibly sophisticated machinery, and their existence, whilst not ostensibly meaning anything, was a tantalising clue for physicists. This family of nucleon-like particles was later called baryons, and in 1961 American physicist Murray Gell-Mann organised the various baryons and mesons that had been discovered into symmetrical groups, a system that became known as the eightfold way after its octets: the mesons formed one group of eight, and the baryons with a 'spin' (a quantum property of subatomic particles that I won't even try to explain) of 1/2 formed another. The baryons with a spin of 3/2 (one and a half) fell instead into a larger group of ten; except that only nine of them had been discovered. By extrapolating the pattern of this group, Gell-Mann was able to theorise about the existence of a tenth 'spin 3/2' baryon, which he called the omega baryon. This particle, with properties matching almost exactly those he predicted, was discovered in 1964 by a group experimenting with a particle accelerator (a wonderful device that takes two very small things and throws them at one another in the hope that they will collide and smash to pieces; particle physics is a surprisingly crude business, and few other methods have ever been devised for 'looking inside' these weird and wonderful particles), and Gell-Mann took the Nobel Prize five years later.

But, before any of this, the principle of the eightfold way had been extrapolated a stage further. Gell-Mann collaborated with George Zweig on a theory concerning entirely theoretical particles known as quarks; they imagined three 'flavours' of quark (which they called, completely arbitrarily, the up, down and strange quarks), each with its own properties of spin, electrical charge and such. They theorised that each of the properties of the different hadrons (as mesons and baryons are collectively known) could be explained by the fact that each was made up of a different combination of these quarks, and that the overall properties of each particle were due, basically, to the properties of their constituent quarks added together. At the time, this was considered somewhat airy-fairy; Zweig and Gell-Mann had absolutely no physical evidence, and their theory was essentially little more than a mathematical construct to explain the properties of the different particles people had discovered. Within a year, supporters of the theory, Sheldon Lee Glashow and James Bjorken, suggested that a fourth quark, which they called the 'charm' quark, should be added to the theory in order to better explain radioactivity (ask me about the weak nuclear force, go on, I dare you). It was also later realised that the charm quark might explain some of the puzzling decay behaviour of the kaon and pion, two particles discovered in cosmic rays 15 years earlier that nobody properly understood. Support for the quark theory grew; and then, in 1968, a team studying deep inelastic scattering (another wonderfully blunt technique, which involves firing an electron at a nucleus and studying how it bounces off in minute detail) revealed a proton to consist of three point-like objects, rather than being the solid, fundamental blob of matter it had previously been thought to be. Three point-like objects matched exactly Zweig and Gell-Mann's prediction for the existence of quarks; they had finally moved from mathematical theory to physical reality.
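The 'add the quarks together' bookkeeping is easy to illustrate with electric charge. A minimal sketch (the fractional charges are the now-standard values, and the `hadron_charge` helper is mine, purely for illustration):

```python
# Electric charges of the original three quark flavours, in units of the
# electron's charge. A baryon is three quarks; a meson is a quark plus an
# antiquark, and an antiquark carries the opposite charge to its quark.
from fractions import Fraction

CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3),
          "strange": Fraction(-1, 3)}

def hadron_charge(quarks, antiquarks=()):
    """Sum the charges of the constituent quarks and antiquarks."""
    return sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)

print(hadron_charge(["up", "up", "down"]))         # proton: 2/3 + 2/3 - 1/3 = +1
print(hadron_charge(["up", "down", "down"]))       # neutron: 2/3 - 1/3 - 1/3 = 0
print(hadron_charge(["up"], antiquarks=["down"]))  # positive pion: 2/3 + 1/3 = +1
```

Every hadron in the particle zoo checks out the same way, which is precisely why a purely mathematical construct came to be taken so seriously.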

(The quarks discovered were of the up and down flavours; the charm quark wouldn't be discovered until 1974, by which time two more quarks, the top and bottom, had been predicted by an incredibly obscure theory concerning the relationship between antimatter and normal matter. No, I'm not going to explain how that works. For the record, the bottom quark was discovered in 1977 and the top quark in 1995.)

Nowadays, the six quarks form an integral part of the Standard Model; physics' best attempt to explain how everything in the world works, at least on the level of fundamental interactions. Many consider them, along with the six leptons and four gauge bosons*, to be the fundamental particles that everything is made of; these particles exist, are fundamental, and that's an end to it. But the Standard Model is far from complete; it isn't readily compatible with general relativity, doesn't explain gravity or many of the effects observed in cosmology that are blamed on 'dark matter' and 'dark energy', and gives rise to a few paradoxical situations that we aren't sure how to explain. Some say it just isn't finished yet, and that we just need to think of another theory or two and discover another boson. Others say that we need to look deeper once again and find out what quarks themselves contain…

*A boson, in this context, is anything that 'carries' a fundamental force, like the gluon; the recently discovered Higgs boson is something of a special case, since rather than carrying a force it is an excitation of the Higgs field, which is what gives the W and Z bosons their mass.

The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was 'what is our world made of?'. People reasoned that everything had to be made of something; all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some 'stuff' or other. Some reasoned that everything was made up of the most common things present in our earth, the classical 'elements' of earth, air, fire and water, but throughout the latter stages of the last millennium the burgeoning science of chemistry began to debunk this idea. People sought a new theory to answer what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions; why chemicals that reacted together did so in fixed ratios, for example. For a solution to this problem, they returned to an idea almost as old as science itself: that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these 'joinings' was what was responsible for chemical reactions and their behaviour. The earliest scientists who theorised the existence of these things called them corpuscles; nowadays we call them atoms.

By the turn of the twentieth century, thanks to two hundred years of chemists using atoms to conveniently explain their observations, it was considered common knowledge among the scientific community that the atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, with its small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance it out; Thomson postulated that the negative electrons 'floated' within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford sought to test the plum pudding model; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that while most passed straight through, some were deflected at wild angles. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when an alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but that most of the time it would miss this positive charge completely; most of the atom was empty space. So, he thought, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.

This made sense in theory, but the maths didn't check out; classical physics predicted that orbiting electrons would constantly radiate away energy and spiral into the nucleus, meaning every atom in creation should collapse in an instant. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to be complete; these energy levels (or 'shells') were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr's explanation drew heavily from Max Planck's recent quantum theory, which modelled light as coming in discrete packets of energy, and this suggested that electrons were also quantum particles; this ran contrary to people's previous understanding of them, since they had been presumed to be solid 'blobs' of matter. This was but one step along the principle that defines quantum theory; nothing is actually real, everything is quantum, so don't even try to imagine how it all works.
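For hydrogen, Bohr's discrete levels follow a strikingly simple pattern: the energy of the nth shell is -13.6 eV divided by n squared, where 13.6 eV is hydrogen's measured ionisation energy. A quick sketch of how the shells account for hydrogen's spectral lines (the `energy_level` function is just my illustration of the formula):

```python
# Bohr's energy levels for the hydrogen atom: E_n = -13.6 eV / n^2,
# where n = 1, 2, 3, ... labels the 'shell'. A photon emitted when an
# electron drops between shells carries away the energy difference,
# which is what produces hydrogen's distinctive spectral lines.
RYDBERG_EV = 13.6  # hydrogen's ionisation energy, in electronvolts

def energy_level(n):
    return -RYDBERG_EV / n**2

# The famous red line of hydrogen's Balmer series: an electron falling
# from shell 3 to shell 2 releases a photon of roughly 1.9 eV.
photon_ev = energy_level(3) - energy_level(2)
print(f"n=3 -> n=2 transition: {photon_ev:.2f} eV")
```

The fact that this simple formula reproduced hydrogen's observed spectral lines almost exactly was what won the quantised model its acceptance.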

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance's mass, given how little the electrons weighed?

Subsequent experiments have revealed the atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly one millimetre in diameter, but if an atom were the size of St. Paul's Cathedral then its nucleus would be the size of a full stop. The sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus' structure; it took Ernest Rutherford (the man who had disproved the plum pudding model) to take the first step along this road when, in 1918, he confirmed that the nucleus of a hydrogen atom comprised just one component (or 'nucleon', as we collectively call them today). Since this component had a positive charge, to cancel out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that all the other positive charges in larger atomic nuclei were caused by more protons stuck together in the nucleus. However, having multiple positive charges all in one place would normally cause them to repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of glue to hold the nucleus together against that electromagnetic repulsion. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same size, collectively constitute around 99.95% of any given atom's mass, and are found in every atomic nucleus bar ordinary hydrogen's lone proton. However, even these weren't quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; and I'll pick up that story next time.
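For the numerically inclined, those analogies boil down to a couple of ratios; a quick sketch using order-of-magnitude textbook sizes (the exact figures vary from atom to atom):

```python
# Typical sizes, to order of magnitude.
ATOM_DIAMETER_M = 1e-10     # about a tenth of a nanometre
NUCLEUS_DIAMETER_M = 1e-15  # a femtometre or so

ratio = ATOM_DIAMETER_M / NUCLEUS_DIAMETER_M
print(f"An atom is ~{ratio:,.0f} times wider than its nucleus")

# Volume scales as the cube of the diameter, so the nucleus crams
# ~99.95% of the atom's mass into a vanishing fraction of its volume.
volume_fraction = (NUCLEUS_DIAMETER_M / ATOM_DIAMETER_M) ** 3
print(f"...yet the nucleus fills only ~{volume_fraction:.0e} of the atom's volume")
```

A factor of a hundred thousand in width, and a million-billionth in volume, is what 'mostly empty space' actually means.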

The Penultimate Round…

It’s that time of week again; time for the Six Nations to dust itself off after another week’s hiatus and give me my rugby fix again this weekend. And when the tournament comes back, so too do my awards.

SCOTLAND are this week's starting point, and takers of the Shooting Themselves In The Foot Award for Most Idiotic Penalties. Scotland's match against Wales on Saturday was a dull, dour and undoubtedly boring affair governed almost exclusively by penalties; indeed, the match broke the world record for most penalty attempts on goal in international rugby history. As Andrew Cotter said, "Occasional bouts of rugby… threatened to break out between the penalties". This can partly be blamed on two sides with good kickers and weather that was hardly conducive to free-flowing rugby, but both sets of forwards must take their own, fairly large, share of the blame. A total of 28 penalties were conceded throughout the course of the game, 18 of which resulted in a shot at the posts, and the majority of them seemed to come courtesy of the Scottish forwards. All of them appeared hell-bent on committing as many blatantly obvious infringements as possible well within the range of Leigh Halfpenny, and all seemed really surprised when Craig Joubert blew his whistle after watching them fly into the side of a ruck right under his nose. Particularly persistent offenders included hooker Ross Ford and second row Jim Hamilton (the latter of whom committed what BBC Sport described as 'possibly the most blatant infringement in rugby history'), and both were exceedingly lucky to receive only stern talkings-to from Joubert rather than anything more severe.

WALES‘ award is related to Scotland's; the Dude, Seriously? Award for Least Deserved Yellow Card. As the game entered its final two minutes, many in the Welsh camp would have been justifiably miffed to have played almost the entire game against a full complement of 15 men. To be sure, Wales were hardly blameless on the penalty front (conceding 12 in all), but their infringements never seemed as blatant, cynical or downright stupid as the Scots', and the Welsh-favoured scoreline was demonstrative of the fact. However, whilst a few diehard Welshmen may have been convinced that Joubert was letting the Scots get away with murder, I don't think too many would have been vastly angry with his disciplinary decisions; until, that is, he decided to show a yellow card to Welshman Paul James. For one thing, James had only been on the pitch for around 10 minutes, and for another it was 2 minutes from the end with Scotland 10 points behind in a game where a comeback never looked likely. James had infringed, but was far from the worst offender on most definitely not the worst offending team. I am sure that it made sense to Craig Joubert at the time; it didn't make much sense to me, sat on my sofa.

Saturday's next game proved far more entertaining, thanks both to Steve Walsh's well-managed refereeing and to IRELAND‘s That's More Like It Award for Most Positive Outlook Given The Conditions. The weather in Dublin was, if anything, worse than it had been at Murrayfield earlier in the day, and having played in such conditions on Thursday I can attest that they do not lend themselves to flowing rugby by any stretch of the imagination; indeed, just keeping hold of the ball proved a decent challenge both for me and for the internationals. Ireland were also coming off a bad run of form, with their first-choice fly half injured and coach Declan Kidney fearing for his job. Combine that with a match against a lacklustre French side lying bottom of the Six Nations table, and we have all the ingredients for a decidedly bad game.

However, nobody appeared to have told the Irish this, and they attacked Saturday's match with all the vim and vigour of a midsummer warm-up game. Paddy Jackson bossed things from fly half, and along with Rob Kearney and Conor Murray executed a sublime kicking game that had the French under pressure all match. This combined well with a slick Irish lineout and a potent mauling game, all of which seemed infused with a genuine sense of fluidity and a desire to take the game to the French. Did it result in points? Not to any great extent (the conditions were too unkind for high scoring, and the French defending was pretty solid), but it put the French decidedly on the back foot for the entire first half and rescued an afternoon of rugby that had the potential to be thoroughly awful.

I am more than willing to compliment FRANCE too, and offer them the Hang On In There Award for Most Tenacious Performance. France barely survived the first half; Ireland seemed perpetually camped in their half and offered them practically zero attacking opportunities. Indeed, every scrap of French possession seemingly went straight to Freddie Michalak, under a lot of pressure having been bizarrely reinstated at fly half in place of the in-form Francois Trinh-Duc, and the mercurial talent that is Wesley Fofana can’t have touched the ball more than twice. Even Yoann Huget seemed somewhat out of it, and only Louis Picamoles offered France go-forward.

Nonetheless, they hung on; France’s gritty defending meant they were only 10 points behind at half time, and after the interval their strategy began to get more offensive. Their defence began to blitz more, killing the Irish momentum and jump starting their turnover rate. With a bit more ball, they started to do a bit of attacking of their own, and with 20 minutes to go picked up their first points since the first half. A try, courtesy of Picamoles, followed not long afterwards, and whilst I wouldn’t go so far as to say that they deserved to beat the Irish, they certainly acquitted themselves far better than in recent weeks.

Sunday's game looked, on the face of it, set at least to end the try drought that has plagued these past three rounds, but in the end 'twas not to be. This can partly be put down to the efforts of a heroic ITALY team, who battled through their underdog tag and some slightly harsh refereeing decisions to claim the How Did We Not Win This? Award for Most Man of the Match Contenders. It could be argued that nobody in the Italian side had an out-and-out flawless game, the kind that wins matches on its own, but nobody would deny the number of merely very good performances put on display. Luke McLean showed some great attacking nous, eventually picking up the game's only try, along with a good defensive showing, whilst any member of the Italian front row could have been nominated for doing a number on the English scrum. Behind them, Alessandro Zanni appeared to be popping up everywhere, Sergio Parisse had a magnificent return following his truncated ban (including one sublime pass that fooled me even on the third replay), Luciano Orquera bossed the show with a return to his form from earlier in the championship, and the eventual man of the match Andrea Masi put in a typically defiant, bullish performance from fullback. Unfortunately, Italy's penalty count was simply too high, and they proved as unable as England to execute the majority of their opportunities, despite a dominant second-half display. Good though Italy undoubtedly were, and tense though the match was, it wasn't quite enough to secure a second victory for the Azzurri. Roll on Ireland next week…

ENGLAND were somewhat less impressive, and take the Rugby Playing Equivalent Of The Amazon Rainforest Award for Least Sustainable Winning Strategy. England's victory came courtesy of six penalties from Toby Flood, one of the few England players to do a good job yesterday. After victories over France and Ireland came in similar fashion, pundits were quick to praise England's opportunism, composure and ability to execute, to force their opposition into infringements and take the victory from there. However, against Italy they enjoyed none of the dominance they had in previous matches, and the high penalty count against the Italians that ultimately gave them the win seemed as much down to luck and a period of early territory as anything else. Better sides, the southern hemisphere giants in particular, will not give away that many penalties, and England will not be able to manufacture such opportunities against them. It could be that Sunday's game was the perfect wake-up call England needed to get their act together in time for Wales next week; or it could be that England's current way of playing is a tactical time bomb waiting to go off in their face.

Final Scores:
Scotland 18-28 Wales
Ireland 13-13 France
England 18-11 Italy

What’s so bad?

We humans love a good bit of misery. We note when bad luck befalls us, chronicle our ailments and often consider ourselves to be having a harder-than-average time of it all. The news and media constantly bombard us with stories of injustice, crime, health scares and why we are basically the worst country imaginable in every conceivable respect, and one only needs to spend a few minutes on any internet forum or discussion board to find a thousand new and innovative reasons as to why you, and everything you stand for, are totally horrible and stupid and deserve to die. And/or that any faith in humanity you have is entirely misplaced.

However, optimists across the globe have pointed out that if the human race was actually as evil, despicable or otherwise useless a barrel of skunks as it often appears, we probably wouldn’t actually be around; or, at the very least, there certainly is nice and good stuff in this world that we humans are responsible for. So why are we so fixated on the bad? Why our attraction to misfortune? Are we all secretly schadenfreude junkies?

Part of the reason is of course traceable back to the simple fact that we humans are decidedly imperfect creatures, and that there is an awful lot of bad stuff in this world; these scare stories have to come from somewhere, after all. Take the two things that I feel most strongly about: climate change and slavery. In the two centuries since the industrial revolution, we have inflicted some catastrophic damage on our planet in the frankly rather shallow pursuit of profit and material wealth that always seem so far away; not only has this left vast scars of human neglect on many parts of our earth, but the constant pumping of pollutants into our atmosphere has sorely depleted our precious, irreplaceable natural resources and is currently in the process of royally screwing with our global climate; it may be centuries before the turmoil calms down, and that's assuming we ever manage to get our act together at all. On the other front, there are currently more slaves in existence today than at any other point in history (27 million, or roughly the combined population of Australia and New Zealand), a horrifying tribute to some people's sheer ruthlessness and disrespect for their fellow man. The going rate for a slave today, at just $90, is around 500 times less than it was in the days before William Wilberforce ended the Atlantic slave trade, and this website allows you to calculate approximately how many slaves worldwide work to maintain your lifestyle. I got 40. I felt kinda sick.

However, as previously said, there is also a fairly large quantity of awesome stuff in this world, so why does it not seem to be as well documented and studied as what's going wrong? We never hear, for example, that X government department was really efficiently run this year, or that our road system is, on average, one of the best and safest in the world; it's only ever the horror stories that get out.

Maybe it's that we actively take pleasure in such pain; that we really do crave schadenfreude, or at the very least that negative feelings carry a powerful emotional charge. We exploit such emotion constantly in other situations, after all; many films explore the world of the dark and horrifying in order to get under our skin and elicit a powerful emotional response, and music is often 'designed' to do the same thing. The emotions of hatred, of horror, of loss, even of fear, all elicit some primal response within us, and can create a reciprocating emotion of catharsis: the sense of realisation and acceptance combined with a sense of purification of the soul. This emotion was the most sought-after feature of classical Greek tragedy, and required great influxes of negative emotion for it to work (hence why the great tragic festivals incorporated comedic satyr plays to break the sheer monotony of depression); maybe we seek this sense of destructive satisfaction in our everyday lives too, revelling in the horror of the world because it makes us, in an almost perverse way, feel better about it. And hey; I like a good mope now and again as much as the next man.

But to me, this isn't the real reason, if only because it overlooks the most simple explanation: that bad stuff is interesting because it stands out. Humans are, in a surprising number of ways, like magpies, and we always get drawn to everything outside the norm. If it's new, it's unusual, so we find it intriguing and want to hear about it. Nowadays, we live a very sheltered existence in which an awful lot of stuff goes right for us, and the majority of our experiences of life and other people fall into the decidedly 'meh, OK' category. Rarely are you ecstatic about the friendliness of the staff in your local newsagents, as most such people tend to be little more than a means to an end; they are efficient and not horrible about allowing you to purchase something, as it should be. This is so commonplace, purely by virtue of being good business practice, that it is considered the norm, and it's not as if there's much they can do to elevate the experience and make it particularly enjoyable for you; but it's far easier for them to be surly and unhelpful, making you note not to visit that shop again. This applies in countless other walks of life; the strictness of our national driving test means that the majority of people on any given road are going to behave in a predictable, safe fashion, meaning the guy who almost kills you pulling out onto the motorway sticks in your mind as an example of how standards are falling everywhere and the roads are hideously unsafe.

To me, the real proof of this theory is that we are capable of focusing on the good things in life too; when they too are somewhat dramatic and unusual. During last summer’s Olympics, an event that is unlikely to occur in Britain again in my lifetime, the news recorded various athletes’ medal success and the general awesomeness of the event every evening and everyone seemed positively taken aback by how great the event was and how much everyone had got behind it; it was genuinely touching to see people enjoying themselves so much. But in our current society, always striving to improve itself, finding examples of things hitting well below par is far easier than finding stuff acting above and beyond the call of awesome.

Although admittedly being happy the whole time would be kinda tiring. And impractical.

The Plight of Welsh Rugby

It being a rugby time of year, I thought I might once again cast my gaze over the world of rugby in general. Rugby is the sport I love, and the coming of professionalism has seen it become bigger, faster, and more of a spectacle than ever before. The game itself has, to my mind at least, greatly benefited from the coming of the professional age; but with professionalism comes money, and where there’s money there are problems.

Examples of how financial problems have ruined teams abound all over the world, from England (led by the financial powerhouse of the RFU) to New Zealand (where player salary caps are, if I remember correctly, set at £50,000 to avoid the unions bankrupting themselves). But the worst examples are to be found in Britain, specifically in Wales (and, to a lesser extent, Scotland).

Back in the day, Wales was the powerhouse of northern hemisphere rugby. Clubs like Bridgend, Pontypool and Llanelli, among others, churned out international-level stars at a quite astounding rate for such relatively small clubs. Amidst the valleys, rugby was a way of life, something that united whole communities who would turn out to watch their local clubs in fierce local derbies. And the results followed; despite England and France enjoying the benefit of far superior playing numbers, Wales were among the most successful sides in the then Five Nations Championship, Welsh sides were considered the major challenge for touring southern hemisphere teams, and the names of such Welsh greats as JPR Williams, Barry John, Phil Bennett and, most famous of the lot, Gareth Edwards, have resonated down the ages. Or so, at least, the nostalgic rugby press tells me, since I wasn't really in a position to notice at the time.

However, professionalism demands that clubs pay their players if they wish to keep hold of them, and that requires them to generate a not insignificant degree of income. Income requires fans, and more importantly a large number of fans who are willing and able to travel to games and pay good money for tickets and other paraphernalia, and this requires a team to be based in an area of sufficient population and wealth. This works best when clubs are based in and around large cities; but since rugby is a game centred on rolling around in a convenient acre of mud, it does not always translate well to a city population. As such, many rugby heartlands tend to be fairly rural, and thus present major issues when considering a professional approach to the game. This was a major problem in Scotland; their greatest talent pool came from the Borders region, home of such famous clubs as Melrose and Galashiels, but when the game went pro in 1995 the area only had a population of around 100,000 and was declining economically. For the SRU to try and support all their famous clubs would have been nigh-on impossible, since there are only so many potential fans to go around among the many clubs with proud rugby heritage in such a relatively small area, and to pick one club over another would have been a move far too dangerous to contemplate. So they opted for a regional model: the old clubs would form their own leagues and act as a talent pool for regional sides, which would operate as big, centrally contracted, professional outfits. The idea was that everyone, regardless of their club of origin, would come together to back their region, the proud sum of its many parts; but in reality many consider regional sides to be rather soulless outfits without the heritage or locality to drum up support.
Scotland originally formed four regions, but the Caledonia Reds (covering the vast, sparsely populated area north of the major cities) were disbanded after just a season, and the Border Reivers, sprung from Scotland's rugby heartland, went in 2005 after poor results and worse attendances. Now only Edinburgh and Glasgow are left, doing what they can in places with all the money and none of the heritage.

Ireland also adopted the regional model, but there it was far less of a problem. Ireland (which for rugby purposes incorporates Northern Ireland as well) is a larger, more densely populated country than Scotland, and actually has four major cities to base its four regional sides in (Limerick, Galway, Belfast and Dublin; the last of these, as supposedly the largest conurbation in Europe without a major football side, has huge potential to grow into a rugby powerhouse). Not only that, but relatively few Irish clubs had garnered the fame and prestige of their fellow Celts, so the regions didn't have so many heritage problems. And it shows; Ireland is now the most successful country in the Celtic League (or RaboDirect Pro12, to satisfy the sponsors), Leinster have won three Heineken Cups in five years, and just four years ago the national side achieved their country's second-ever Grand Slam.

But it was in Wales that rugby had the farthest to fall, and fall it did; without the financial, geographical and club structure advantages of England or the virgin potential of Ireland, Welsh fortunes have been topsy-turvy. Initially five regions were set up, but the Celtic Warriors folded after just a few seasons, leaving only four, covering the four south coast cities of Llanelli (Scarlets), Swansea (Ospreys), Newport (Dragons) and Cardiff. Unfortunately, these cities are not huge and are all very close to one another, giving them a small catchment area and very little sense of regional rivalry, since they are all, to all appearances, part of the same broader region. Their low population means the clubs struggle to support themselves from the city population alone, but without any sense of historic or community identity they find it even harder to build a dedicated fan base; and with professional rugby living through its first depression as player wages continue to rise, these finances are being stretched ever thinner.

Not only that, but all the old clubs, whilst they still exist, are losing out on the deal too. Whilst the prestige and heritage are still there, with the WRU's and the rugby world's collective focus on the regional teams' top-level performance, nobody cares about the clubs currently tussling it out in the Principality Premiership, and many of these communities have lost their connection with clubs that once very much belonged to them. This loss of passion for the game at a local level may be partly inspired by the success of football clubs such as Swansea, currently enjoying an impressive degree of Premier League success. Many of these local clubs have also overspent in pursuit of success in the professional era, and with dwindling crowds this has come back to bite them; some prestigious clubs have gone into administration and tumbled down the leagues, tarnishing a reputation and dignity that is, for some, the best thing they have left. Even the Welsh national team, so often a source of pride no matter what befalls the club game, has suffered over the last year, only recently breaking an eight-match losing streak that drew stark attention to the Welsh game's ailing health.

The WRU can't really win in this situation; it's too invested in the regional model to scrap it without massive financial losses, and to try and invest in the club game would stretch the regions' wallets even further than they already are. And yet the regional model isn't working brilliantly either, failing to regularly produce either the top-quality games that such a proud rugby nation deserves or sufficient money to support the game. Wales' economic situation, in terms of population and overall wealth, is simply not suited to the excesses of professional sport, and the game is suffering as a result. And there's just about nothing the WRU can do about it, except to keep on pushing and hoping that their regions will gather loyalty, prestige and (most importantly) cash in due time. Maybe the introduction of an IRB-enforced universal salary cap, an idea I have long supported, would help the Welsh, but it's not a high-priority idea within the corridors of power. Let us just hope the situation somehow manages to resolve itself.

Plato’s Cave

Everyone's heard of Plato, to some extent anyway; 'Greek bloke, lived quite a while ago, had a beard' is probably the limit of what could be considered universal knowledge. This he most certainly was, but what made him famous was his work, for Plato was taught by Socrates and was one of the finest philosophers and thinkers to grace human history. His greatest work was 'The Republic', a ten-book piece exploring the nature of justice and government through a series of imagined conversations, hypothetical situations, metaphors and allegories. One of these allegories has become especially linked to Plato's name, which is somewhat surprising given how little the actual allegory is known to the world in general, so I thought I might explore it today: the allegory of the cave.

Plato believed in a separate level of reality, more fundamental than the physical world we encounter and interact with using our body and senses, which he called The Forms. To summarise briefly, a Form is the philosophical essence of an object; in the real world, a shelf is three bits of wood and some nails all joined together, but the Form of this is, for example, the ability to store some books within easy reach. Without the essence of shelf-ness, the shelf is literally nothing more than some wood, and ceases, on a fundamental level, to be a shelf at all. Similarly, when we turn a piece of plastic into a toy, we have fundamentally changed the Form of that plastic, even though the material is exactly the same.

Plato based most of his philosophical work around his Theory of Forms, and took the concept to great extremes; to him, the sole objective scale against which to measure intelligence was one’s ability to grasp the concept of the Form of something, and he also held that understanding the Form of a situation was the key to its correct management. However, he found his opinions on Forms hard to communicate to many people (and it can’t have helped that he was born to a rich family, where he was given plenty of opportunity to be intelligent, whilst many of the poor were uneducated), and some considered him to be talking rubbish, and so he came up with the allegory of the cave to explain what he was on about.

Imagine a large group of prisoners, chained to the wall of a cave for some unspecified reason. They are fixed in position, unable to move at all, and their necks are also fixed in position so they cannot look around. Worst of all, however, they have absolutely no memory of the world or how anything in it works; in many ways, their minds are like that of a newborn child trying to grasp the concept of the world around him. Everything they are to know must be learnt from experience and experimentation. But in front of them, they can see nothing but bare rock.

However, there are a few features of this cave that make it interesting. It is very deep and comprises multiple levels, with the prisoners at the bottom. On the level above the prisoners, and directly behind them, is an enormous fire, stoked and fed day and night (although being at the bottom of a cave, the prisoners don't have any concept of day and night), brightly illuminating the wall that the prisoners see. Also on the level above, but in front of the fire, is a walkway, along which people walk with their children, animals and whatever items they happen to be carrying. As they cross in front of the fire, their shadows are cast onto the wall the prisoners can see, and the sounds they make echo down to the prisoners too. Over time (and we're presuming years here) the prisoners get used to the shadows they see on the wall in front of them; they learn to recognise the minute details of the shadows, to differentiate and identify them. They learn to call one figure a man, another a woman, and call others cat, dog, box, pot or whatever. They learn that sometimes it gets cold, and then hot again some time later, before reverting back to cold (thanks to the seasons). And then, they begin to make connections between the echoes they hear and the shadows. They learn that man shadows and woman shadows talk differently from one another and from dog shadows, and that basket shadows make hardly any noise.

Now remember, we’re presuming here that the prisoners have no memory/knowledge of the ‘real world’, so the shadows become, to them, a reality. They think it is the shadows of a dog that make the barking sound, and that when the shadow of a clay pot is dropped and breaks, then it is the shadow that has broken. Winter and summer are not caused by anything, they merely happen. What is to us merely an image of reality becomes their reality.

Now, Plato has us imagine we take one of our prisoners away; free him, show him the real world. As he says, if we suppose “that the man was compelled to look at the fire: wouldn’t he be struck blind and try to turn his gaze back toward the shadows, as toward what he can see clearly and hold to be real?” Wouldn’t he be simultaneously amazed and terrified by the world he found around him, to see a fully-fledged person causing the shadow he had once thought of as a fundamental reality? Perhaps he would be totally unable to even see, much less comprehend, this strange, horrifying new world, unable to recognise it as real.

However, humans are nothing if not adaptable creatures, and after some time ‘up top’ our freed prisoner would surely grow accustomed to his surroundings. He would see a person, rather than their shadow, think of putting something in a box, rather than seeing a black square on a wall, and would eventually feel confident enough to venture out of the cave, look at and comprehend the sun, and eventually even recognise it as “source of the seasons and the years, and is the steward of all things in the visible place, and is in a certain way the cause of all those things he and his companions had been seeing”. (Plato often used the sun as a metaphor for enlightenment or illumination from knowledge, so here it represents the prisoner’s final understanding of the nature of reality).

Now, our prisoner could be said to be educated in the ways of the world, and after a time he would surely think back to those long days he spent chained to that wall. He would think of his fellow prisoners, how piteous their lives and their recognition of reality were when compared with his own, and how much he could teach them to aid their understanding and make them happier. "And wouldn't he disdain whatever honours, praises, and prizes were awarded there to the ones who guessed best which shadows followed which?" So, Plato has our man return to his cave, to his old spot, and try to teach his fellow prisoners what reality really is.

And it is here that Plato's analogy gets really interesting; for, rather than accepting this knowledge, the fellow prisoners would be far more likely to reject it. What are these colour things? What do you mean, stuff goes 'inside' other things? There are only two dimensions. What is this big fiery ball in this 'sky' thing? And, after all, why should they listen to him; after so long away, he's going to be pretty bad at the whole 'guessing what each shadow is' business, so they would probably think him stupid; insane, even, going on about all these concepts that are, to the prisoners, quite obviously not real. He would be unable to educate them without showing them what he means, because he can't express his thoughts in terms of the shadows they see in front of them. If anything, his presence would only scare them, convincing them that this strange 'other world' he talks about is but a feat of madness causing one's eyes to become corrupted, and scaring them away from attempting to access anything beyond their limited view of 'shadow-reality'. As Plato says, "if they were somehow able to get their hands on and kill the man who attempts to release and lead them up, wouldn't they kill him?"

To Plato, the world of his Forms was akin to the real world; true, enlightened, the root cause of the physical reality we see and encounter. And the real, material world; that was the shadows, mere imprints of The Forms that we experience as physical phenomena. We as people have the ability, unlike the prisoners, to elevate ourselves beyond the physical world and try to understand the philosophical world, the level of reality where we can comprehend what causes things, what things mean, what their consequences are; where we can explore with an analytical mind and understand our world better on a fundamental level. Or, we can choose not to, and stay looking at shadows and dismissing those willing to think higher.