One Year On

A year is a long time.

On the 16th of December last year, I was on Facebook. Nothing unusual about this (I spent and indeed, to a slightly lesser extent, still spend rather too much time with that little blue f in the top corner of my screen), especially given that it was the run-up to Christmas and I was bored, and neither was the precise content of the bit of Facebook I was looking at- an argument. Such things are common in the weird world of social networking, although they surely shouldn't be, and this was just another such time. Three or four people were posting long, eloquent, semi-researched and furiously defended messages over some point of ethics, politics or internet piracy, I know not which (it was probably one of those anyway, since that's what most of them seem to be about among my friends list). Unfortunately, one of those people was me, and I was losing. Well, I say losing; I don't think anybody could be said to be winning, but I was getting angry and upset all the same, made worse by the realisation that what I was doing was a COMPLETE WASTE OF TIME. I am not in any position whereby my Views are going to have a massive impact on the lives of everyone else, nobody wants to hear what they are, and there was no way in hell that I was going to convince anyone that my opinion was more 'right' than their strongly-held conviction- all I and my fellow arguees were achieving was getting very, very angry at one another, actively making us all more miserable. We could pretend that we were debating an important issue, but in reality we were just another group of people screaming at one another via the interwebs.

A little under a week later, the night after the winter solstice (22nd of December, which you should notice was exactly 366 days ago), I was again to be found watching an argument unfold on Facebook. Thankfully this time I was not participating, merely looking on with horror as another group of four or five people made their evening miserable by pretending they could convince others that they were ‘wrong’. The provocativeness of the original post, spouting one set of Views as gospel truth over the web, the self-righteousness of the responses and the steadily increasing vitriol of the resulting argument, all struck me as a terrible waste of some wonderful brains. Those participating I knew to be good people, smart people, capable of using their brains for, if not betterment of the world around them, then perhaps a degree of self-betterment or at the very least something that was not making the world a more unhappy place. The moment was not a happy one.

However, one of the benefits of not competing in such an argument is that I didn't have to be reminded of it or spend much time watching it unfold, so I turned back to my news feed and began scrolling down. As I did so, I came to another friend, putting a link up to his blog. This was a recent experiment for him, only a few posts old at the time, and he self-publicised it religiously every time a post went up. He has since discontinued his blogging adventures, to my disappointment, but they made fun reading whilst they lasted; short (mostly less than 300 words) and covering a wide range of random topics. He wasn't afraid to just be himself online, and wasn't concerned about being definitively right; if he offered an opinion, it was just something he thought, no more & no less, and there was no sense that it was ever combative. Certainly being combative was never the point of any post he made; each was just something he'd encountered in the real world or online that he felt would be relatively cool and interesting to comment on. His blog's description billed his posts as 'musings', and that was the right word for them; harmless, fun and nice. They made the internet and world in general, in some tiny little way, a nicer place to explore.

So, I read through his post. I smirked a little, smiled and closed the tab, returning once more to Facebook and the other distractions & delights the net had to offer. After about an hour or so, my thoughts once again turned to the argument, and I rashly flicked over to look at how it was progressing. It had got to over 100 comments and, as these things do, was gradually wandering off-topic to a more fundamental, but no less depressing, point of disagreement. I was once again filled with a sense that these people were wasting their lives, but this time my thoughts were both more decisive and introspective. I thought about myself; listless, counting down the last few empty days before Christmas, looking at the occasional video or blog, not doing much with myself. My schedule was relatively free, I had a lot of spare time, but I was wasting it. I thought of all the weird and wonderful thoughts that flew across my brain, all the ideas that would spring and fountain of their own accord, all of the things that I thought were interesting, amazing or just downright wonderful about our little mental, spinning ball of rock and water and its strange, pink, fleshy inhabitants that I never got to share. Worse, I never got to put them down anywhere, so after time all these thoughts would die in some forgotten corner of my brain, and the potential they had to remind me of themselves was lost. Once again, I was struck by a sense of waste, but also of resolve; I could try to remedy this situation. So, I opened up WordPress, I filled out a few boxes, and I had my own little blog. My fingers hovered over the keyboard, before falling to the keys. I began to write a little introduction to myself.

Today, the role of my little corner of the interwebs has changed somewhat. Once, I would post poetry, lists, depressed trains of thought and last year's 'round robin letter of Planet Earth', which I still regard as one of the best concepts I ever put onto the net (although I don't think I'll do one this year- not as much major stuff has hit the news). Somewhere along the line, I realised that essays were more my kind of thing, so I've (mainly) stuck to them since; I enjoy the occasional foray into something else, but I find that I can't produce as much regular stuff this way as otherwise. In any case, the essays have been good for me; I can type, research and get work done so much faster now, and it has paid dividends to my work rate and analytical ability in other fields. I have also found that in my efforts to add evidence to my comments, I end up doing a surprising amount of research that turns an exercise in writing down what I know into one of increasing the kind of stuff I know, learning all sorts of new and random stuff to pack into my brain. I have also violated my own rules about giving my Views on a couple of occasions (although I would hope that I haven't been too obnoxious about it when I have), but broadly speaking the role of my blog has stayed true to those goals stated in my very first post; to be a place free from rants, to be somewhere to have a bit of a laugh and to be somewhere to rescue unwary travellers dredging the backwaters of the internet who might like what they've stumbled upon. But, really, this little blog is like a diary for me; a place that I don't publicise on my Facebook feed, that I link to only rarely, and that I keep going because I find it comforting. It's a place where there's nobody to judge me, a place to house my mind and extend my memory. It can be stressful organising my posting time and coming up with ideas, but whilst blogging, the rest of the world can wait for a bit. It's a calming place, a nice place, and over the last year it has changed me.

A year is a long time.


The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing based on the fact that I had done a lot of related research when studying the installation of Linux. I feel that I should now come clean and point out that between the time of that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times already, which has served to teach me even more about exactly how it works, and how it differs from its more mainstream competitors. So, since I don't have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and which Torvalds based his design upon (note- whilst Torvalds' first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp 'ih' sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use and distribute software associated with it so long as they stayed in keeping with the license's terms (ie anything built from that software must be passed on under the same terms, with its source code made available). The software compiled as part of the GNU project was numerous (including a still widely-used compiler) and did eventually come to fruition as an operating system, but it never caught on and the project was, as regards achieving its final aims, a failure (although the GNU General Public License remains the most-used software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, writing his code on another Unix clone, MINIX, and basing his design on MINIX's structure. Initially, he hadn't been intending to write a complete operating system at all, but rather a type of display interface called a terminal emulator- a program that mimics the behaviour of the old-fashioned text terminals once used to talk to big central computers (I don't really get all the subtleties either- it's hard to find information a newbie like me can make good sense of). Unusually, though, Torvalds wrote his emulator to run directly on the computer's hardware, independent of any operating system, meaning it had to take on many of the jobs an operating system would normally handle. As such, the two are somewhat related and it wasn't long before Torvalds 'realised' he had written a kernel for an operating system and, since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on a kind of early internet forum called Usenet, saying that he was developing an operating system that was "starting to get ready", and asking for feedback concerning where MINIX was good and where it was lacking, "as my OS resembles it somewhat". He also, interestingly, said that his OS "probably never will support anything other than AT-harddisks". How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 being released last year) and, despite some teething troubles with those who have considered it old-fashioned, those who thought MINIX code was stolen (rather than merely borrowed from), and Microsoft (who have since turned tail and are now among the larger corporate contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman's open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the system's efficiency and minimise its vulnerability to viruses- the latter being one of the key features that marks Linux (and OS X) out over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command a quite high percentage of your CPU capacity just to keep itself running, not counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep it running faster. The kernel's open source roots mean it is easy to modify if you have the technical know-how, and the community of followers surrounding it means that the solution to any problem you have with a standard distribution is usually only a few clicks away. Disadvantages include a certain lack of user-friendliness for the uninitiated or less computer-literate user, since a lot of programs require an instruction typed at the command line; far fewer programs, especially commercial, professional ones, than Windows; an inability to process media as well as OS X (which is the main reason Apple computers appear to exist); and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many 'computer people' consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or 'distros' as they're known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager, and desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components), or the package manager (which covers program installation, removal and updating), tend to be fairly ubiquitous (GNOME and KDE are common GUIs, and Synaptic a common front-end for package management), but different people like their operating system to run in slightly different ways. Therefore, variations on these other components are bundled together with the kernel to form a distro, a complete package that will run as an operating system in exactly the same fashion as you would encounter with Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu's older brother), Red Hat, Mandriva and CrunchBang- some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

The Science of Iron

I have mentioned before that I am something of a casual gymgoer- it's only a relatively recent hobby, and only in the last couple of months have I given any serious thought and research to my regime (in which time I have also come to realise that some of my advice in previous posts was either lacking in detail or partially wrong- sorry, it's still basically useful). However, whilst the internet is, as could be reasonably expected, inundated with advice about training programs, tips on technique & exercises to work different muscle groups (often wildly disagreeing with one another), there is very little available information concerning the basic science behind building muscle- it's just not something the average gymgoer knows. Since I am fond of a little research now and then, I thought I might attempt an explanation of some of the basic biology involved.

DISCLAIMER: I am not a biologist, and am getting this information via the internet and a bit of ad libbing, so don’t take this as anything more than a basic guideline

Everything in your body is made up of tiny, individual cells, each a small sac consisting of a complex (and surprisingly ‘intelligent’) membrane, a nucleus to act as its ‘brain’ (although no-one is entirely sure exactly how they work) and a lot of watery, chemical-y stuff called cytoplasm squelching about and reacting with things. It follows from this that to increase the size of an organ or tissue requires these cells to do one of two things; increase in number (hyperplasia) or in size (hypertrophy). The former case is mainly associated with growths such as neoplasia (tumours), and has only been shown to have an impact on muscles in response to the injection of growth hormones, so when we’re talking about strength, fitness and muscle building we’re really interested in going for hypertrophy.

Hypertrophy itself is still a fairly broad term biologically, and only two aspects of it are interesting from an exercise point of view; muscular and ventricular hypertrophy. As the respective names suggest, the former case relates to the size of cells in skeletal muscle increasing, whilst the latter is concerned with the increase in size & strength of the muscles making up the walls of the heart (the largest chambers of which are called the ventricles). Both are part of the body’s long-term response to exercise, and for both the basic principle is the same- but before I get onto that, a quick overview of exactly how muscles work may be in order.

A muscle cell (or muscle fibre) is one of the largest in the body, vaguely tubular in shape and consisting in part of many smaller structures known as myofibrils (or muscle fibrils). Muscle cells are also unusual in that they contain multiple cell nuclei, as a response to their size & complex function, and instead of cytoplasm contain another liquid called sarcoplasm (more densely packed with glycogen fuel and proteins to bind oxygen, and thus enabling the muscles to respire more quickly & efficiently in response to sudden & severe demand). These myofibrils are divided along their length into repeating units known as sarcomeres, each containing overlapping protein strands called myofilaments (thick filaments made of a protein called myosin, and thinner ones made largely of another called actin). This structure is only present in skeletal, rather than smooth, muscle cells (giving the latter a more regular, smoothly connected appearance when viewed under the microscope, hence the name) and is responsible for the increased strength available to skeletal muscles. When a muscle fibril receives an electrical impulse from the brain or spinal cord, these filaments slide past one another, certain areas or 'bands' making up the sarcomeres shrink in size, and the muscle as a whole contracts. When the impulse is removed, the muscle relaxes; but it cannot extend itself, so another muscle working with it in what is known as an antagonistic pair will have to pull back on it to return it to its original position.

Now, when that process is repeated a lot in a small time frame, or when a large load is placed on the muscle fibre, the fibrils can become damaged. If they are actually torn then a pulled muscle results, but if the damage is (relatively) minor then the body can repair it by shipping in more amino acids (the building blocks of the proteins that make up our bodies) and fuel (glycogen and, most importantly, oxygen). However, to safeguard against any future event of the same kind causing damage, the body overcompensates in its repairs, rebuilding the protein structures a little more strongly and packing a little more fuel than was lost into the sarcoplasm. This is the basic principle of muscular hypertrophy; the body's repair systems overcompensating for minor damage.

There are yet more subdivisions to consider, for there are two main types of muscular hypertrophy. The first is myofibrillar hypertrophy, concerning the rebuilding of the myofibrils with more proteins so they are stronger and able to pull against larger loads. This enables the muscle to lift larger weights & makes one stronger, and is the predominant result of doing a few repetitions at a high load, since this causes the most damage to the myofibrils themselves. The other type is sarcoplasmic hypertrophy, concerning the packing of more sarcoplasm into the muscle cell to better supply the muscle with fuel & oxygen. This helps the muscle deal better with exercise and builds a greater degree of muscular endurance, and also increases the size of the muscle, as the increased liquid in it causes it to swell in volume. It is best achieved by doing more repetitions on a lower load, since this longer-term exercise puts more strain on the ability of the sarcoplasm to supply oxygen. It is also advisable to do fewer sets (but do them properly) of this type of training since it is more tiring; muscles get tired and hurt due to the buildup of lactic acid in them caused by an insufficient supply of oxygen requiring them to respire anaerobically. This is why more training on a lower weight feels like harder work, but is actually going to be less beneficial if you are aiming to build muscular strength.

Ventricular (or cardiac) hypertrophy combines both of these effects in a response to the increased load placed on the muscles in the heart from regular exercise. It causes the walls of the ventricles to thicken as a result of sarcoplasmic hypertrophy, and also makes them stronger so that the heart has to beat less often (but more powerfully) to supply blood to the body. In elite athletes this goes further still; the heart responds to exercise not so much by beating more frequently as by beating more strongly, swelling more as it pumps to send more blood around the body with each beat. Athletic heart syndrome, where the slowing of the pulse and swelling of heart size are especially magnified, can even be mistaken for severe heart disease by an ill-informed doctor.

So… yeah, that’s how muscle builds (I apologise, by the way, for my heinous overuse of the word ‘since’ in the above explanation). I should point out quickly that this is not a fast process; each successive rebuilding of the muscle only increases the strength of that muscle by a small amount, even for serious weight training, and the body’s natural tendency to let a muscle degrade over time if it is not well-used means that hard work must constantly be put in to maintain the effect of increased muscular size, strength and endurance. But then again, I suppose that’s partly what we like about the gym; the knowledge that we have earned our strength, and that our willingness to put in the hard work is what is setting us apart from those sitting on the sofa watching TV. If that doesn’t sound too massively arrogant.

Drunken Science

In my last post, I talked about the societal impact of alcohol and its place in our everyday culture; today, however, my inner nerd has taken it upon himself to get stuck into the real meat of the question of alcohol, the chemistry and biology of it all, and how all the science fits together.

To a scientist, the word 'alcohol' does not refer to a specific substance at all, but rather to a family of chemical compounds containing an oxygen and hydrogen atom bonded to one another (known as an OH group) attached to a chain of carbon atoms. Different members of the family (or 'homologous series', to give it its proper name) have different numbers of carbon atoms and have slightly different physical properties (such as melting point), and they also react chemically to form slightly different compounds. The stuff we drink is the one with two carbon atoms in its chain, and is technically known as ethanol.
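For those who like to see these things written down, the simple alcohols all share one general formula (this is standard textbook chemistry, nothing exotic):

\[ \mathrm{C}_n\mathrm{H}_{2n+1}\mathrm{OH} \qquad\quad n=1:\ \mathrm{CH_3OH}\ \text{(methanol)}, \qquad n=2:\ \mathrm{C_2H_5OH}\ \text{(ethanol)} \]

Methanol, the one-carbon member of the family, will turn up again later, and not in a good way.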

There are a few things about ethanol that make it special stuff to us humans, and all of them refer to chemical reactions and biological interactions. The first is the formation of it; there are many different types of sugar found in nature (fructose & sucrose are two common examples; the '-ose' ending is what denotes them as sugars), but one of the most common is glucose, with six carbon atoms. This is the substance our body converts starch and other sugars into in order to use for energy or store as glycogen. As such, many biological systems are primed to convert other sugars into glucose, and it just so happens that when glucose is broken down in the presence of the right enzymes, it forms carbon dioxide and an alcohol; ethanol, to be precise, in a process known as fermentation (the first stage of which, glycolysis, is the same one our own cells use to begin extracting energy from glucose).
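The overall equation is the classic textbook one; each glucose molecule ends up as two molecules of ethanol and two of carbon dioxide:

\[ \mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{C_2H_5OH} \;+\; 2\,\mathrm{CO_2} \]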

Yeast performs this process in order to respire (ie produce energy) anaerobically (in the absence of oxygen), leading to the two most common situations in which this reaction occurs. The first we know as brewing, in which an anaerobic atmosphere is deliberately produced to make alcohol; the other occurs when baking bread. The yeast we put in the dough breaks down the sugar (ie glucose) in it to produce carbon dioxide, which is what makes the bread rise as it fills with gas, whilst the ethanol mostly boils off in the heat of the baking process. For industrial purposes, ethanol is made by hydrating (reacting with water) a by-product of the oil industry called ethene, but the product isn't generally something you'd want to drink.
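Again for the curious, the industrial route is a single addition reaction (done, if my memory of school chemistry serves, by passing ethene and steam over an acid catalyst at high temperature and pressure):

\[ \mathrm{C_2H_4} \;+\; \mathrm{H_2O} \;\longrightarrow\; \mathrm{C_2H_5OH} \]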

But anyway, back to the booze itself, and this time what happens upon its entry into the body. Exactly why alcohol acts as a depressant and intoxicant (if that's a proper word) is down to a very complex interaction with various parts and receptors of the brain that I am not nearly intelligent enough to understand, let alone explain. However, what I can explain is what happens when the body gets round to breaking the alcohol down and getting rid of the stuff. This takes place in the liver, an amazing organ that performs hundreds of jobs within the body and contains a vast repertoire of enzymes. One of these is known as alcohol dehydrogenase, which has the task of oxidising the alcohol (not a simple task at body temperature, and one the body could not manage without enzymes) into something the body can get rid of. However, ethanol is what is known as a primary alcohol (meaning the OH group is on the end of the carbon chain), and this means it oxidises in two stages, only the first of which can be done using alcohol dehydrogenase. This process converts the alcohol into an aldehyde (with an oxygen chemically double-bonded to the carbon where the OH group was), which in the case of ethanol is called acetaldehyde (or ethanal). This molecule cannot be broken down straight away, and instead gets itself lodged in the body's tissues in such a way (thanks to its shape) as to act as a mild toxin, activate our immune system and make us feel generally lousy. This is also known as having a hangover, and only ends when the body is able to complete the second stage of the oxidation process and convert the acetaldehyde into acetic acid, which the body can get rid of relatively easily. Acetic acid is commonly known as the active ingredient in vinegar, which is why alcoholics smell so bad and are often said to be 'pickled'.
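Written as a pair of (much simplified) equations, with [O] standing in for the oxidation each enzyme performs, the two stages look like this:

\[ \mathrm{CH_3CH_2OH} \;\xrightarrow{[\mathrm{O}]}\; \mathrm{CH_3CHO}\ \text{(acetaldehyde)} \;\xrightarrow{[\mathrm{O}]}\; \mathrm{CH_3COOH}\ \text{(acetic acid)} \]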

This process occurs in the same way when other alcohols enter the body, but ethanol is unique in how harmless (relatively speaking) its aldehyde is. Methanol, for example, can also be oxidised by alcohol dehydrogenase, but the aldehyde it produces (officially called methanal) is commonly known as formaldehyde; a highly toxic substance used in preservation work and as a disinfectant that will quickly poison the body. It is for this reason that methanol is present in the fuel commonly known as 'meths'- ethanol actually produces more energy per gram and makes up 90% of the fuel by volume, but since meths is far cheaper than most alcoholic drinks, the toxic methanol is added to stop it being drunk by severely desperate alcoholics. Not that it stops many of them; methanol poisoning is a leading cause of death among homeless people.

Homeless people were also responsible for a major discovery in the field of alcohol research, concerning the causes of alcoholism. For many years it was thought that alcoholics were addicts purely mentally rather than biologically, and had just 'let it get to them', but some years ago a young student (I believe she was Canadian, but certainty of that fact and her name both escape me) was looking for some fresh cadavers for her PhD research. She went to the police and asked if she could use the bodies of the various dead homeless people who they found on their morning beats, and when she started dissecting them she noticed signs of a compound in them that was known to be linked to heroin addiction. She mentioned to a friend that all these people appeared to be on heroin, but her friend said that these people barely had enough to buy drink, let alone something as expensive as heroin. This young doctor-to-be realised she might be onto something, and changed the focus of her research onto studying how alcohol was broken down by different bodies, and discovered something quite astonishing. Inside serious alcoholics, ethanol was being broken down into this substance previously only linked to heroin addiction, leading her to believe that for some unlucky people, the behaviour of their bodies made alcohol as addictive to them as heroin was to others. Whilst this research has by no means settled the issue, it did demonstrate two important facts; firstly, that whilst alcoholism certainly has some links to mental issues, it is also fundamentally biological and genetic by nature, and cannot simply be put down to the victim's state of mind. Secondly, it 'sciencified' (my apologies to grammar nazis everywhere for making that word up) a fact already known by many reformed drinkers; that once an alcoholic stops drinking, they can never go back. Not even one drink. There can be no 'just having one', or drinking socially with friends, because if one more drink hits their body, deprived for so long, there's a very good chance it could kill them.

Still, that's not a reason to get totally down about alcohol, for two very good reasons. The first of these comes from some (admittedly rather spurious) research suggesting that 'addictive personalities', including alcoholics, are far more likely to do well in life, have good jobs and overall succeed; alcoholics are, by nature, present at the top as well as the bottom of our society. The other concerns the one bit of science I haven't tried to explain here- your body is remarkably good at dealing with alcohol, and we all know it can make us feel better, so if only for your mental health a little drink now and then isn't an altogether bad thing. And anyway, it makes for some killer YouTube videos…

Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest on the planet, worth over 600 billion dollars in sales every year and acting as the force behind the cutting-edge research that continues to push medicine onwards as a field- and while we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. In fact the pharmaceutical industry is in quite an unusual position in this regard, forming the only part of the medical public service, and indeed of any major public service, that is privatised the world over.

The reason for this is quite simply one of practicality; the sheer amount of startup capital required to develop even one new drug, let alone form a public service for this R&D, runs to hundreds of millions of dollars, something that no government would be willing to set aside for so little immediate gain. All modern companies in the 'big pharma' bracket were formed many decades ago on the basis of a surprise cheap discovery or suchlike, and are now so big that they are the only people capable of fronting such a big initial investment. There are a few organisations (the National Institutes of Health, the Royal Society, universities) which conduct such research away from the private sector, but they are few in number and are also very old institutions.

Many people, in a slightly different field, have voiced the opinion that people whose primary concern is profit are those we should least be putting in charge of our healthcare and wellbeing (although I'm not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. However, that is not to say that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the 'minor' drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of 'solving' life-threatening illnesses, a potentially futile task anyway, this drug would never have been developed and I would be left even more at the mercy of my fragile digestive system. In the western world, the profit motive makes a lot of sense when the aim is to make life just that bit more comfortable. Oh, and they also make the drugs that, y'know, save your life every time you're in hospital.

Now, normally at this point in any ‘balanced argument/opinion piece’ thing on this blog, I try to come up with another point to try and keep each side of the argument at an about equal 500 words. However, this time I’m going to break that rule, and jump straight into the reverse argument straight away. Why? Because I can genuinely think of no more good stuff to say about big pharma.

If I may just digress a little; in the UK & USA (I think, anyway) a drug patent lasts 20 years from filing, which in practice leaves a company roughly 10 years of protected sales by the time its product finally reaches the market- the reasoning being that these little capsules can be very valuable things and it wouldn't do to let people hang onto the sole rights to make them for ages. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to prevent it from having to import from Afghanistan or wherever), so these costs are kept relatively low by market forces. This therefore means that during their roughly 10-year grace period, drugs companies will do absolutely everything they can to extort cash out of their product; when the antihistamine loratadine (another drug I use relatively regularly) was passing through the last two years of its patent, its market price was quadrupled by the company making it; they had been trying to get the market hooked on using it before jacking up the prices in order to wring out as much cash as possible. This behaviour is not untypical for a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the snuffles.

So far, so much normal corporate behaviour. Reaching this point, we must now turn to consider some practices of the big pharma industry that would make Rupert Murdoch think twice. Drugs companies, for example, have a reputation for setting up price-fixing cartels, many of which have been worth several hundred million dollars. One such cartel, featuring what were technically food-supplement businesses (subsidiaries of pharmaceutical firms), was hit with what was then a world record for the largest fines levied in criminal history. All this in an industry where the cost of physically producing the drugs themselves rarely exceeds a couple of pence per capsule, hundreds of times less than their asking price.

"Oh, but they need to make heavy profits because of the cost of R&D to make all their new drugs". Good point, well made and entirely true, and it would also be valid if the numbers behind it stacked up. In the USA, the National Institutes of Health last year had a total budget of $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. This might seem at first glance like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in sales in 2006, and accounting for inflation (and the fact that pharmaceutical profits tend to stay high despite the current economic situation affecting other industries) we can approximate that only around 10% of company turnover is, on average, spent on R&D. Even accounting for manufacturing costs, salaries and such, an unusually large share of that turnover ends up as profit, making the pharmaceutical industry one of the most profitable on the planet.
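To put rough numbers on that (using nothing but the figures quoted above, so treat it as the back-of-an-envelope sum it is):

\[ \frac{\$32\ \text{billion (R\&D spend)}}{\$289\ \text{billion (turnover)}} \approx 0.11 \]

which is where the 'around 10%' comes from.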

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there’s a lot of things that only a few nerds understand properly, an awful lot of other stuff in our life to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of a lot of us does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the sense of 'understand' that applies in computer-based situations. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn't be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with rather few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some research into the hows and whys of its installation, along with which came quite a lot of info as to the hows and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

'Computer' was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data processing machine employed to perform a range of jobs from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of the age, but the output was rather less dependable. The human brain is not built for infallibility and, not infrequently, would make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst still reliant on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn't much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (ie a turn of 4 spaces on the 'units' cog whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances as well. However, it could also subtract, multiply and divide (with some difficulty), and moreover proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to one of simply entering the wrong number.
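To make that carrying idea concrete, here's a tiny sketch of what each cog is mechanically doing- purely illustrative, and in Python, a luxury Pascal had to do without:

```python
def cog_add(digits, position, turns):
    # digits[0] is the units wheel, digits[1] the tens wheel, and so on.
    # Turning any wheel past 9 nudges the next wheel along by one: the carry.
    digits = list(digits)
    digits[position] += turns
    i = position
    while i < len(digits):
        carry, digits[i] = divmod(digits[i], 10)
        if carry:
            if i + 1 == len(digits):
                digits.append(0)   # the machine would need another wheel here
            digits[i + 1] += carry
        i += 1
    return digits

# The example from the text: a seven already counted, then the units cog turned four spaces.
print(cog_add([7], 0, 4))  # [1, 1] -> 1 on the units wheel, 1 on the tens wheel: eleven
```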

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several, of a range of designs, were built during the 18th century as show pieces, but by the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, were sold by 1890, but by then the field had been given an unexpected shuffling.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials and the method of finite differences that goes with them, a fiddly bit of maths that I won't even pretend to fully understand, but that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
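For anyone curious about the maths I just waved my hands at, the core trick is this: tabulate a polynomial, keep taking differences of successive values, and the numbers eventually become constant- at which point the whole table can be rebuilt using nothing but addition, which is exactly the kind of thing cogs are good at. A little sketch of the idea (illustrative only; this is the principle, not anything Babbage wrote down in this form):

```python
def difference_table(values):
    # Build successive columns of differences from the first few values of a sequence.
    table = [list(values)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

def extend(values, extra):
    # Extend a polynomial sequence by 'extra' terms using only addition, difference-engine style.
    edges = [col[-1] for col in difference_table(values)]  # last entry of each column
    out = list(values)
    for _ in range(extra):
        # The constant bottom difference propagates back up into a brand new top value.
        for i in range(len(edges) - 2, -1, -1):
            edges[i] += edges[i + 1]
        out.append(edges[0])
    return out

# f(x) = x^2 + x + 41, reportedly a favourite demonstration polynomial of Babbage's
first_values = [x * x + x + 41 for x in range(4)]   # [41, 43, 47, 53]
print(extend(first_values, 4))                      # [41, 43, 47, 53, 61, 71, 83, 97]
```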

Babbage's machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government, chiefly to produce the accurate mathematical tables that navigation and the military relied upon, but since Babbage was often brash, once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him, and prized academia above fiscal matters & practicality, the relationship soured. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine, pulled the plug, and the difference engine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program (which could be a lot more complicated than just adding or subtracting), and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had 'control flow' systems incorporated to ensure the steps of a program were performed in the correct order. We may never know, since it has never been built, whether Babbage's analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 digits.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

Way more punctuation than is probably strictly necessary*

I am not a ‘gamer’. Well, certainly not one by the popular, semi-obsessive, definition- I like computer games, sure, and I spend a reasonable amount of my time playing them, but they’re not a predominant weekend pastime, and they are far from being a focal point of my existence.

However, part of the reason I am wary to get into games is because I have an annoying habit of never wanting to let an argument die, and given the number of arguments I see online and elsewhere on the subject of gaming, it's probably best for all concerned if I give in to my better judgement and give myself no reason to join in (I could spend an entire post talking about arguing online, but that's for another time). Gaming is a topic that causes far more argument and controversy than it appears to warrant, both within the gaming community (which is normal for any modern mass medium- film and TV fans argue among themselves too) and, more interestingly, between gamers and the 'rest of the world'. For such a rich and massive medium, this, frankly, seems odd. Why such argument? Why so much worry from parents and politicians? Why are gamers always thought of as somehow laughable, the stereotype being an overweight nerd cocooned in his basement at 3am, fuelled by Mountain Dew and chips? Why, basically, do people not like gamers?

I should pause at this point to say two things- firstly, that the image I portray here of the prevailing attitude towards gamers is just what I have picked up from my (actually pretty limited) interactions with the non-gaming community, and secondly, that this is probably going to have to be a two-parter. The first part will aim to lay out the complaints laid at gaming's feet by its main critics (and a few other things besides while I have the opportunity), and the second will go into my favourite question: why?

So, what exactly is it that people seem to dislike about gaming? The list is quite substantial, but can basically be broken down to (in no particular order)…

1) Modern gaming encourages violence/desensitises people to it
This is probably the biggest one, the one politicians and such make the biggest deal over, and it's not hard to see why. The hypothesis seems perfectly reasonable- modern games such as Battlefield and Call of Duty are violent (true), and the general lives of everyday people aren't (true). Thus, the only exposure gamers have to this level of violence is through these games (basically true), and since this violence doesn't hurt anyone real (true), they subconsciously think that violence isn't actually that harmful and this desensitises them to its effects (okay, here we're getting into speculation…)
There is some evidence to support this idea- watching people playing FPSs and similar can be a quite revealing experience (next time you're watching someone else play, watch them rather than the screen). Sometimes there are smiles and gentle laughs as they're playing for fun (evidence point 1- the violent acts they are performing onscreen are not really registering with them), sometimes there is a quite alarming sense of detachment from the actions they are performing on screen (evidence point 2- a sign of conscious realisation that what they're doing doesn't really matter), and sometimes people will get seriously aggressive, gritting teeth, shouting and swearing as they bite the dust once again (third, and most compelling, point of evidence- people have gone from being ambivalent about the consequences in a scenario in which, let's face it, there are no consequences, to getting genuinely aggressive and yet simultaneously compelled to play by such action sequences).
The fundamental flaws in this idea are twofold- firstly there is the simple "Well, DUH! Of course they're lackadaisical about all the violence- THEY KNOW IT'S NOT REAL, SO THEY DON'T CARE!". Plonk the average person, even a game-hater, in front of an FPS, and their prevailing emotion will not be the writhing-under-the-chair, screaming abject terror that they would most likely demonstrate if they were really suddenly transported to a gunfight in Afghanistan or somewhere. The second flaw is based more upon the fundamentals of human psychology- people and animals, at a fundamental level, respond well to action and violence. It's in our nature- in the distant past it was what prompted us to go out and hunt for food, or made us run rather than go rabbit-in-headlights when the lion appeared in the path ahead. Plus… well, even before games, guns and swords were just damn cool. Thus, you cannot complain at a person getting really into a violent game (which, by the way, has had millions poured into it to MAKE it compelling), to the point where they start to feel it is semi-real enough to make them slightly aggressive over it. With a world that is nowadays largely devoid of violence, this is about their only chance to make contact with their inner hunter, and unleash the adrenaline that entails. This is why a soldier, who gets plenty of action in his everyday life, will not relax by playing CoD after his patrol, but a suburban child will. People are not, from my point of view, getting aggressive from playing the game too much, but merely during the experience the game provides.
The case study that always gets quoted by supporters of this argument is inevitably 'The Manhunt Murder', referring to an incident in 2004 when a 14-year-old boy (Stefan Pakeerah) in Leicester was stabbed to death by a 17-year-old friend (Warren LeBlanc). While the authorities put the motive down to attempted theft, the victim's parents insisted that their son's murderer was obsessed by the game Manhunt. The game itself is undoubtedly bloody and violent, rewarding particularly savage kills, and so too was the murder- Stefan was repeatedly stabbed and beaten with a claw hammer, a method of execution the game features. The event has since been seized upon by those worried by the violence in modern gaming and has been held up repeatedly as an example of 'what can happen'.
However, the link is, according to many, a completely invalid one. The only copy of the game found at any point of the investigation was found in Pakeerah’s bedroom (his parents claim it was given to him by LeBlanc two days prior to his death), so if his murderer was ‘obsessed’ by the game, he didn’t play it for at least 48 hours previously. Perhaps more importantly however, only two people involved in the scenario blamed the game itself- Stefan’s parents. His father described the game as: “a video instruction on how to murder somebody, it just shows how you kill people and what weapons you use”. However, the police and legal authorities, at all stages of the investigation, said that LeBlanc’s aim and motive was robbery- gaming did not come into it.
This ties into the results of several research studies that have been made into the possible link between virtual and real-world violence, all of which have been unable to come to any conclusions (although this may partly be due to lack of data). My thoughts on the matter? Well, I am not learned enough in this field to comment on the in-depth psychology of it all, but I like to remember this: as of 2009 (according to Wikipedia, anyway), 55 million copies of Call of Duty had been sold, and I have yet to hear of anyone getting killed over it.

Okay onto part two… actually, 1200 words? Already? Ach, dammit, this is looking like it’s going to be a three-parter at least then. Saturday I will try and wrap up the complaints levelled at the games industry, the Six Nations series will continue on Monday, and Wednesday I’ll try and go into whys and wherefores. See you then

*Now let’s see who can get the gaming reference I’ve made in the title…