Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest on the planet, worth over 600 billion dollars in sales every year and acting as the driving force behind the cutting edge of medical science. While we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. In fact, the pharmaceutical industry is in quite an unusual position in this regard, being the only part of the medical public service, and indeed of any major public service, that is privatised the world over.

The reason for this is quite simply one of practicality; the sheer amount of startup capital required to develop even one new drug, let alone to run a public R&D service, runs into the hundreds of millions of dollars, something no government would be willing to set aside for so little immediate gain. All the modern companies in the ‘big pharma’ demographic were formed many decades ago, often on the back of a cheap surprise discovery or suchlike, and are now so big that they are the only players capable of fronting such a large initial investment. A few organisations (the National Institutes of Health, the Royal Society, universities) do conduct such research away from the private sector, but they are small in number and are also very old institutions.

Many people, in a slightly different field, have voiced the opinion that people whose primary concern is profit are the last we should be putting in charge of our healthcare and wellbeing (although I’m not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. However, that is not to say that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the ‘minor’ drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of ‘solving’ life-threatening illnesses, a potentially futile task anyway, this drug would never have been developed and I would resent my fragile digestive system even more. In the western world, the profit motive makes a lot of sense when it comes to making life just that bit more comfortable. Oh, and they also make the drugs that, y’know, save your life every time you’re in hospital.

Now, normally at this point in any ‘balanced argument/opinion piece’ thing on this blog, I try to come up with another point to keep each side of the argument at a roughly equal 500 words. This time, however, I’m going to break that rule and jump straight into the reverse argument. Why? Because I genuinely can’t think of anything else good to say about big pharma.

If I may just digress a little; in the UK and USA, a patent for a drug or medicine lasts 20 years from filing, on the basis that these little capsules can be very valuable things and it wouldn’t do to let anyone hang onto the sole rights to make them forever- and since clinical trials eat up much of that period, a new drug often enjoys only around ten years of actual market exclusivity. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to avoid having to import from Afghanistan or wherever), so these costs are kept relatively low by market forces. It also means that during those years of exclusivity, drugs companies will do absolutely everything they can to wring cash out of their product; when the antihistamine loratadine (another drug I use relatively regularly, for hay fever and other allergies) was passing through the last two years of its patent, its market price was quadrupled by the company making it, which had spent years getting the market hooked before jacking up the price to extract as much cash as possible. This behaviour is not untypical for a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the snuffles.

So far, so much normal corporate behaviour. But now we must turn to some practices of the big pharma industry that would make Rupert Murdoch think twice. Drugs companies, for example, have a reputation for setting up price-fixing cartels, many of which have been worth several hundred million dollars. One, run through what were technically food supplement businesses, subsidiaries of the pharmaceutical industry, went on to attract what was then the largest fine ever levied in criminal history- and all this despite the fact that physically producing the drugs themselves rarely costs more than a couple of pence per capsule, hundreds of times less than the asking price.

“Oh, but they need to make heavy profits to cover the cost of all the R&D behind their new drugs.” A good point, well made, and it would even be valid if the numbers stacked up. In the USA, the National Institutes of Health last year had a total budget of $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. At first glance this might look like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in 2006, and that pharmaceutical revenues tend to stay high even when the current economic situation is hurting other industries, so we can approximate that only around 10% of company turnover is, on average, spent on R&D. Even accounting for manufacturing costs, salaries and the like, a very large share of that turnover goes into profit, making the pharmaceutical industry one of the most profitable on the planet.
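For what it’s worth, the back-of-envelope arithmetic behind that figure, using only the numbers quoted above (and ignoring inflation adjustments), is easy enough to check:

```python
# Rough check of the R&D-to-turnover ratio from the figures quoted above
rd_spend = 32e9    # collective US pharma R&D spend
revenue = 289e9    # US drug industry revenue, 2006

print(f"{rd_spend / revenue:.0%}")  # prints 11%
```

Close enough to the "around 10%" claim, though the real ratio would shift a little once inflation and non-US spending are accounted for.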

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.


What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. There is nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. Over twelve and a half years later, however, the general understanding of most of us does not appear to have increased to any significant degree, and we remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given what ‘understand’ means in a computing context. Computers are a rare example of a complex system of which an expert is genuinely capable of understanding, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety is therefore an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with relatively few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some research into the hows and whys of installing it, which in turn taught me quite a lot about the workings of my computer generally. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data processing machine employed to perform a range of jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t so reliable. The human brain is not built for infallibility and, not infrequently, would make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic to a great degree in their respective fields, but true infallibility was unachievable whilst the work still relied on the human mind.

Enter Blaise Pascal, 17th century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator in 1642, aged just 19. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (i.e. a turn of four spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances too. It could also subtract, multiply and divide (with some difficulty), and moreover it proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
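For the curious, the carrying behaviour those cogs implemented can be sketched in a few lines of code (this is my own toy illustration, not a description of the actual Pascaline mechanism): advancing the ‘units’ wheel past nine tips the ‘tens’ wheel on by one, and so on up the chain.

```python
def add_on_wheels(wheels, amount):
    """wheels[0] is the units wheel, wheels[1] the tens, etc.; each holds 0-9."""
    wheels = list(wheels)
    wheels[0] += amount
    for i in range(len(wheels)):
        if wheels[i] > 9:                  # wheel turned past 9: carry over
            carry, wheels[i] = divmod(wheels[i], 10)
            if i + 1 < len(wheels):
                wheels[i + 1] += carry     # nudge the next wheel along
    return wheels

# a seven already counted, then a turn of four spaces brings up eleven
print(add_on_wheels([7, 0, 0], 4))   # [1, 1, 0] -> read back-to-front as 011
```

Note how the carry can ripple: adding one to wheels showing 99 tips every wheel in turn, which was exactly the mechanically awkward case Pascal’s sautoir mechanism had to handle.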

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several, of a range of designs, were built during the 18th century as showpieces, but the 19th-century release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to understand fully, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the initial setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
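The clever bit, as far as I understand it, is the method of finite differences: for a polynomial, the differences between successive values eventually become constant, so once the initial differences are set up on the columns, every further value in the table can be produced using nothing but addition- which is exactly what cogs are good at. A minimal sketch of the idea (my own illustration, not Babbage’s actual column layout):

```python
def difference_table(values, order):
    """Build the initial stack of finite differences from the first few values."""
    diffs = [values[0]]
    row = list(values)
    for _ in range(order):
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def extend(diffs, steps):
    """Tabulate further polynomial values by repeated addition alone."""
    diffs = list(diffs)
    out = []
    for _ in range(steps):
        out.append(diffs[0])
        # each difference absorbs the one below it - pure addition, no multiplying
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return out

# tabulate x^2 for x = 0..6 from just its first three values
seed = [0, 1, 4]
print(extend(difference_table(seed, 2), 7))   # [0, 1, 4, 9, 16, 25, 36]
```

Swap the seed values and the order, and the same purely additive crank-turning tabulates any polynomial- hence a machine whose setup determines its function.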

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned to build one by the British government for military purposes, but he was often brash (once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him) and prized academia above fiscal matters and practicality, and the project stalled. After investing £17,000 in the machine, the government realised that Babbage had switched to working on a new and improved design, the analytical engine, and pulled the plug; the difference engine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the program (which could be a lot more complicated than simple adding or subtracting) and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interface (akin to a modern-day monitor), and it had ‘control flow systems’ incorporated to ensure its programs were performed in the correct order. Since the analytical engine has never been built, we may never know whether it would have worked, but a later model of Babbage’s difference engine was built for the London Science Museum in 1991, yielding results accurate to 31 digits.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

So… why did I publish those posts?

So, here it is (finally)- the conclusion of my current theme of sport and fitness. Today I will, once again, return to the world of the gym, though the idea is almost as applicable to sport and fitness generally.

Every year, towards the end of December, after the Christmas rush has subsided a little and the chocolates are running low, the western world embarks on the year’s final bizarre annual ritual- New Year’s Resolutions. These vary depending on geography (in Mexico, for example, they list not their new goals for the year ahead, but rather the things they hope will happen, generating a similar spirit of soon-to-be-crushed optimism), but there are a few clichéd responses. Cut down on food x or y, get to know so-and-so better, finally sort out whatever you promise to deal with every year, perhaps even write a novel (for the more cocky and adventurous). Perhaps the most clichéd New Year’s Resolution of all, though, is the vague “to exercise more”, or its (often accompanying) counterpart “to start going to the gym”.

Clearly, the world would be a very different place if we all stuck to our resolutions- there’d be a lot more mediocre books out there, for starters. But perhaps the gym example is the most amusing, and obvious, example of our collective failure to stick to our own commitments. Every January, without fail, every gym in the land will be offering discounted taster sessions and membership deals, eager to entice its fresh crop of budding gymgoers. All are quickly swamped with a fresh wave of enthusiasm and flab ready to burn, but by February many will lie practically empty, perhaps 90% of those new recruits having decided to bow out gracefully rather than face the prospect of a lifetime’s slavery to the dumbbell.

So, back to my favourite question- why? What is it about the gym that can so quickly put people off- in essence, why don’t more people use the gym?

One important point to consider is practicality- using the gym requires quite a significant commitment, and while 2-3 hours (ish) a week of actual exercise might not sound like much, once you add travelling time, getting changed, sorting kit and fitting it all around a schedule, such a commitment can quickly begin to take over one’s life.

The gym atmosphere can also be very off-putting, as I know from personal experience. I am not a superlatively good rugby player, but I have my club membership and am entitled to use their gym for free. The reason I don’t is that trying to concentrate on my own (rather modest) personal aims and achievements becomes both difficult and embarrassing when faced with first-teamers who use the gym religiously to bench press 150-odd kilos. All of them are resolutely nice guys, but it’s still an issue of personal embarrassment. It’s even worse in a gym with the dreaded ‘one-upmanship’ atmosphere, where condescending smirks keep the newbies firmly away.

Then, of course, there’s the long-term commitment of gym work. Some (admittedly naively) will first attend a gym expecting to see recognisable improvement immediately- but improvement takes a long time to show, especially for the uninitiated and the young, who are unlikely to have the same level of commitment and technique as the more experienced. The length of time it takes to see any progress can leave many feeling like they’re wasting their time, and that can be as good an incentive as any to quit, disillusioned by the experience.

However, by far the biggest (and ultimately overriding) cause is simply down to laziness- in fact most of the reasons or excuses given by someone dropping their gym routine (including perhaps that last one mentioned) can be traced back to a root cause of simply not wanting to put in the effort. It’s kinda easy to see why- gym work is (and should be) incredibly hard work, and busting a gut to lift a mediocre weight is perhaps not the most satisfying feeling for many, especially if they’re already feeling in a poor mood and/or they’re training alone (that’s a training tip- always train with a friend and encourage one another, but stick to rigid time constraints to ensure you don’t spend all the time nattering). But, this comes despite the fact that everyone (rationally) knows that going to the gym is good for you, and that if we weren’t lazy then we could probably achieve more and do more with ourselves. So, this in and of itself raises another question- why are humans lazy?

Actually, this question is slightly misframed, simply because of the ‘humans’ part- almost anyone who has a pet knows of their frequent struggles for the ‘most time spent lazing around in bed doing nothing all day’ award (for which I nominate my own terrier). A similar competition is also often seen, to the disappointment of many a small child, in zoos across the land. It’s a trend seen throughout nature: give an animal everything it needs in a convenient space and it will quite happily enjoy such a bounty without any desire to do more than is necessary to get it- which is why zookeepers often have problems keeping their charges fit. This is, again, odd, since an unwillingness to do anything seems like an evolutionary disadvantage.

However, despite being naturally lazy, this does not mean that people (and animals) don’t want to do stuff. In fact, laziness actually acts as a vital incentive in the progression of the human race. For an answer, ask yourself- why did we invent the wheel? Answer- because it was a lot easier than having to carry stuff around everywhere, and meant stuff took less work, allowing the inventor (and subsequently the human race) to become more and more lazy. The same pattern is replicated in just about every single thing the human race has ever invented (especially anything made by Apple)- laziness acts as a catalyst for innovation and discovery.

Basically, if more people went to the gym, then Thomas Edison wouldn’t have invented the lightbulb. Maybe.