Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest industry on the planet, worth over 600 billion dollars in sales every year and acting as the driving force behind the cutting edge of medical science- and while we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. The industry is in quite an unusual position in this regard, being the only part of the healthcare system, and indeed of any major public service, that is privatised pretty much the world over.

The reason for this is quite simply one of practicality; the sheer amount of startup capital required to develop even one new drug, let alone to run an entire public R&D service, runs into the hundreds of millions of dollars- something no government would be willing to set aside for so little immediate gain. All the modern companies in the 'big pharma' demographic were formed many decades ago, often on the back of a cheap surprise discovery or suchlike, and are now so big that they are among the only organisations capable of fronting such a large initial investment. There are a few bodies (the National Institutes of Health, the Royal Society, universities) who conduct such research away from the private sector, but they are small in number and also tend to be very old institutions.

Many people, in a slightly different context, have voiced the opinion that people whose primary concern is profit are the last ones we should be putting in charge of our healthcare and wellbeing (although I'm not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. That is not to say, however, that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the 'minor' drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of 'solving' life-threatening illnesses- a potentially futile task anyway- this drug would never have been developed and I would be left even more at the mercy of my fragile digestive system. In the western world, being motivated by profit makes a lot of sense when the aim is to make life just that bit more comfortable. Oh, and they also make the drugs that, y'know, save your life every time you're in hospital.

Now, normally at this point in any 'balanced argument/opinion piece' thing on this blog, I try to come up with another point to keep each side of the argument at a roughly equal 500 words. However, this time I'm going to break that rule and jump straight into the reverse argument. Why? Because I can genuinely think of no more good stuff to say about big pharma.

If I may just digress a little: in the UK and USA (I think, anyway), the effective patent protection on a new drug amounts to roughly ten years of exclusive sales- patents technically run for twenty years from filing, but development and trials eat up much of that, the thinking being that these little capsules can be very valuable things and it wouldn't do to let one company hang onto the sole rights to make them forever. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to avoid having to import from Afghanistan or wherever), so costs are kept relatively low by market forces. It also means that during their grace period, drug companies will do absolutely everything they can to wring cash out of their product; when the antihistamine loratadine (another drug I use relatively regularly for the snuffles) was passing through the last two years of its patent, its market price was quadrupled by the company making it, having first got the market hooked on using it before jacking up the price to extract as much cash as possible. This behaviour is not untypical, and applies to a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the sniffles.

So far, so much normal corporate behaviour. But we must now turn to some practices of the big pharma industry that would make Rupert Murdoch think twice. Drug companies, for example, have a reputation for setting up price-fixing cartels, many of which have been worth several hundred million dollars; one, run through what were technically food-supplement businesses, subsidiaries of the pharmaceutical industry, went on to attract what was then the largest set of fines ever levied in criminal history. And all this despite the fact that physically producing the drugs themselves rarely costs more than a couple of pence per capsule, hundreds of times less than their asking price.

"Oh, but they need to make heavy profits because of the cost of R&D to make all their new drugs." Good point, well made, and it would even be valid if the numbers behind it stacked up. In the USA, the National Institutes of Health last year had a total budget of around $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. At first glance that might look like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in 2006, and even allowing for inflation (and the fact that pharmaceutical profits tend to stay high despite the current economic situation affecting other industries) that works out at only around 10% of company turnover, on average, being spent on R&D. Even accounting for manufacturing costs, salaries and suchlike, a very large slice of that turnover goes straight into profit, making the pharmaceutical industry among the most profitable on the planet.

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.


I’ve been expecting you…

As everybody has been incredibly keen to point out surrounding the release of Skyfall, the James Bond film franchise is currently celebrating its 50th birthday. Yes really- some absolute genius of an executive at Eon managed to get the rights to a film series that has lasted longer than the Cold War (which in and of itself presented a problem when Bond couldn’t simply beat up Commies all of a sudden and they had to start inventing new bad guys). But Bond is, of course, far older than that, and his story is an interesting one.

Ian Fleming had served as an intelligence officer during the Second World War, working with such charismatic spies as Dusko Popov (who ran an information exchange in Lisbon and traded signals over the roulette table), before returning to England during the 1950s. He later famously recalled an event from 1952:

‘Looking out of my window as the rain lashed down during one of those grey austerity-ridden days in post-war Britain, I made two of the biggest decisions of my life; one, never to spend winter in England again; two, to write the spy story to end all spy stories’.

He began writing the first Bond novel, Casino Royale, in February of that year, retiring to his Goldeneye estate in Jamaica to do so (Bond spent the majority of his time, certainly in the earlier novels, in the Caribbean, and Goldeneye would of course later become the name of Pierce Brosnan's first Bond film). He borrowed the name from the American ornithologist (and world-renowned expert on Caribbean birds) James Bond, saying that he originally wanted his character to be a normal person to whom extraordinary things happened. Whilst that brief got distorted somewhat through his various revisions, the drab name, combined with Bond's businesslike, unremarkable exterior, formed a contrast with his steely edge and amazing skill set, and so laid the basis for the infamous MI6 operative (Fleming also admitted to incorporating large swathes of himself into the character).

The books were an immediate hit, a sharp break from the norms of the time, and the film industry was quick to make its move. As early as 1954 a TV version of Casino Royale starring the Americanised 'Jimmy Bond' had hit the screen, but Fleming thought he could do better and started a project to make a proper film adaptation in 1959, with himself acting as screenwriter. That project fell through, and it wasn't until 1961 that Albert 'Cubby' Broccoli (along with partner Harry Saltzman) bought the film rights to the series. This project too was plagued by difficulties; despite Sean Connery being said to 'walk like a panther' when he came to audition for the part, Broccoli's first choice for the role was Cary Grant, and when Grant said he didn't want to be part of a series they turned to James Mason. Mason raised similar objections, and so at last, with some misgivings, they settled on Connery. Said Fleming: 'he's not exactly what I had in mind'.

He had even worse things to say when Connery's first film, Dr. No, was released; 'Dreadful. Simply dreadful' were his words upon seeing the preview screening. He wasn't the only one either; the film received only mixed reviews, and even a rebuke from the Vatican (never noted for its tolerance towards bikinis). However, Dr. No did include a few of the features that would later come to define Bond; his gun, for instance. For the first five Bond novels, Fleming had him using a Beretta 418, but munitions expert Geoffrey Boothroyd subsequently wrote to Fleming criticising the choice. Describing the weapon as 'a lady's gun' (a phrase Fleming himself would later use), he recommended the Walther PPK as an alternative. Fleming loved the suggestion, incorporating an adapted version of the exchange into his next book (which was, coincidentally, Dr. No) and giving the name of Bond's armourer as Major Boothroyd by way of thanks. Boothroyd's role as quartermaster eventually led to his more famous nickname: Q.

Not that any of this saved the film, or indeed From Russia With Love, which succeeded it. Reviews did improve for that one, if only for its better quality of execution, but many still railed against the very concept of a Bond movie and it hardly kickstarted the franchise. What it did do, however, was prompt the release of the film that did: Goldfinger.

This was the film that cemented Bond's reputation and laid the tropes on the table for all subsequent films to follow. Pussy Galore (Honor Blackman) became the definitive Bond girl, Sean Connery the definitive Bond (a reputation possibly enhanced by the contrast between his portrayal and the aggressive, chauvinistic 'semi-rapist' of the books), and his beautiful silver Aston Martin DB5 the definitive Bond car- one such car sold in the US some years ago for over 2 million dollars. According to many, Goldfinger remains the best Bond film ever made (although personally I'm quite fond of Live and Let Die, The World is Not Enough and Casino Royale), though rather sadly Ian Fleming died shortly before its release.

Since then, the franchise has had to cope with a whole host of ups and downs. After You Only Live Twice (the film in which supervillain Ernst Stavro Blofeld's face is first revealed), Connery announced that it would be his last Bond film, and his replacement George Lazenby appeared just once (in On Her Majesty's Secret Service, to a mixed reception) before claiming that a gun-'em-down chauvinist like Bond couldn't survive the 'peace & love' sentiment of the late 60s (Lazenby was also, on an unrelated note, the youngest man ever to play Bond, at just 30). After Connery was tempted back for one more film (Diamonds Are Forever) by an exorbitant salary, the gauntlet passed to Roger Moore, who simultaneously holds the records for oldest Bond (57 by the end) and most films (seven, over a twelve-year period). Moore's more laid-back, light-hearted and some might say graceless approach to the role won him some plaudits by its contrast to Connery's performance, and despite increasingly negative audience feedback over time this lighter style became ever more necessary as the series came under scrutiny. The feminist lobby (among others) had been gaining voice, and whilst it had once been pleased at the 'freedom' demonstrated by the likes of Playgirls and other burlesque performers (seriously, that was the attitude taken in the 50s), by now it saw them as the by-products of a chauvinist society. Bond's all-action, highly sexual and male-dominated atmosphere quickly came under fire, forcing the character to retreat into steadily tamer plots. The series was also rapidly running out of ideas (the same director had been working on the project for several films by this point), retreating into petty jokes (e.g. the name 'Holly Goodhead') and generally mediocre filmmaking. It limped on with Moore until A View To A Kill, and for two more films with Timothy Dalton after that, but then took a six-year break whilst another Dalton production fell through. Some felt that the franchise was on its last legs, that a well-liked and iconic character would soon have to wink out of existence- but then came Pierce Brosnan.

Whatever you do or don't think of Brosnan's performances (I happen to like them; others think he's fairly rubbish), there can be no denying that GoldenEye was the first Bond film to really catapult the franchise into the modern era of filmmaking. With fresh camera techniques to make it at least look new, a new lead actor and a long break to give everyone time to forget about the character, there was a sense of this being something of a new beginning for Bond. And it was; seven films later, with Daniel Craig now at the helm, the series is in rude health, and Bond is such a prominent, well-loved and symbolic character that Craig reprised his 007 role to 'skydive' into the stadium alongside the Queen during the London 2012 opening ceremony (which I'm sure you all agree was possibly the best bit of the entire games). There is something about Bond that fundamentally appeals to us; all the cool, clever gadgets, the cars we could only ever dream of, the supermodels who line his bed (well, maybe a few people would prefer to turn a blind eye to some of that), and the whole smooth, suave nature that defines his character make him such a fixed trope that he seems impossible for our collective psyche to forget. We can forgive the bad filmmaking, the formula of the character, the lack of the artistry that puts other films in line for Oscars, simply because… he's Bond. He's fun, and he's awesome.

Oh, and on a related note, go and see Skyfall. It’s absolutely brilliant.

Other Politicky Stuff

OK, I know I talked about politics last time, and no, I don't want to start another series on this. But when writing my last post I found myself getting very rapidly sidetracked when I tried to use voter turnout as a way of demonstrating that everyone hates their politicians, so I thought I might dedicate a post to that particular train of thought as well.

You see, across the world, but predominantly in the developed west where the right to choose our leaders has been around for ages, fewer and fewer people are turning out to vote each time. By way of an example, Ronald Reagan famously won a 'landslide' victory when coming to power in 1980- but actually attracted the votes of only around 29% of all eligible voters. In some countries, such as Australia, voting is mandatory, but proposals to introduce such a system elsewhere have frequently met with opposition and claims that it goes against people's democratic right to abstain (this argument is largely rubbish, but no time for that now).

A lot of reasons have been suggested for this trend, among them political apathy, laziness, and the idea that having had the right to choose our leaders for so long means we no longer find the idea special or worth exercising. By way of contrast, the recent presidential election in Venezuela – a country that underwent something of a political revolution just over a decade ago and has a history of military dictatorships, corruption and general political chaos – saw a voter turnout of nearly 90% (incumbent president Hugo Chavez taking 54% of the vote to claim his fourth term of office, in case you were interested), making Reagan's figures look positively anaemic by comparison.

However, another, more interesting (hence why I'm talking about it) argument has also been proposed, and one that makes an awful lot of sense. In Britain there are three major parties competing for every seat, plus perhaps one or two others standing in your local area. In the USA, your choice is pretty much limited to either Obama or Romney, especially if you're trying to avoid the ire of the rabidly aggressive 'NO VOTE IS A VOTE FOR ROMNEY AND HITLER AND SLAUGHTERING KITTENS' brigade. Basically, the point is that your choice of who to vote for is usually limited to fewer than five people, and given the number of different issues they hold views on that mean something to you, the chance of any one of them matching your precise political philosophy is pretty close to zero.

This has wide-reaching implications extending to every corner of democracy, and is indicative of one simple fact: when the US Declaration of Independence was drafted well over two centuries ago and the founding fathers drew up what would become the template for modern democracy, it was not designed for a state, or indeed a world, as big and multifaceted as ours. That template was founded on the idea that one vote was all that was needed to keep a government in line and following the will of the masses, but in our modern society (and quite possibly also in the one they were designing for) that is simply not the case. Once in power, a government can do almost whatever it likes (I said ALMOST) and still be confident of a significant proportion of the country voting for it next time; not only that, but its unpopular decisions can often be 'balanced out' by more popular, mass-appeal ones, rather than its every decision being the direct will of the people.

One solution would be a system more akin to ancient Athenian democracy, in which every issue is put to a referendum that the government must obey. However, this presents just as many problems as it answers; referendums are very expensive and time-consuming to set up and run, and if they became commonplace they could further entrench the existing problem of voter apathy. Only the most actively political would vote in every one, returning real power to the hands of a relative few who, unlike previously, haven't been voted in. Perhaps the most pressing issue with this solution, though, is that it renders the role of MPs, representatives, senators and even Prime Ministers and Presidents rather pointless. What is the point of choosing the people who really care about the good of their country, who have worked hard to slowly rise up the ranks, and giving them a chance to determine how their country is governed, if we are merely going to reduce them to administrators and form-fillers? Despite the problems I mentioned last time out, of all the people we've got to choose from, politicians are probably the best people to have governing us (or at least the most reliably OK, even if only because we picked them).

Plus, politics is a tough business, and the will of the people is not necessarily what's best for the country as a whole. Take Greece at the moment: massive protests are (or at least were; I know everyone's still pissed off about it) underway over the austerity measures imposed by the government, because of the crippling economic suffering they are sure to cause. The politicians, however, believe such measures are necessary and are refusing to budge- desperate times call for difficult decisions (OK, I know there were elections that almost entirely centred on this issue and that sided with austerity, but shush- you're ruining my argument). To pick another example, President Obama (like several Democratic candidates before him) met with huge opposition to the idea of introducing a US national healthcare system, basically because Americans hate taxes. Nonetheless, it is something he believes in very strongly and has finally managed to get through Congress; if he wins the election later this year, we'll see how well it works out in practice.

In short, then, there are far too many issues, too many boxes to balance and ideas to question, for all protest in a democratic society to take place at the ballot box. Is there a better solution than waving placards in the street and sending strongly worded letters? Do those methods work at all? In all honesty, I don't know- that whole 'internet petitions get debated in parliament' thing the British government recently imported from Switzerland is a nice idea, but, just like more traditional forms of protest, it gives those in power no genuine obligation to change anything. If I had a solution, I'd probably be running for government myself (which is one option that definitely works- just don't all try it at once), but as it is I am nothing more than an idle commentator thinking about an imperfect system.

Yeah, I struggle for conclusions sometimes.

The President Problem

As one or two of you may have noticed, our good friends across the pond are getting dreadfully overexcited at the prospect of their upcoming election later this year, and America is gripped by the paralyzing dilemma of whether a Mormon or a black guy would be worse to put in charge of their country for the next four years. This has got me, when I have nothing better to do, having the occasional think about politics, politicians and the whole mess in general, and about how worked up everyone seems to get over it.

It is a long-established fact that the fastest way for a politician to get himself hated, apart from murdering some puppies on live TV, is to actually get himself into power. In opposition, constantly biting at the heels of those in power, a politician can have lots of fun making snarky comments and criticisms about his opponents' ineptitude; once in power, he has no choice but to sit quietly and absorb the insults, since his opponents are rarely doing anything interesting or important enough to warrant a good shouting. When in power, one constantly has the media jumping at every opportunity to ridicule decisions and throw around labels like 'out of touch' or just plain old 'stupid', and even the public seem to make it their business to hate everything their glorious leader does in their name. Nobody likes their politicians, and the only way to go once in power is, it seems, down.

An awful lot of reasons have been suggested for this trend, including the fact that we humans do love to hate stuff- but more on that another time, because I want to make another point. Consider why you, or anyone else for that matter, vote for your respective candidate during an election. Maybe it’s their dedication to a particular cause, such as education, that really makes you back them, or maybe their political philosophy is, broadly speaking, aligned with yours. Maybe it’s something that could be called politically superficial, such as skin colour; when Robert Mugabe became Prime Minister of Zimbabwe in 1980 it was for almost entirely that reason. Or is it because of the person themselves; somebody who presents themselves as a strong, capable leader, the kind of person you want to lead your country into the future?

Broadly speaking, we have to consider the fact that it is not just someone's political alignment that gets a cross next to their name; it is who they are. To even become a politician, somebody needs to be intelligent, diligent, very strong in their opinions and beliefs, have a good understanding of all the principles involved and be an active political contributor. To persuade their party to let them stand, they need to be good with people, able to excite their peers and seniors, demonstrate a political philosophy aligned with that of the kind of people who choose these things, and be willing to lay everything, including their pride, on the line in pursuit of a chance to run. To get elected, they need to be charismatic, tireless workers, dedicated to their cause, very good at getting their point across and handling the associated PR, have no skeletons in the closet and be prepared to get shouted at by constituents for the rest of their career. To become the leader of a country, they need to have mastered that art to within a pinprick of perfection.

All of these requirements are what stop the bloke in the pub who has a reason why the government is wrong about everything from ever actually having a chance to act on his opinions, and they weed out a lot of people with one good idea from ever getting that idea out there- it takes an awful lot more than strong opinions, and reasons why they would work, to actually become a politician. However, this process has a habit of moulding people into politicians, rather than letting politicians be people, and that is often to the detriment of people in general. Everything becomes about what will let you stay in power, what you will have to give up in order to push the things you feel really strongly about, and how many concessions you will have to make for the sake of popularity, just so you can do a little good with your time in power.

For instance, a while ago somebody compiled a list of the key demographics of British people (and gave them all stupid names like 'Dinky Developers' or whatever), expanded to include information about typical geographical spread, income and, among other things, political views. Two of those groups have been identified by the three main parties as the most likely to swing their vote one way or the other (being middle-of-the-road liberal types without a strong preference either way), and are thus the target of an awful lot of vote-fishing. In the 2005 election, some 80% of campaign funding (I've probably got this stat wrong; it's been a while since I heard it) was directed towards swinging the votes of these key demographics in key seats; never mind whether the policies dangled in front of them were part of their exponents' actual political views, or whether they would ever be enacted to any great degree- they had to go in just to appease those voters. And, of course, when power eventually does come their way, many of those promises prove to be an undeliverable part of the party's vision for a healthier future for the country.

This basically means that only 'political people'- those suited to the hierarchical mess of a workplace environment and the PR mayhem that comes with the job- ever get a shot at the top job, and these are not necessarily the people best suited to getting the best out of a country. And that, in turn, means everybody gets pissed off with them. All. The. Bloody. Time.

But, unfortunately, this is the only way that the system of democracy can ever really function, for human nature will always drag it back to some semblance of this no matter how hard we try to change it; and that’s if it were ever to change at all. Maybe Terry Pratchett had it right all along; maybe a benevolent dictatorship is the way to go instead.

*”It is sweet and right to die for your country”

Patriotism is one of humankind's odder traits, at least on the face of it. For many hundreds of years, dying in a war hundreds of miles from home, defending (or stealing on behalf of) what were essentially the business interests and egos of rich men too powerful to even acknowledge your existence, was considered the absolute pinnacle of honour, the ultimate way to bridge the gap between this world and the next. This near-universal image of the valiance of dying for your country was heavily damaged by the First World War, which came close to crushing "the old lie: Dulce Et Decorum Est/Pro Patria Mori*" (to quote Wilfred Owen), but even nowadays soldiers fighting in a dubiously moral war that has killed far more people than the events it was 'payback' for are regarded as heroes, their deaths always granted both respect and news coverage (and rightly so). Both the existence and the extent of patriotism become even more bizarre when we look away from the field of conflict; national identity is one of the most hotly argued and defended topics we have, stereotypes and national slurs form the basis for a vast range of insults, and the level of passion and pride in 'our' people and teams on the sporting stage is quite staggering to behold (as the recent London 2012 games showed to a truly spectacular degree).

But… why? What’s the point? Why is ‘our’ country any better than everyone else’s, to us at least, just by virtue of us having been born there by chance? Why do we feel such a connection to a certain group of sportspeople, many of whom we might hate as people more than any of their competitors, simply because we share an accent? Why are we patriotic?

The source of the whole business may have its roots in my old friend, the hypothetical neolithic tribe. In such a situation- one so small that everybody knows and constantly interacts with everyone else- pride in the achievements of one's tribe is understandable. Every achievement made by your tribe is of direct benefit to you, and is therefore worthy of celebration. Over an extended period of time, during which your tribe may enjoy a run of success, you start to develop a sense of pride that you are achieving so much, and that you are doing better than the tribes around you.

This may, at least to a degree, have something to do with why we enjoy successes that are, on the scale of countries, wholly unconnected to us but nonetheless achieved in the name of our extended 'tribe'. What it doesn't explain so well is the whole 'through thick and thin' mentality- that of supporting your country's endeavours through its failings as well as its successes, of continuing to salvage a vestige of pride even when your country's name has been dragged through the mud.

We may find a clue to this by, once again, turning our attention to the sporting field, this time on the level of clubs (who, again, receive a level of support and devotion wholly out of proportion to their achievements, and who are a story in their own right). Fans are, obviously, always proud and passionate when their side is doing well- but just as important to be considered a ‘true’ fan is the ability to carry on supporting during the days when you’re bouncing along the bottom of the table praying to avoid relegation. Those who do not, either abandoning their side or switching allegiance to another, are considered akin to traitors, and when the good times return may be ostracized (or at least disrespected) for not having faith. We can apply this same idea to being proud of our country despite its poor behaviour and its failings- for how can we claim to be proud of our great achievements if we do not at least remain loyal to our country throughout its darkest moments?

But to me, the core of the whole business is simply a question of self-respect. Like it or not, our nationality is a huge part of our personal identity, a core segment of our identification and being that cannot be ignored by us, for it certainly will not be by others. We are, to a surprisingly large degree, identified by our country, and if we are to have a degree of pride in ourselves, a sense of our own worth and place, then we must take pride in all facets of our identity- not only that, but a massed front of people prepared to be proud of their nationality is, in and of itself, something, or at least part of something, to be proud of. It may be irrational, illogical and largely irrelevant, but taking pride in every pointless achievement made in the name of our nation is a natural part of identifying with, and being proud of, ourselves and who we are.

My apologies for the slightly shorter than normal post; I've been feeling a little run down today. I'll try and make it up next time…

The End of The World

As everyone who understands the concept of buying a new calendar when the old one runs out should be aware, the world is emphatically not due to end on December 21st this year, the Mayan 'prophecy' in question basically amounting to one guy's arm getting really tired and deciding 'sod carving the next year in, it's ages off anyway'. Most of you should also be aware of the kind of cosmology theories that talk about the end of the world/the sun's expansion/the universe committing suicide, which are always hastily suffixed with an 'in 200 billion years or so', the point being that there's really no need to worry and that the world is probably going to be fine for the foreseeable future; or at least, that by the time anything serious does happen we're probably not going to be in a position to complain.

However, when thinking about this, we come across a rather interesting, if slightly macabre, gap; an area nobody really wants to talk about thanks to a mixture of lack of certainty and simple fear. At some point in the future, we as a race and a culture will surely not be here. Currently, we are. Therefore, between those two points, the human race is going to die.

Now, from a purely biological perspective there is nothing especially surprising or worrying about this; species die out all the time (in fact we humans are getting so good at inadvertent mass slaughter that somewhere between 2 and 20 species are estimated to go extinct every day), and others evolve and adapt to slowly change the face of the earth. We humans, with our couple of hundred thousand years of existence and especially our mere few thousand years of organised mass society, are the merest blip in the earth's long and varied history. But we are also unique in more ways than one; the first species to, to a very great extent, remove ourselves from the endless fight for survival and start taking control of events once so far beyond our imagination as to be put down to the work of gods. If the human race is to die, as it surely will one day, we are simply getting too smart and too good at thinking about these things for it to be the kind of gradual decline and changing of a delicate ecosystem that characterises most 'natural' extinctions. If we are to go down, it's going to be big and it's going to be VERY messy.

In short, with the world staying as it is and as it has been for the past few millennia, we're not going to be dying out any time soon. This is not biologically unusual either, for when a species goes extinct it is usually the result of either direct competition from another species driving it to starvation, or a change in environmental conditions leaving it poorly adapted for the environment it finds itself in. But once again, human beings appear to be rather above all this; having carved out what isn't so much an ecological niche as a categorical redefining of the way the world works, there is no other creature that could be considered our biological competitor, and the thing that has always set humans apart ecologically is our ability to adapt. From the ice ages, where we hunted mammoth, to the African deserts, where the San people still live in isolation, there are very few things the earth can throw at us that are beyond the wit of humanity to live through- especially a human race that is beginning to look upon terraforming and cultured food as a pretty neat idea.

So, if our environment is going to change sufficiently for us to begin dying out, things are going to have to change not only in the extreme, but very quickly as well (well, quickly in geological terms at least). This required pace of change limits the number of potential extinction options to a very small, select list. Most of these you could make a disaster film out of (and in most cases somebody has), but one that is slightly less dramatic (although they still ended up making a film about it) is global warming.

Some people are adamant that global warming is either a) a myth, b) not anything to do with human activity or c) both (which kind of seems a contradiction in terms, but hey). These people can be safely categorized under ‘don’t know what they’re *%^&ing talking about’, as any scientific explanation that covers all the available facts cannot fail to reach the conclusion that global warming not only exists, but that it’s our fault. Not only that, but it could very well genuinely screw up the world- we are used to the idea that, in the long run, somebody will sort it out, we’ll come up with a solution and it’ll all be OK, but one day we might have to come to terms with a state of affairs where the combined efforts of our entire race are simply not enough. It’s like the way cancer always happens to someone else, until one morning you find a lump. One day, we might fail to save ourselves.

The extent to which global warming looks set to screw around with our climate is currently unclear, but some potential scenarios are extreme to say the least. Nothing is ever quite going to match up to the picture painted in The Day After Tomorrow (for the record, the Gulf Stream would take around a decade to shut down if/when it does so), but some scenarios are pretty horrific. Some predict the flooding of vast swathes of the earth's surface, including most of our biggest cities, whilst others predict mass desertification, the collapse of many of the ecosystems we rely on, or polar conditions sweeping across Northern Europe. The prospect of the human population being decimated is a very real one.

But destroyed? Totally? After thousands of years of human society slowly getting the better of and dominating all that surrounds it? I don't know about you, but I find that quite unlikely- at the very least, it seems to me like it's going to take more than just one wave of climate change to finish us off completely. So, if climate change is unlikely to kill us, then what else is left?

Well, in rather a nice, circular fashion, cosmology may have the answer, even if we don't somehow manage to pull off a miracle and hang around long enough to let the sun's expansion get us. We may one day be able to blast asteroids out of existence. We might be able to stop the supervolcano that is Yellowstone National Park blowing itself to smithereens when it next erupts (we might also fail at both of those things, and let either wipe us out, but ho hum). But could we ever prevent a nearby star hitting us with a gamma-ray burst, of the kind hypothesised to have caused one of the largest extinctions in earth's history last time it happened? Well, we'll just have to wait and see…

NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. Countless examples have been demonstrated over the years, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get around the awe-inspiring concept of the relative amounts of things. However, this hasn't prevented this 'neat little tool' spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I'm British, live with the spelling) first tentatively applied itself to shapes and geometry, before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after it discovered something useful, and finally throwing it all together into a melting pot of cross-genre mayhem that ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that much of it has no discernible purpose other than the sake of its own existence.

This is not to say that mathematics is not a useful field, far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing, huge chunks of astronomy and classical scientific experiment were and are reliant on the application of geometric and trigonometric principles, mathematical modelling has allowed us to predict behaviour ranging from economics and statistics to the weather (albeit with varying degrees of accuracy), and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is 'squaring' it (since a number 'squared' is equal to the area of a square with sides of that length). Naturally, there is a way of reversing this function, known as finding the square root of a number (i.e. square rooting the square of a number will yield the original number). However, any real number squared comes out positive- a negative times a negative makes a positive- so no real number squared makes a negative, and hence there is no such thing as the square root of a negative number such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about 'real' numbers, but maths decided that it didn't want to be unable to square root a negative number, so it had to find a way round the problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary, made-up axis sitting at right angles to the ordinary number line, and which can in no way exist in real life.
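If you want to see just how made-up and yet how usable i is, most programming languages will happily play along. Here's a quick sketch in Python (chosen purely for illustration), where i is written as 1j: the 'real' square root function refuses to touch -1, while the complex one hands back i without complaint.

```python
# A quick illustration of the idea above: the real numbers have no square
# root of -1, but the "invented" quantity i (written 1j in Python) squares
# to -1 quite happily.
import cmath
import math

# math.sqrt only works on real numbers, so -1 is rejected outright...
try:
    math.sqrt(-1)
except ValueError as error:
    print("Real square root of -1:", error)   # "math domain error"

# ...whereas cmath is happy to hand back i.
i = cmath.sqrt(-1)
print(i)          # 1j
print(i * i)      # (-1+0j), i.e. i squared really is -1
```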

Admittedly, i has turned out to be useful. When dealing with electromagnetic waves and alternating currents, physicists and engineers routinely use real and imaginary components to keep track of the different parts of an oscillation, but its main purpose was only ever to satisfy the OCD nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician's arsenal, something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world: Fermat's last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed for whole numbers a, b and c. The same was true for all other powers greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus' Arithmetica, to have a proof of this fact that the margin was too small to contain. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his 'marvellous proof') took one man, Andrew Wiles, the best part of a decade to complete. His proof involved showing that any whole-number solution to Fermat's equation could be turned into an incredibly weird curve with no obvious real-world meaning, and that all curves of that type must have a counterpart of an equally abstract kind; since the 'Fermat curve' was too weird to have such a counterpart, neither it, nor the solution that produced it, could exist.
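To get a feel for what the theorem actually claims (and emphatically not for how Wiles proved it), here's a toy brute-force search in Python, with the search limits picked arbitrarily for illustration: squares give plenty of solutions, cubes give none.

```python
# Not remotely a proof, obviously - just a toy search showing what the theorem
# claims: a^n + b^n = c^n has whole-number solutions for n = 2 (Pythagoras)
# but none turn up for n = 3 or higher, however far you look.
def solutions(n, limit):
    """Return all (a, b, c) with a <= b <= limit and a**n + b**n == c**n."""
    found = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            c = round((a**n + b**n) ** (1 / n))
            if c**n == a**n + b**n:
                found.append((a, b, c))
    return found

print(solutions(2, 20))   # plenty: (3, 4, 5), (5, 12, 13), (6, 8, 10)...
print(solutions(3, 200))  # an empty list, as Fermat claimed and Wiles proved
```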

To a mathematician, this was the holy grail; not only did it finally lay to rest an ages-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is now known as the modularity theorem (formerly the Taniyama-Shimura conjecture). To anyone interested in the real world, this exercise made no contribution whatsoever- apart from satisfying a few nerds, nobody's life was made easier by the solution, it didn't solve any real-world problem, and it did not make the world a tangibly better place. In this respect, then, it was a total waste of time.

However, despite everything I've just said, I'm not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things; among them ridiculous, confusing, full of contradictions and potential slip-ups and, in a field where the major prizes are won at a younger age than anywhere else in STEM, apparently full of people ready to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is simply what makes the world make sense, and at its heart that is all it was ever created to do. And if some people want their lives to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts and bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things and who is interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I've said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything by itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, via the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the computer's medium of information and instruction (binary data) into ours: words and language. Obviously, computers do not understand that the buttons we press on our keyboards have symbols on them, that these symbols mean something to us, or that they are built to produce the same symbols on the monitor when we press them- but we humans do, and that makes computers actually usable for 99.99% of the world's population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing. The key thing is what happens when that text is committed to memory, for here the language's own software kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts its words into instructions. Built into the workings of each is a list of recognised 'words', each with a corresponding, but entirely different, string of binary data associated with it representing the appropriate set of 'ons and offs' that will get the computer to perform the correct task. This conversion works in one of two ways: an 'interpreter' is a system whereby the program is stored just as words and converted to 'machine code' on the fly as it is run, but the more common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to 'compile' your written code into an executable program in data form. This means the written source file isn't needed just to run the program, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
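To make that 'list of words, each tied to an instruction' idea a little more concrete, here's a very crude toy interpreter sketched in Python- the three-word language, the stack and the program are all entirely made up for illustration, and real interpreters and compilers are vastly more sophisticated (a compiler would translate the whole program into machine code once, up front, rather than looking each word up as it goes).

```python
# A toy interpreter for a made-up three-word language: look each word up in a
# dictionary and run the matching instruction. This is just the principle, not
# how any real language actually does it.
def run(source):
    stack = []
    # The built-in "dictionary of words": each word maps to an action
    # (here a Python function, standing in for a string of machine code).
    instructions = {
        "PUSH": lambda arg: stack.append(int(arg)),
        "ADD": lambda _: stack.append(stack.pop() + stack.pop()),
        "PRINT": lambda _: print(stack[-1]),
    }
    for line in source.strip().splitlines():
        word, _, arg = line.strip().partition(" ")
        instructions[word](arg)   # translate the word into its action

program = """
PUSH 2
PUSH 3
ADD
PRINT
"""
run(program)   # prints 5
```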

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades, and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up to determine exactly which gate the inputs go through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged 'bits' on the disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises CPU performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won't work on a Mac, and why Android (Linux-based) smartphones couldn't run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different approaches prioritise (or are better at dealing with) different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital things a modern OS does on our home computers are the jobs that at first glance seem secondary: moving stuff around and scheduling. A CPU core cannot process more than one task at once, meaning that it should not, in theory, be possible for a computer to multitask; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, the kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quick to get at) to the hard disk (where it isn't) in order to free up space (this is how computers with very little free memory manage to run programs at all, and the time taken to shuffle large amounts of data back and forth is why they run so slowly), and it must cope when a program needs to access data from a part of the computer that has not been specifically allocated to it.
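The scheduling trick is easier to picture with a toy example. Below is a rough sketch in Python of a round-robin scheduler- the task names and 'units of work' are invented purely for illustration, and a real kernel juggles priorities, interrupts and a thousand other things besides- but the core idea of giving each process a tiny slice of attention in turn is the same.

```python
# A toy round-robin scheduler: nothing here runs "simultaneously", the
# scheduler simply gives each task a tiny slice of attention in turn,
# which at real CPU speeds looks instantaneous.
from collections import deque

def scheduler(tasks, slice_size=1):
    """tasks: dict of name -> units of work remaining (all values made up)."""
    queue = deque(tasks.items())
    while queue:
        name, remaining = queue.popleft()
        done = min(slice_size, remaining)
        print(f"running {name} for {done} unit(s)")
        if remaining - done > 0:
            queue.append((name, remaining - done))   # not finished: requeue it

scheduler({"boot_sequence": 3, "minesweeper": 2, "antivirus_scan": 2})
```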

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit that is not built from multiple separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC they are all stuck in the same place and assembled as one. The main advantage of this is that, since all the components don't have to be manually stuck to one another but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer precision required of their manufacture surely makes it one of the marvels of the engineering world.

But… how do they make a computer work? Well, let's start by looking at a computer's memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which contains one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- and for the sake of simplicity and reliability, what matters is simply whether the current in a given memory circuit is 'on' or 'off'. If the switch is on, the circuit is read as a 1; if it is off, a 0. These memory circuits are grouped together, and so each group holds an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. (On a hard disk, patches of magnetically charged material represent the binary information instead of memory circuits.)

Each little memory circuit, with its simple on/off value, represents one bit of information. Eight bits grouped together form a byte, and there may be billions of bytes in a computer's memory. The key task, therefore, is to ensure that all the data a computer needs to process is written in binary form- and the same sort of pattern of 1s and 0s might be needed to represent anything from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
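As a quick illustration of the 'same bits, different meanings' point, here's a snippet of Python showing how the letter 'A' and the colour of a single (made-up) orange-ish pixel both boil down to patterns of eight bits:

```python
# The same pattern of 1s and 0s means whatever we decide it means. Here the
# letter 'A' and one pixel's colour both come out as strings of bits
# (the pixel values are just example numbers, picked for illustration).
letter = "A"
print(format(ord(letter), "08b"))        # 01000001 - one byte for one character

pixel = (255, 165, 0)                    # red, green, blue: one byte each
print(" ".join(format(value, "08b") for value in pixel))
# 11111111 10100101 00000000 - three bytes for one pixel's colour
```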

A computer's tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. A gate takes one or two binary inputs, each either 'on' or 'off', and turns them into an output. There are three basic types: AND gates (if both inputs equal 1, the output is 1; otherwise it is 0), OR gates (if either input equals 1, the output is 1; only if both are 0 is the output 0), and NOT gates (if the input is 1, the output is 0, and vice versa). The NOT gate is the only one of these with a single input, and combinations of these gates can produce other functions too, such as NAND (not-and) or XOR gates (exclusive OR: the output is 1 if exactly one input is 1, and 0 if both inputs are the same). A computer's CPU (central processing unit) contains huge numbers of these, connected up in such a way as to link the various parts of the computer together appropriately, translate the instructions held in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you'll pardon the pun) of information to trigger the correct process for the computer to perform.
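Since gates are really just little rules about 1s and 0s, they're easy to play with in software. Here's a sketch in Python of the gates described above (a real gate is a handful of transistors rather than a function, but the logic is identical):

```python
# The three basic gates from the paragraph above, written as one-line rules
# on 1s and 0s, plus the combinations (NAND, XOR) built out of them.
def AND(a, b):
    return 1 if a == 1 and b == 1 else 0

def OR(a, b):
    return 1 if a == 1 or b == 1 else 0

def NOT(a):
    return 0 if a == 1 else 1

def NAND(a, b):
    return NOT(AND(a, b))

def XOR(a, b):                      # 1 only when the inputs differ
    return AND(OR(a, b), NAND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```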

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and then the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and declares that areas A, B and C of the screen are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
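
For what it’s worth, that ‘IF this AND this happens, do that’ logic can be sketched in a few lines of Python- with the huge caveat that the names below (handle_click, highlight_icon and so on) are entirely made up for illustration, and a real CPU does all of this with raw binary signals rather than tidy functions:

```python
# A toy model of the click example above; every name here is hypothetical.
def highlight_icon():
    old_shade = (100, 150, 200)                            # one pixel's red/green/blue
    new_shade = tuple(value + 30 for value in old_shade)   # the 'ALU' doing its sums
    print("pixel shade changed to", new_shade)

def open_program():
    print("signal sent to memory: open program X")

def handle_click(mouse_over_icon, button_pressed):
    if mouse_over_icon and button_pressed:                 # the AND gate
        highlight_icon()
        open_program()

handle_click(mouse_over_icon=True, button_pressed=True)
```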

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…

Today

Today, as I’m sure very few of you will be aware (hey, I wasn’t until a few minutes ago), is World Mental Health Day. I have touched on my own personal experiences of mental health problems before, having spent the last few years suffering from depression, but I feel today is an appropriate time to bring it up again, because this is an issue that, in the modern world, cannot be talked about enough.

Y’see, conservative estimates claim at least 1 in 4 of us will suffer from a mental health problem at some point in our lives, be it a relatively temporary one such as post-natal depression or a lifelong battle with the likes of manic depressive disorder or schizophrenia. Mental health problems are also among the top five biggest killers in the developed world, through a mixture of suicide, drug use, self-harm and self-neglect, and as such there is next to zero chance that you will go through your life without somebody you know very closely suffering, or even dying, as a result of what’s going on in their upstairs. If mental health disorders were a disease in the traditional sense, this would be labelled a red-alert, emergency-level pandemic.

However, despite the prevalence and danger associated with mental health problems, the majority of sufferers suffer in silence. Some have argued that the two correlate due to the mindset of sufferers, but this claim does not change the fact that 9 out of 10 people suffering from a mental health problem say they feel a degree of social stigma and discrimination against their disability (and yes, that description is appropriate; a damaged mind is surely just as debilitating as a damaged body, if not more so), and this prevents them from coming out to their friends about their suffering.

The reason for this is an all too human one; we humans rely heavily, perhaps more so than any other species, on our sense of sight to formulate our mental picture of the world around us, from the obviously there to the unsaid subtext. We are, therefore, easily able to identify with and relate to physical injuries and obvious behaviours that suggest something is ‘broken’ with another person’s body, and the fact that they are injured or disabled is clear to us. However, a mental problem is confined to the unseen recesses of the brain, hidden away from the physical world and hard for us to identify as a problem. We may see people acting down a lot, hanging their head and giving other hints through their body language that something’s up, but everybody looks that way from time to time and it is generally considered a regrettable but normal part of being human. If we see someone acting like that every day, our sympathy for what we perceive as a short-term issue may often turn into annoyance that they aren’t resolving it, creating a sense that they are in the wrong for being so unhappy the whole time and not taking a positive outlook on life.

Then we must also consider the fact that mental health problems tend to place a lot of emphasis on the self, rather than one’s surroundings. With a physical disability, such as a broken leg, the source of our problems, and our worry, is centred on the physical world around us; how can I get up that flight of stairs, will I be able to keep up with everyone, what if I slip or get knocked over, and so on. However, when we suffer from depression, anxiety or whatever, the source of our worry is generally to do with our own personal failings or problems, and less with the world around us. We might continually beat ourselves up over the most microscopic of failings and tell ourselves that we’re not good enough, or be filled by an overbearing, unidentifiable sense of dread that we can only identify as emanating from within ourselves. Thus, when suffering from mental issues we tend to focus our attention inwards, creating a barrier between our suffering and the outside world and making it hard to break through the wall and let others know of our suffering.

All this creates an environment in which mental health is a subject not to be broached in general conversation, one that just doesn’t get talked about; not so much because it is a taboo of any kind but more due to a sense that it will not fit into the real world that well. This is even a problem in counselling environments specifically designed to address such issues, as people are naturally reluctant to let it all out or even to ‘give in’ and admit there is something wrong. Many people who take a break from counselling, me included, confident that we’ve come a long way towards solving our various issues, are for this reason resistant to the idea of going back if things take a turn for the worse again.

And it’s not as simple as making people go to counselling either, because quite frequently that’s not the answer. Some people go to the wrong place and find their counsellor is not good at relating to and helping them; others may need medication or some such rather than words to get them through the worst times, and for others counselling just plain doesn’t work. But none of this detracts from the fact that no mental health condition, in any person, however serious, is so bad as to be untreatable, and the best treatment I’ve ever found for my depression has been those moments when people are just nice to me, and make me feel like I belong.

This, then, is the two-part message of today, of World Mental Health Day, and of every day and every person across the world: if you have a mental health problem, talk. Get it out there, let people know. Tell your friends, tell your family, find a therapist and tell them, but break down the walls of your own mental imprisonment and let the message out. This is not something that should be forever bottled up inside us.

And for the rest of you, those who are not suffering, or at least not at the moment, your task is perhaps even more important: be there. Be prepared to hear that someone has a mental health problem, be ready to offer them support, a shoulder to lean on, but most importantly, just be a nice human being. Share a little love wherever and to whoever you can, and help to make the world a better place for every silent sufferer out there.