War in Three Dimensions

Warfare has changed a lot in the last century. Horses have become redundant, guns have become reliable, machine guns have become light enough to carry and bombs have become powerful enough to totally annihilate a small country if the guy with the button so chooses. But perhaps more significant than the way the hardware has changed is the way that warfare itself has changed; tactics and military structure have altered beyond all recognition compared to the pre-war era, and we must now fight wars whilst surrounded by a political landscape, at least in the west, that does not approve of open conflict. However, next year marks the 100th anniversary of a military innovation that not only represented a massive hardware upgrade at the time, but that has changed almost beyond recognition in the century since and has fundamentally changed the way we fight wars: the use of aeroplanes in warfare.

The skies have always been a platform to be exploited by the cunning military strategist; balloons were frequently used for messaging long before they were able to carry humans and be used for reconnaissance during the early 20th century, and for many years the only way of reliably sending a complicated message over any significant distance was via homing pigeon. It was, therefore, only natural that the Wright brothers had barely touched down after their first flight in ‘Flyer I’ when the first suggestions of a military application for such a technology were being made. However, early attempts at powered flight could not be sustained for very long, and even subsequent improvements failed to produce anything capable of carrying a machine gun.

By the First World War, aircraft had become advanced enough to make controlled, sustained, two-person flight at an appreciable height a reality, and both the Army and Navy were quick to incorporate air divisions into their structures (in the British Armed Forces, these were the Royal Flying Corps and the Royal Naval Air Service respectively). However, these air forces were initially used only for reconnaissance and for ‘spotting’ to help the artillery get their eye in; the atmosphere was quite peaceful so far above the battlefield, and pilots and observers of opposing aircraft would frequently wave to one another during the early years of the war. As time passed and the conflict grew ever bloodier, these exchanges became less friendly; before long observers would carry supplies of bricks into the air with them and attempt to throw them at enemy aircraft, and the Germans even went so far as to develop steel darts that could reportedly split a man in two; whilst almost impossible to aim in a dogfight, these darts were incredibly dangerous for those on the ground.

By 1916 aircraft had grown advanced enough to carry bombs, enabling a (slightly) more precise method of destroying enemy targets than artillery, and before long both sides could equip these bombers with turret-mounted machine guns for the observers to fire at other aircraft; given that the aircraft of the day were basically wire-and-wood cages covered in fabric, these guns could cause vast amounts of damage, and the men within the planes had practically zero protection (and no parachutes either, since the British top brass believed these might encourage cowardice). To further protect their bombers, both sides began to develop fighter aircraft as well: smaller, usually single-man planes with fixed machine guns operated by the pilot (and which used a clever bit of mechanical synchronisation gear to fire through the propeller; earlier attempts at doing this without blowing the propeller to pieces had simply consisted of putting armour plating on the back of the propeller blades, which not infrequently caused bullets to bounce back and hit the pilot). It wasn’t long before these fighters were given more varied orders, ranging from trench strafing to offensive patrols (where they would actively go and look for other aircraft to attack). Perhaps the most dangerous of these objectives was balloon strafing; observation balloons were valuable pieces of reconnaissance equipment, and bringing one down generally required a pilot to get past the large escort of fighters that accompanied it.
Towards the end of the war, the armed forces began to realise just how central to their tactics air warfare had become, and in 1918 the RFC and RNAS were combined to form the Royal Air Force, the first independent air force in the world. The RAF celebrated its inception three weeks later when German air ace Manfred von Richthofen (aka The Red Baron), who had 80 confirmed victories despite frequently flying against superior numbers or hardware, was shot down (although von Richthofen was flying close to the ground at the time in pursuit of an aircraft, and an analysis of the shot that killed him suggests that he was killed by a ground-based anti-aircraft gunner rather than the Canadian fighter pilot credited with downing him; exactly who fired the fatal shot remains a mystery).

By the time the Second World War rolled around, things had changed somewhat; in place of wire-and-fabric biplanes, sleeker metal monoplanes were in use, with more powerful and efficient engines making air combat a faster affair. Air raids could be conducted over far greater distances since more fuel could be carried, and this proved well suited to the style of warfare the war generated; rather than the largely unmoving battle lines of the First World War, the early years of WW2 consisted of countrywide occupation in Europe, whilst the battlegrounds of North Africa and Soviet Russia were dominated by tank warfare and moved far too fluidly for frontline air bases to be safe. Air power thus played a supporting rather than a starring role in those land campaigns; but over western Europe, air warfare reigned supreme.

As the German forces dominated mainland Europe, they launched wave after wave of long-distance bombing campaigns at Britain in an effort to gain air superiority and cripple the Allies’ ability to fight back when they attempted to cross the Channel and invade. However, the British had, unbeknownst to the Germans, perfected their radar technology, and were thus able to use their relatively meagre force of fighters to greatest effect against the German bombing assault. This, combined with some very good planes and flying on the part of the British and an inability to choose the right targets to bomb on the part of the Germans, allowed the Battle of Britain to swing in favour of the Allies and turned the tide of the war in Europe.

In the later years of the war, the Allies turned the tables on a German military crippled by the Russian campaign after the loss at Stalingrad and began their own orchestrated bombing campaign. With the increase in anti-aircraft technology since the First World War, bombers were forced to fly higher than ever before, making it far harder to hit their targets; thus, both sides developed the tactic of ‘carpet bombing’, whereby they would simply load up as big a plane as they could with as many bombs as it could carry and drop them all over an area in the hope of at least one of the bombs hitting the intended target. This imprecise tactic was only moderately successful when it came to the destruction of key military targets, and was responsible for the vast scale of the damage both sides’ bombing campaigns did to cities. In the war in the Pacific, where space on aircraft carriers was at a premium and Lancaster Bombers would have been impractical, both sides stuck with dive bombers instead, but such attacks were very risky and there was still no guarantee of a successful hit. By the end of the war, air power was rising to prominence as possibly the most crucial theatre of combat, but we were reaching the limits of what our hardware was capable of; our propeller-driven, straight-winged fighter aircraft seemed incapable of breaking the sound barrier, and our bombing attacks couldn’t safely hit any target less than a mile wide. Something was clearly going to have to change; and next time, I’ll investigate what did.

Art vs. Science

All intellectual human activity can be divided into one of three categories: the arts, humanities, and sciences (although these terms are not exactly fully inclusive). Art here covers everything from the painted medium to music, everything that we humans do that is intended to be creative and make our world as a whole a more beautiful place to live in. The precise definition of ‘art’ is a major bone of contention among creative types and it’s not exactly clear where the boundary lies in some cases, but here we can categorise everything intended to be artistic as an art form. Science here covers every one of the STEM disciplines: science (physics, biology, chemistry and all the rest in its vast multitude of forms and subgenres), technology, engineering (strictly speaking those two come under the same branch, but technology is too satisfying a word to leave out of any self-respecting acronym) and mathematics. Certain portions of these fields could be argued to be entirely self-fulfilling, and others are considered beautiful by some, but since the two qualities rarely overlap the title of art is never truly appropriate. The humanities are an altogether trickier bunch to consider; on one hand they are, collectively, a set of sciences, since they purport to study how the world we live in behaves and functions. However, this particular set of sciences is deemed separate because it deals less with the fundamental principles of nature than with human systems, and human interactions with the world around them; hence the title ‘humanities’. Fields as diverse as economics and geography are all blanketed under this title, and are in some ways the most interesting of sciences as they are the most subjective and accessible; the principles of the humanities can be, and usually are, encountered on a daily basis, so anyone with a keen mind and an eye for noticing the right things can usually form an opinion on them. And a good thing too, otherwise I would be frequently short of blogging ideas.

Each field has its own proponents, supporters and detractors, and all are quite prepared to defend their chosen field to the hilt. The scientists point to the huge advancements in our understanding of the universe and world around us that have been made in the last century, and link these to the immense breakthroughs in healthcare, infrastructure, technology, manufacturing and general innovation and awesomeness that have so increased our quality of life (and life expectancy) in recent years. And it’s not hard to see why; such advances have permanently changed the face of our earth (both for better and worse), and there is a truly vast body of evidence supporting the idea that these innovations have provided the greatest force for making our world a better place in recent times. The artists provide the counterpoint to this by saying that living longer, healthier lives with more stuff in them is all well and good, but without art and creativity there is no advantage to this better life, for there is no way for us to enjoy it. They can point to the developments in film, television, music and design, all the ideas of scientists and engineers tuned to perfection by artists of each field, and even the development of more classical artistic mediums such as poetry or dance, as key features of the 20th century that enabled us to enjoy our lives more than ever before. The humanities have advanced too during recent history, but their effects are far more subtle; innovative strategies in economics, new historical discoveries and perspectives and new analyses of the way we interact with our world have all come, and many have made news, but their effects tend to only be felt in the spheres of influence they directly concern- nobody remembers how a new use of critical path analysis made J. Bloggs Ltd. use materials 29% more efficiently (yes, I know CPA is technically mathematics; deal with it). As such, proponents of the humanities tend to be less vocal than those in other fields, although this may have something to do with the fact that the people who go into humanities have a tendency to be more… normal than the kind of introverted nerd/suicidally artistic/stereotypical-in-some-other-way characters who would go into the other two fields.

This bickering between arts & sciences as to the worthiness/beauty/parentage of the other field has led to something of a divide between them; some commentators have spoken of the ‘two cultures’ of arts and sciences, leaving us with a sect of scientists who find it impossible to appreciate the value of art and beauty, thinking it almost irrelevant compared to what their field aims to achieve (to their loss, in my opinion). I’m not sure that this picture is entirely true; what may be more so, however, is the other end of the stick, those artistic figures who dominate our media who simply cannot understand science beyond GCSE level, if that. It is true that quite a lot of modern science is very, very complex in the details, but Albert Einstein is famously supposed to have said that if a scientific principle cannot be explained to a ten-year-old then it is almost certainly wrong, and I tend to agree with him. Even the theory behind the existence of the Higgs Boson, right at the cutting edge of modern physics, can be explained by an analogy of a room full of fans and celebrities. Oh, look it up; I don’t want to wander off topic here.

The truth is, of course, that no field can sustain a world without the others; a world devoid of STEM would die out in a matter of months, a world devoid of humanities would be hideously inefficient and appear monumentally stupid, and a world devoid of art would be the most incomprehensibly dull place imaginable. Not only that, but all three working in harmony will invariably produce the best results, as Leonardo da Vinci- master engineer, inventor, craftsman and creator of some of the most famous paintings of all time- so ably demonstrated. As such, any argument between fields as to which is ‘the best’ or ‘the most worthy’ will simply never be won, and will only ever end up a futile exercise. The world is an amazing place, but the real source of that awesomeness is the diversity it contains, both in terms of nature and in terms of people. The arts and sciences are not at war, nor should they ever be; for in tandem they can achieve so much more.

Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much faith people are capable of placing in a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we must compare them to what we think of the brand and the label, what reputation this product has and what it is particularly good at, which we think most suits our social standing and how others will judge our use of it. And a good thing too, from many companies’ perspective, for otherwise the amount of business they do would be slashed. There are many companies whose success can be almost entirely put down to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, to typecast Apple as a brand-built company is harsh; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to establish a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that- I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores anyway) that once the rest of the market gets hold of an idea, Apple’s version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely due to iTunes redefining the music industry before everyone else and remaining competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products that do a few things really, really well and are basically capable of doing everything else. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them, a nice company with a good history but one that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the most valuable non-state-owned company in the world. Bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company, worth more than every single one (excluding state-owned companies such as Saudi Aramco, which is estimated to be worth around 3 trillion dollars thanks to its control of Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first- the Apple II was arguably the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first smartphone with true mass appeal (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first commercially successful tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these products are usually rubbish and make the Apple version shine by comparison. This also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off it and have moved on to something else; innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From the very beginning, Apple fans have cast themselves as the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that millions of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and the longevity of its delivery. This is an image that has been present in their advertising from its earliest days, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

In a hole in the ground there lived a hobbit…

I read a lot; I have done since I was a kid. Brian Jacques, JK Rowling, Caroline Lawrence and dozens of other authors’ work sped through my young mind, throwing off ideas, philosophies and any other random stuff I found interesting in all directions. However, as any committed reader will tell you, after a while spent flicking through any genre all the ‘low-hanging fruit’, the good books everyone’s heard of, will soon be absorbed, and it is often quite a task to find reliable sources of good reading material. It was partly for this reason that I, some years ago, turned to the fantasy genre because, like it or loathe it, it is impossible to deny the sheer volume of stuff, and good stuff too, that is there. Mountains of books have been written for it, many of which are truly huge (I refer anyone who doubts this fact to volumes 11 and 12 of Robert Jordan’s ‘Wheel of Time’, which I have yet to pluck up the courage to actually read), and the presence of so many different subgenres (who can compare George RR Martin, creator of A Game of Thrones, with Terry Pratchett, of Discworld fame?) and different ideas gives it a nice level of innovation within a relatively safe, predictable sphere of existence.

This sheer volume of work does create one or two issues, most notably the fact that it can often be hard to consult with other fans about ‘epic sagas’ you picked up in the library that they may never have even heard of (hands up how many of you have heard of Raymond E Feist, who really got me started in this genre)- there’s just so much stuff, and not much of it can be said to be standard reading material for fantasy fans. However, there is one point of consistency, one author everyone’s read, and who can always be used as a reliable, if high, benchmark. I speak, of course, of the work of JRR Tolkien.

As has been well documented, John Ronald Reuel Tolkien was not an author by trade or any especial inclination; he was an academic, a professor first of Anglo-Saxon at Pembroke College and later of English Language & Literature at Merton College, Oxford, for 34 years no less. He first rose to real academic prominence in 1936, when he gave (and later published) a seminal lecture entitled Beowulf: The Monsters and the Critics. Beowulf is one of the oldest surviving works of English literature, an Anglo-Saxon epic poem from around the 8th century AD detailing the adventures of a warrior/king named Beowulf, and Tolkien’s lecture shaped much of the subsequent thinking about it as a work of literature.

However, there was something about Beowulf that was desperately sad to Tolkien; it was just about the only surviving piece of Old English mythology, and certainly the only one with any degree of public knowledge. Tolkien was a keen student of Germanic mythology and that of other nations, and it always pained him that his home nation had no such traditional mythology to call upon, all the Saxon stories having been effectively wiped out with the coming of the Normans in 1066. Even our most famous ‘myths’, those of King Arthur, came from a couple of mentions in early medieval texts, and were only formalised centuries later- Sir Thomas Malory’s Le Morte d’Arthur, the first full set of the Arthurian legends, wasn’t published until 1485, and there is plenty of evidence that he made most of it up. This never struck Tolkien as being how a myth should be: ancient, passed down from father to son over innumerable generations until it became so ingrained as to be considered true. Tolkien’s response to what he saw as a lamentable gap in our heritage was decidedly pragmatic- he began building his own mythological world.

Since he was a linguistic scholar, Tolkien began by working with what he knew: languages. His primary efforts were concerned with Elvish, for which he invented his own alphabet and grammar, eventually developing it into as deep and fully-fleshed a tongue as you could imagine. He then began experimenting with writing mythology based around the language- building a world of the Dark Ages and before that was as special, fantastical and magical as a story should be to become a fully-fledged myth (you will notice that at the start of The Lord Of The Rings, Tolkien refers to how we don’t see much of hobbits any more, implying that his world was set in the past rather than in an alternate universe).

His first work in this field was the Quenta Silmarillion, a title that translates (from Elvish) as “the Tale of the Silmarils”. It is a collection of stories and legends supposedly originating from the First Age of his world, although compiled by an Englishman during the Dark Ages from tales edited during the Fourth Age, after the passing of the elves. Tolkien started this work multiple times without ever finishing it, and it wasn’t until after his death that his son Christopher published The Silmarillion as a finished article.

However, Tolkien also had a family with young children, and took delight in writing stories for them. Every Christmas (he was, incidentally, a devout Catholic) he wrote them letters from Father Christmas that took the form of short stories (again, not published until after his death), and he wrote numerous other tales for them too. A few of these, such as The Adventures of Tom Bombadil, either drew inspiration from or became part of his world (or ‘legendarium’, as it is also known), but he never expected any of them to become popular. And they weren’t- until he, bored out of his mind marking exam papers one day in around 1930, found a blank back page and began writing another, longer story for them, beginning with the immortal line: “In a hole in the ground there lived a hobbit.”

This work, which would later become The Hobbit (or There and Back Again), was set in the Third Age of his legendarium and is soon to be made into a series of three films (don’t ask me how that works, given that it’s shorter than any one of the volumes of The Lord Of The Rings, each of which got a film to itself, but whatever). Like his other stories, he never intended it to be much more than a diverting adventure for his children, and for four years after its completion in 1932 it was just that. However, Tolkien was a generous soul who would frequently lend his stories to friends, and one of those, a student named Elaine Griffiths, showed it to another friend called Susan Dagnall. Dagnall worked at the publishing company Allen & Unwin, and she was so impressed upon reading it that she showed it to Stanley Unwin. Unwin lent the book to his son Rayner to review (this was his way of earning pocket money), who described it as ‘suitable for children between the ages of 6 and 12’ (kids were clearly a lot more formal and eloquent where he grew up). Unwin published the book, and everyone loved it. It received many glowing reviews in an almost universally positive critical reception, and one of the first came from Tolkien’s friend CS Lewis in The Times, who wrote:

The truth is that in this book a number of good things, never before united, have come together: a fund of humour, an understanding of children, and a happy fusion of the scholar’s with the poet’s grasp of mythology… The professor has the air of inventing nothing. He has studied trolls and dragons at first hand and describes them with that fidelity that is worth oceans of glib “originality.”

In many ways, that quote captures all that was great about Tolkien’s writing: an almost childish, gleeful imagination combined with the brute seriousness of his academic work, which made it feel like a very, very real fantasy world. However, this was most definitely not the end of JRR Tolkien, and since I am rapidly going over length, the rest of the story will have to wait until next time…

The Consolidation of a World Power

I left my last post on the history of music at around 1969, which for many commentators marks the end of the era of the birth of rock music. The 60s had been a decade of a hundred stories running alongside one another in the music world, each with their own part to play in the vast tapestry of innovation. Jimi Hendrix had risen from an obscure career playing the blues circuit in New York to being an international star, and one moreover who revolutionised what the music world thought a guitar could and should do- even before he became an icon of the psychedelic hippie music world, his hard & heavy guitar leads, in stark contrast to the tones of early Beatles and 60s pop music, had founded rock music’s harder edge. He in turn had borrowed from earlier pioneers: Jeff Beck, Eric Clapton, The Who (perhaps the first true rock band, given their wild onstage antics and heavy guitar & drumkit-based sound) and Bob Dylan (the godfather of folk rock and of the blues-style guitar playing that rock turned into its harder sound), each of whom had their own special stories. However, there was a reason I focused on the story of the hippie movement in my last post- the story of a counter-culture precipitating a musical revolution was only in its first cycle, and would be repeated several times by the end of the century.

To some music nerds, however, Hendrix’s death in 1970, aged just 27 (and after just four years of fame), thanks to an accidental drug overdose marked the beginning of the end. The god of the guitar was dead, the beautiful voice of Janis Joplin was dead, Syd Barrett had left Pink Floyd, another founding band of the psychedelic rock movement, and was being driven utterly insane by LSD (although he thankfully later managed to pull himself out of the self-destructive cycle and lived until 2006), and Floyd’s American counterparts The Velvet Underground broke up just a few years later. Hell, even The Beatles went in 1970.

But that didn’t mean it was the end- far from it. Rock music might have lost some of its guiding lights, but it still carried on regardless- Pink Floyd, The Who, Led Zeppelin and The Rolling Stones, the four biggest British bands of the time, continued to play an active role in the worldwide music scene, Zeppelin and The Who creating a huge fan rivalry. David Bowie was also continuing to show the world the mental ideas hiding beneath his endlessly crisp accent, and the rock world continued to swing along.

However, it was also during this time that a key division began to make itself firmly felt. As rock developed its harder sound during the 1960s, other bands and artists had followed The Beatles’ early direction by playing softer, more lyrical and acoustic sounds, music that was designed to be easy on the ear and played to and for mass appeal. This quickly got itself labelled ‘pop music’ (short for popular), and just as quickly that became something of a term of abuse among serious rock aficionados. Since its conception, pop has always been more the commercial enterprise, motivated less by a sense of artistic expression and experimentation and more by the promise of fame and fortune, which many consider a rather shallow ambition. But, no matter what the age, pop music has always been there, and more often than not has been topping the charts- people often talk about some age in the long distant past as being the ‘best time for music’ before returning to lambast the kind of generic, commercial, consumer-generated pop that no self-respecting musician could bring himself to genuinely enjoy and claiming that ‘most music today is rubbish’. They fail to remember, of course, just how much of the same kind of stuff was around in their chosen ‘golden age’- stuff that the world in general has since chosen to forget.

Nonetheless, this frustration with generic pop has frequently been a driving force for the generation of new forms of rock, in an attempt to ‘break the mould’. In the early seventies, for example, the rock world had come to be described as tame or sterile, with relatively acoustic acts beginning to claim rock status. The Rolling Stones and company weren’t new any more, there was a sense of lacking innovation, and a feeling of musical frustration began to build. This frustration was further fuelled by the ending of the 25-year post-war economic boom, and the result, musically speaking, was punk rock. In the UK, it was The Sex Pistols and The Clash, in the USA The Ramones and similar, most of whom were ‘garage bands’ with little skill (Johnny Rotten, lead singer of The Sex Pistols, has frequently admitted that he couldn’t sing in the slightest, and there was a running joke at the time on the theme of ‘Here’s three chords. Now go start a band’) but the requisite emotion, aggression and fresh thinking to make them a musical revolution. Also developed a few years earlier was heavy metal, perhaps the only rock genre to have never had a clearly defined ‘era’ despite having been there, hiding around the back and on the sidelines somewhere, for the past 40 or so years. Its development was partly fuelled by the same kind of musical frustration that sparked punk, but was also the result of a bizarre industrial accident. Working at a Birmingham metal factory in 1965, aged 17, future Black Sabbath guitarist Tony Iommi (the band, initially known as The Polka Tulk Blues Band, would not form until a few years later) lost the ends of the middle and ring fingers of his right hand. This was a devastating blow for a young guitarist, but Iommi compensated by easing the tension on his strings and developing two thimbles to cover his finger ends. By 1969, his string slackening had led him to detune his guitar down a minor third from E to C#, and to include slapping the strings with his fingers as part of his performance. This detuning, matched by the band’s bassist Geezer Butler, was combined with an idea formulated whilst watching the queues for the horror movie Black Sabbath- ‘if people are prepared to pay money to be scared, then why don’t we write scary music?’- to create the incredibly heavy, aggressive, driving and slightly ‘out of tune’ (to conventional ears) sound of heavy metal, which was further popularised by the likes of Judas Priest, Deep Purple and Mötley Crüe.

Over the next few years, punk would slowly fall out of fashion, evolving into harder variations such as hardcore (which never penetrated the public consciousness but would make itself felt some years later- read on to find out how) and leaving other bands to develop it into post-punk; a pattern repeated with other genres down the decades. The 1980s was the first decade to see hip hop come to the fore, partly in response to the newly-arrived MTV signalling the onward march of electronic, manufactured pop. Hip hop was specifically targeted at a more underground, urban circuit than these clean, commercial sounds catered for: music based almost entirely around a beat rather than a melody, allowing the songs to be messed around with, looped, scratched and repeated all for the sake of effect and atmosphere building. Hip hop borrowed heavily from funk, disco and party music, and spawned rap, a new definition of the word DJ and, eventually, even dubstep. The decade also saw rock music really start to ‘get large’, with bands such as Queen and U2 filling football stadiums, paving the way for the sheer scale of modern rock acts and music festivals, and culminating, in 1985, with the huge global event that was Live Aid- not only was this a huge musical landmark, but it fundamentally changed what it meant to be a musical celebrity, and greatly influenced western attitudes to the third world.

By the late 80s and early 90s the business of counter-culture was at it again, this time with anger directed at a range of subjects, from the manufactured tones of MTV to the boring, amelodic repetition of rap and the controversial policies of the Reagan administration, all of which fed a vast American ‘disaffected youth’ culture. This mood partly formulated itself into the thoughtful lyrics and iconic sounds of bands such as REM, but in other areas found its expression and anger in the remnants of punk. Kurt Cobain in particular drew heavy inspiration from ‘hardcore’ bands (see, I said they’d show up again) such as Black Flag, and the huge popularity of Nirvana’s ‘Smells Like Teen Spirit’ thrust grunge, along with many of the other genres blanketed under the title ‘alternative rock’, into the public consciousness (one of my earlier posts dealt with this, in some ways tragic, rise and fall in more detail). Once the grunge craze died down, it was once again left to other bands to formulate a new sound and scene out of the remnants of the genre, Foo Fighters being the most prominent post-grunge band around today. In the UK things went in a slightly different direction- this time the resentment was reserved more for the staged nature of Top of the Pops and the like, with The Smiths leading the way into what would soon become indie rock or Britpop. This wave of British bands, such as Oasis, Blur and Suede, pushed back the influx of grunge and developed a prominence for the genre that made the term ‘indie’ seem a bit ironic.

Nowadays, there are so many different great bands, genres and styles pushing at the forefront of the musical world that it is difficult to describe what is the defining genre of our current era. Music is a bigger business than it has ever been before, both in terms of commercial pop sound and the hard rock acts that dominate festivals such as Download and Reading, with every band there is and has ever been forming a part, be it a thread or a whole figure, of the vast musical tapestry that the last century has birthed. It is almost amusing to think that, whilst there is so much that people could and do complain about in our modern world, it’s very hard to take it out on a music world that is so vast and able to cater for every taste. It’s almost hard to see where the next counter-culture will come from, or how their musical preferences will drive the world forward once again. Ah well, we’ll just have to wait and see…

NMEvolution

Music has been called by some the greatest thing the human race has ever done, and at its best it is undoubtedly a profound expression of emotion more poetic than anything Shakespeare ever wrote. True, done badly it can sound like a trapped cat in a box of staplers falling down a staircase, but let’s not get hung up on details here- music is awesome.

However, music as we know it has only really existed for around a century or so, and many of the developments in music’s history that have shaped it into the tour de force that it is in modern culture are in direct parallel to human history. As such, the history of our development as a race and the development of music run closely alongside one another, so I thought I might attempt a set of edited highlights of the former (well, western history at least) by way of an exploration of the latter.

Exactly how and when the various instruments as we know them were invented and developed into what they currently are is largely irrelevant (mostly since I don’t actually know and don’t have the time to research all of them), but historically they fell into one of two classes. The first could be loosely dubbed ‘noble’ instruments- stuff like the piano, clarinet or cello, which were (and are) hugely expensive to make, required a significant level of skill to do so, and were generally played for and by the rich upper classes in vast orchestras, playing centuries-old music written by the very few men with both the riches, social status and talent to compose it. On the other hand, we have the less historically significant, but just as important, ‘common’ instruments, such as the recorder and the ancestors of the acoustic guitar. These were a lot cheaper to make and thus more available to (although certainly far from widespread among) the poorer echelons of society, and it was on these instruments that tunes were passed down from generation to generation, accompanying traditional folk dances and the like; the kind of people who played such instruments very rarely had the time to spare to really write anything new for them, and certainly stood no chance of making a living out of them. And, for many centuries, that was it- what you played and what you listened to, if you did so at all, depended on who you were born as.

However, during the great socioeconomic upheaval and levelling that accompanied the 19th century industrial revolution, music began to penetrate society in new ways. The growing middle and upper-middle classes quickly adopted the piano as a respectable ‘front room’ instrument for their daughters to learn, and sheet music was rapidly becoming both available and cheap for the masses. As such, music began to become an accessible activity for far larger swathes of the population and concert attendances swelled. This was the Romantic era of music composition, with the likes of Chopin, Mendelssohn and Brahms rising to prominence, and the size of an orchestra grew considerably to its modern size of four thousand violinists, two oboes and a bored drummer (I may be a little out in my numbers here) as they sought to add some new experimentation to their music. This experimentation with classical orchestral forms was continued through the turn of the century by a succession of orchestral composers, but this period also saw music head in a new and violently different direction; jazz.

Jazz was the quintessential product of the United States’ famous motto ‘E Pluribus Unum’ (From Many, One), being as it was the result of a mixing of immigrant US cultures. Jazz originated amongst America’s black community, many of whom were descendants of imported slaves or even former slaves themselves, and was the result of traditional African music blending with that of their forcibly-adopted land. Whilst many black people were heavily discriminated against when it came to finding work, they found they could forge a living in the entertainment industry, in seedier venues like bars and brothels. First finding its feet in the irregular, flowing rhythms of ragtime music, the music of the deep south moved onto the more discordant patterns of blues in the early 20th century before finally incorporating a swinging, syncopated rhythm and an innovative sentiment of improvisation to invent jazz proper.

Jazz quickly spread like wildfire across the underground performing circuit, but it wouldn’t force its way into popular culture until the introduction of prohibition in the USA. From 1920 all the way up until the Presidency of Franklin D Roosevelt (whose repeal of the ban is a story in and of itself) the US government banned the manufacture and sale of alcohol, which (as was to be expected, in all honesty) simply forced the practice underground. Dozens of illegal speakeasies (venues of drinking, entertainment and prostitution usually run by the mob) sprang up in every district of every major American city, and they were frequented by everyone from the poorest street sweeper to the police officers who were supposed to be closing them down. And in these venues, jazz flourished. Suddenly, everyone knew about jazz- it was a fresh, new sound to everyone’s ears, something that stuck in the head and, because of its ‘common’, underground connotations, quickly became the music of the people. Jazz musicians such as Louis Armstrong (a true pioneer of the genre) became the first celebrity musicians, and the way the music’s feel resonated with the happy, prosperous mood surrounding the economic good times of the 1920s led that decade to be dubbed ‘the Jazz Age’.

Countless things allowed jazz and its successors to spread around the world- the invention of the gramophone further enhanced public access to music, as did the new cultural phenomenon of the cinema and even the Second World War, which allowed for truly international spread. By the end of the war, jazz, soul, blues, R&B and all their other derivatives had spread from their mainly deep south origins across the globe, blazing a trail for all other forms of popular music to follow in their wake. And, come the 50s, they did so in truly spectacular style… but I think that’ll have to wait until next time.

Big Pharma

The pharmaceutical industry is (some might say amazingly) the second largest on the planet, worth over 600 billion dollars in sales every year and acting as the force behind the cutting edge of research that continues to push medicine onwards as a field- and while we may never develop a cure for everything, you can be damn sure that the modern medical world will have given it a good shot. In fact the pharmaceutical industry is in quite an unusual position in this regard, forming the only part of the medical public service, and indeed of any major public service, that is privatised the world over.

The reason for this is quite simply one of practicality; the sheer amount of startup capital required to develop even one new drug, let alone form a public service out of this R&D, runs into the hundreds of millions of dollars, something that no government would be willing to set aside for so small an immediate gain. All modern companies in the ‘big pharma’ bracket were formed many decades ago on the back of a surprise cheap discovery or suchlike, and are now so big that they are the only ones capable of fronting such a large initial investment. There are a few organisations (the National Institutes of Health, the Royal Society, universities) who conduct such research away from the private sector, but they are small in number and are also very old institutions.

Many people, in a slightly different field, have voiced the opinion that people whose primary concern is profit are those we should least be putting in charge of our healthcare and wellbeing (although I’m not about to get into that argument now), and a similar argument has been raised concerning private pharmaceutical companies. However, that is not to say that a profit-driven approach is necessarily a bad thing for medicine, for without it many of the ‘minor’ drugs that have greatly improved the overall healthcare environment would not exist. I, for example, suffer from irritable bowel syndrome, a far from life-threatening but nonetheless annoying and inconvenient condition that has been greatly helped by a drug called mebeverine hydrochloride. If all medicine focused on the greater good of ‘solving’ life-threatening illnesses, a potentially futile task anyway, this drug would never have been developed and I would be even more resentful of my fragile digestive system. In the western world, the profit motive makes a lot of sense when the aim is to make life just that bit more comfortable. Oh, and they also make the drugs that, y’know, save your life every time you’re in hospital.

Now, normally at this point in any ‘balanced argument/opinion piece’ thing on this blog, I try to come up with another point to keep each side of the argument at a roughly equal 500 words. However, this time I’m going to break that rule and jump straight into the reverse argument. Why? Because I can genuinely think of no more good stuff to say about big pharma.

If I may just digress a little: in the UK & USA (I think, anyway) a patent for a drug or medicine gives its owner roughly 10 years of effective market exclusivity (the patent itself lasts 20 years from filing, but much of that is eaten up by development and trials), on the basis that these little capsules can be very valuable things and it wouldn’t do to let people hang onto the sole rights to make them for ages. This means that just about every really vital lifesaving drug in medicinal use today, given the time it takes for an experimental treatment to become commonplace, now exists outside its patent and is manufactured by either the lowest bidder or, in a surprisingly high number of cases, the health service itself (the UK, for instance, is currently trying to become self-sufficient in morphine poppies to prevent it from having to import from Afghanistan or wherever), so these costs are kept relatively low by market forces. This means that during their 10-year grace period, drugs companies will do absolutely everything they can to extort cash out of their product; when the antihistamine drug loratadine (another drug I use relatively regularly, it being used to combat hay fever and other allergies) was passing through the last two years of its patent, its market price was quadrupled by the company making it; they had been trying to get the market hooked on using it before jacking up the prices in order to wring out as much cash as possible. This behaviour is not untypical for a huge number of drugs, many of which deal with serious illness rather than being semi-irrelevant cures for the snuffles.

So far, so much normal corporate behaviour. Reaching this point, we must now turn to some practices of the big pharma industry that would make Rupert Murdoch think twice. Drugs companies, for example, have a reputation for setting up price-fixing networks, many of which have been worth several hundred million dollars. One, featuring what were technically food supplement businesses (subsidiaries of the pharmaceutical industry), later set the world record for the largest fines levied in criminal history- a record that still stands. And all this despite the fact that the cost of physically producing the drugs themselves rarely exceeds a couple of pence per capsule, hundreds of times less than their asking price.

“Oh, but they need to make heavy profits because of the cost of R&D to make all their new drugs”. Good point, well made and entirely true, and it would also be a valid one if the numbers behind it stacked up. In the USA, the National Institutes of Health last year had a total budget of $23 billion, whilst all the drug companies in the US collectively spent $32 billion on R&D. This might seem at first glance like the private sector has won this particular moral battle; but remember that the American drug industry generated $289 billion in 2006, and accounting for inflation (and the fact that pharmaceutical profits tend to stay high despite the current economic situation affecting other industries) we can approximate that only around 10% of company turnover is, on average, spent on R&D. Even accounting for manufacturing costs, salaries and such, a very healthy chunk of that turnover goes into profit, making the pharmaceutical industry one of the most profitable on the planet.
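To show where that ‘around 10%’ figure comes from, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration) using only the numbers quoted above- the $32 billion collective R&D spend and the $289 billion of 2006 revenue. Treat the result as a rough approximation rather than an audited statistic.

    # Rough check of the R&D share quoted above, using the figures
    # cited in this post; illustrative only, not an audited number.
    rd_spend_bn = 32.0   # collective US drug company R&D spend, in $bn
    revenue_bn = 289.0   # American drug industry revenue in 2006, in $bn

    rd_share = rd_spend_bn / revenue_bn
    print(f"R&D as a share of turnover: {rd_share:.1%}")  # prints roughly 11.1%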

I know that health is an industry, I know money must be made, I know it’s all necessary for innovation. I also know that I promised not to go into my Views here. But a drug is not like an iPhone, or a pair of designer jeans; it’s the health of millions at stake, the lives of billions, and the quality of life of the whole world. It’s not something to be played around with and treated like some generic commodity with no value beyond a number. Profits might need to be made, but nobody said there had to be 12 figures of them.

Copyright Quirks

This post is set to follow on from my earlier one on the subject of copyright law and its origins. However, just understanding why copyright law exists does not automatically bring with it an understanding of the various complications, quirks and intricacies that get people quite so angry about it- so today I want to explore a few of those features, and explain why and how they came to be.

For starters, it is not in the public interest for material to stay copyrighted forever, for the simple reason that stuff is always more valuable when it is freely available in the public domain, as it is then accessible to the majority. If we consider a technological innovation or invention, restricting its production solely to the inventor leaves them free to charge pretty much what they like, since they have no competition to worry about. Not only does this give them an undesirable monopoly, it also restricts that invention from being best used on a large scale, particularly if it is something like a drug or medicine. Therefore, whilst a copyright obviously has to exist in order to stimulate the creation of new stuff, allowing it to last forever is just asking for trouble, which is why copyrights generally have expiry times. The length of a copyright’s life varies depending on the product- for authors it generally lasts for their lifetime plus a period of around 70 years or so to allow their family to profit from it (expired copyright is the reason that old books can be bought for next to nothing in digital form these days, as they cost nothing to produce). For physical products and, strangely, music, the grace period is generally both fixed and shorter (and dependent on the country concerned), and for drugs and pharmaceuticals the equivalent protection (strictly speaking a patent rather than a copyright) effectively lasts just ten years (drugs companies are corrupt and profit-obsessed enough without giving them too long to rake in the cash).

Then, we encounter the fact that a copyright also represents a valuable commodity, and thus something that can potentially be put up for sale. You might think that allowing this sort of thing to go on is wrong and is only going to cause problems, but it is often necessary. Consider somebody who owns the rights to a book and wants someone to make a film out of it, partly because they may be up for a cut of the profits and will gain money from the sale of their rights, but also because it represents a massive advertisement for their product. They, therefore, want to be able to sell part of the whole ‘right to publish’ idea to a film studio who can do the job for them, and any law prohibiting this would just piss everybody off and prevent a good film from potentially being made. The same thing could apply to a struggling company who owns some valuable copyright to a product; the ability to sell it not only offers them the opportunity to make a bit of money to cover their losses, but also means that the product is more likely to stay on the free market and continue being produced by whoever bought the rights. It is for this reason that it is legal for copyright to be traded between various different people or groups to varying degrees, although the law (in the US, at least) does allow the original owner to cancel any permanent transfer after 35 years if they want to do something with the property.

And what about the issue of who is responsible for a work at all? One might say that it is simply the work of the author or inventor concerned, but things are often not that simple. For one thing, innovations are often the result of work by a team of people, and to restrict the copyright to any one of them would surely be unfair. For another, what if, say, the discovery of a new medical treatment came about because the scientist responsible was paid to do so, and given all the necessary equipment and personnel, by a company? Without corporate support, the discovery could never have been made, so surely that company is just as much legally entitled to the copyright as the individual responsible? This is legally known as ‘work made for hire’, and the copyright in this scenario is the property of the company rather than the individual, lasting for a fixed period (in the US, 95 years from publication) since the company involved is unlikely to ‘die’ within quite the same predictable lifespan as a human being, and is unlikely to have any relatives for the copyright to benefit afterwards. It is for this reason also that companies, rather than just people, are allowed to hold copyright.

All of these quirks of law are undoubtedly necessary to try and be at least relatively fair to all concerned, but they are responsible for most of the arguments currently put about pertaining to ‘why copyright law is %&*$ed up’. The correct length of a copyright for various different stuff is always up for debate, whether it be musicians who want them to get longer (Paul McCartney made some complaints about this a few years ago), critics who want corporate ones to get shorter, or morons who want to get rid of them altogether (they generally mean well, but anarchistic principles today don’t either a) work very well or b) attract support likely to get them taken seriously). The sale of copyright angers a lot of people, particularly film critics- sales of the film rights for stuff like comic book characters generally include a clause requiring the studio to give them back if they don’t do anything with them for a few years. This has resulted in a lot of very badly-made films over the years, which continue to be made solely because the relevant studio doesn’t want to give back for free a valuable commodity that still might have a few thousand dollars to be squeezed out of it (basically, blame copyright law for the new Spider-Man film). The fact that corporations and individuals can both have a right to the ownership of a product (and even the idea that a company can claim responsibility for the creation of something) has resulted in countless massive lawsuits over the years, almost invariably won by the biggest publishing company, and has created an image of game developers/musicians/artists being downtrodden by big business that is often used as justification by internet pirates. Not that the image is inaccurate or anything, but very few companies appear to realise that this is why there is such an undercurrent of sympathy for piracy on the internet, and why their attempts to attack it through law have met with quite such a vitriolic response (as well as being poorly-worded and not properly thought out).

So… yeah, that’s pretty much copyright, or at least why it exists and why people get annoyed about it. There are a lot of features of copyright law that people don’t like, and I’d be the last to say that it couldn’t do with a bit of bringing up to date- but it’s all there for a reason, and it’s not just there because suit-clad stereotypes are lighting hundred-dollar cigars off the arse of the rest of us. So please, when arguing about it, don’t suggest that anything should just be scrapped without thinking about why it’s there in the first place.

A Brief History of Copyright

Yeah, sorry to be returning to this topic yet again; I am perfectly aware that I am probably going to be repeating an awful lot of stuff that either a) I’ve said already or b) you already know. Nonetheless, having spent a frustrating amount of time in recent weeks getting very annoyed at clever people saying stupid things, I feel the need to inform the world, if only to satisfy my own simmering anger at something really not worth getting angry about. So:

Over the past year or so, the rise of a whole host of FLLAs (Four Letter Legal Acronyms) from SOPA to ACTA has, as I have previously documented, sent the internet and the world at large into paroxysms of mayhem at the very idea that Google might break and/or they would have to pay to watch the latest Marvel film. Naturally, these laws also provoked a lot of debate, ranging in intelligence from the genuinely intellectual to the average denizen of the web, on the subject of copyright and copyright law. I personally think that the best way to understand anything is to try and understand exactly why and how it came to exist in the first place, so today I present a historical analysis of copyright law and how it came into being.

Let us travel back in time, back to our stereotypical club-wielding tribe of stone age humans. Back then, the leader not only controlled and led the tribe, but ensured that every facet of it worked to increase his and everyone else’s chance of survival, and of ensuring that the next meal would be coming along. In short, what was good for the tribe was good for the people in it. If anyone came up with a new idea or technological innovation, such as a shield, this design would be appropriated and used for the good of the tribe. You worked for the tribe and, in return, the tribe gave you protection, help gathering food and such, and, through your collective efforts, you stayed alive. Everybody wins.

However, over time the tribes began to get bigger. One tribe would conquer its neighbours, gaining more power and thus enabling it to take on bigger, more powerful tribes and absorb them too. Gradually, territories, nations and empires formed, and what was once a small group in which everyone knew everyone else became a far larger organisation. The problem as things get bigger is that what’s good for the country no longer necessarily translates into what’s good for the individual. As a tribe gets larger, the individual becomes more independent of the motions of his leader, to the point at which the knowledge that you have helped the security of your tribe bears no direct connection to the availability of your next meal- especially if the tribe adopts a capitalist model of ‘get yer own food’ (as opposed to a more communist one of ‘hunters pool your resources and share between everyone’, as is common in a very small-scale situation where it is easy to organise). In this scenario, sharing an innovation for ‘the good of the tribe’ has far less of a tangible benefit for the individual.

Historically, this rarely proved to be much of a problem- the only people with the time and resources to invest in discovering or producing something new were the church, who generally shared between themselves knowledge that would have been useless to the illiterate majority anyway, and those working for the monarchy or nobility, who were the bosses in any case. However, with the invention of the printing press in the middle of the 15th century, this all changed. Public literacy was on the up, and the press now meant that anyone (well, anyone rich enough to afford the printers’ fees) could publish books and information on a grand scale. Whilst previously the copying of a book had required many man-hours of labour from skilled scribes, who were rare, expensive and carefully controlled, now the process was quick, easy and widely available. The impact of the printing press was made all the greater by the social changes of the few hundred years between the Renaissance and today; the establishment of a less feudal, more merit-based social system, with proper professions springing up as opposed to general peasantry, meant that more people had the money to afford such publishing, preventing the use of the press from being restricted solely to the nobility.

What all this meant was that more and more normal (at least, relatively normal) people could begin contributing ideas to society- but they weren’t about to give them up to their ruler ‘for the good of the tribe’. They wanted payment: compensation for their work, a financial acknowledgement of the hours they’d put in to try and make the world a better place, and an encouragement for others to follow in their footsteps. So they sold their work, as was their due. However, selling a book, which basically only contains information, is not like selling something physical, like food. All the value is contained in the words, not the paper, meaning that somebody else with access to a printing press could also make money from the work you put in by running off copies of your book on their machine, profiting from your labour. This could significantly cut or even (if the other salesman is rich and can afford to undercut your prices) nullify any profits you stand to make from the publication of your work, discouraging you from putting the work in in the first place.

Now, even the most draconian of governments can recognise that citizens who produce material which could not only benefit the nation’s happiness but also potentially have great material use are a valuable resource, and that the government should be doing what it can to promote the production of that material, if only to save itself the large investment of time and resources that producing it directly would require. It therefore makes sense to encourage the production of this material by ensuring that people have a financial incentive to do it. This must involve protecting them from touts attempting to copy their work, and hence we arrive at the principle of copyright: that a person responsible for the creation of a work of art, literature, film or music, or who is responsible for some form of technological innovation, should have legal control over the release & sale of that work for at least a set period of time. And here, as I will explain next time, things start to get complicated…