An Opera Possessed

My last post left the story of JRR Tolkien immediately after the writing of his first bestseller: the rather charming, lighthearted, almost fairy-story of a tale that was The Hobbit. This was a major success, and not just among the ‘children aged between 6 and 12’ demographic identified by young Rayner Unwin; adults lapped up Tolkien’s work too, and his publishers Allen & Unwin were positively rubbing their hands in glee. Naturally, they requested a sequel, a request to which Tolkien’s attitude appears to have been along the lines of ‘challenge accepted’.

Even for someone holding down the rigours of another job, and even accounting for the phenomenal length of the finished product, writing a book is a process that takes a few months for a professional writer (Dame Barbara Cartland once released 25 books in the space of a year, but that’s another story), and perhaps a year or two for an amateur like Tolkien. He started writing the book in December 1937, and it was finally published 18 years later, in 1955.

This was partly a reflection of the difficulties Tolkien had in publishing his work (more on that later), but it also reflects the measured, meticulous and very serious approach Tolkien took to his writing. At least three times he started his story from scratch, each time going in a completely different direction with an entirely different plot. His first effort, for instance, was to chronicle another adventure of his protagonist Bilbo from The Hobbit, making it a direct sequel in both a literal and a spiritual sense. However, he then remembered the ring Bilbo found beneath the mountains, won (or stolen, depending on your point of view) from the creature Gollum, and the strange power it held; not just invisibility, which was Bilbo’s main use for it, but the hypnotic hold it had on Gollum (he even subsequently rewrote that scene for The Hobbit‘s second edition to emphasise that effect). He decided that the strange power of the ring was a more natural direction to follow, and so he wrote about that instead.

Progress was slow. Tolkien went months at a time without working on the book, making only occasional, sporadic yet highly focused bouts of progress. Huge amounts were cross-referenced with or borrowed from his earlier writings concerning the mythology, history & background of Middle-earth, as Tolkien constantly tried to make his mythic world feel and, in a sense, be as real as possible. But it was mainly due to the influence of his son Christopher, to whom Tolkien would send chapters whilst he was away serving in the Second World War in his father’s native South Africa, that the book ever got finished at all. When it eventually did, Tolkien had been working on the story of Bilbo’s heir Frodo and his quest to destroy the Ring of Power for over 12 years. His final work was over 1000 pages long, spread across six ‘books’, as well as being laden with appendices to explain & offer background information, and he called it The Lord of the Rings (in reference to his overarching antagonist, the Dark Lord Sauron).

A similar story had, incidentally, been attempted once before; Der Ring des Nibelungen is an opera (well, four operas) written by the German composer Richard Wagner during the 19th century, traditionally performed over the course of four consecutive nights (yeah, you have to be pretty committed to sit through all of that) and also known as ‘The Ring Cycle’- it’s where ‘Ride of The Valkyries’ comes from. The opera follows the story of a ring, forged from the Rhinegold (gold dredged from the Rhine river), and the trail of death, chaos and destruction it leaves in its wake between its forging & destruction. Many commentators have pointed out the close similarities between the two, and as a keen follower of Germanic mythology Tolkien certainly knew the story, but Tolkien rubbished any suggestion that he had borrowed from it, saying “Both rings were round, and there the resemblance ceases”. You can probably work out my approximate personal opinion from the title of this post, although I wouldn’t read too much into it.

Even once his epic was finished, the problems weren’t over. He quarrelled with Allen & Unwin over his desire to release LOTR in one volume, along with his still-incomplete Silmarillion (that he wasn’t allowed to may explain all the appendices). He then turned to Collins, but they claimed his book was in urgent need of an editor and a licence to cut (my words, not theirs, I should add). Many other people have voiced this complaint since, but Tolkien refused and demanded that Collins publish by 1952. This they failed to do, so Tolkien wrote back to Allen & Unwin and eventually agreed to publish his book in three parts: The Fellowship of the Ring, The Two Towers, and The Return of the King (a title Tolkien, incidentally, detested, because it told you how the book ended).

Still, the book was out now, and the critics… weren’t that enthusiastic. Well, some of them were, certainly, but the book has always had its detractors in the literary world, and that was most certainly the case upon its release. The New York Times criticised Tolkien’s academic approach, saying he had “formulated a high-minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself”, whilst others claimed that it, and its characters in particular, lacked depth. Even Hugo Dyson, one of Tolkien’s close friends and a member of his own literary group, spent readings of the book lying on a sofa shouting complaints along the lines of “Oh God, not another elf!”. Unlike The Hobbit, which had been in many ways a light-hearted children’s story, The Lord of the Rings was darker & more grown up, dealing with themes of death, power and evil and written in a far more adult style; this could be said to have exposed it to more serious critics and a harsher gaze than its predecessor, putting some people off (a problem that wasn’t helped by the sheer size of the thing).

However, I personally am part of the other crowd: those who have voiced their opinions in nearly 500 five-star reviews on Amazon (although one should never read too much into such figures) and who agree with the likes of CS Lewis, The Sunday Telegraph and The Sunday Times of the day that “Here is a book that will break your heart”, that it is “among the greatest works of imaginative fiction of the twentieth century” and that “the English-speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them”. These are the people who have shown the truth in the review of the New York Herald Tribune: that Tolkien’s masterpiece was and is “destined to outlast our time”.

But… what exactly is it that makes Tolkien’s epic so special, such a fixture? Why, years after its publication as the first genuinely great work of fantasy, is it still widely regarded as the finest work the genre has ever produced? I could probably write an entire book just trying to answer that question (and several people probably have done), but to me it comes down to the fact that Tolkien understood, absolutely perfectly and fundamentally, exactly what he was trying to write. Many modern fantasy novels try to be uber-fantastical, or try to base themselves around an idea or a concept, in some way trying to find their own level of reality on which their world can exist, and they often end up in a sort of awkward middle ground; Tolkien never suffered that problem because he knew that, quite simply, he was writing a myth, and he knew exactly how that was done. Terry Pratchett may have mastered comedic fantasy, George RR Martin may be the king of political fantasy, but only JRR Tolkien has, in recent times, been able to harness the awesome power of the first source of story: the legend, told around the campfire, of the hero and the villain, of the character defined by their virtues over their flaws, of the purest, rawest adventure in the pursuit of saving what is good and true in this world. These are the stories written to outlast the generations, and Tolkien’s mastery of them is, to me, the secret to his masterpiece.

…but some are more equal than others

Seemingly the default belief of any modern, respectable government and, indeed, of any well brought-up child of the modern age, is egalitarianism- the idea that all men are born equal. Numerous documents, from the US Declaration of Independence to the UN Universal Declaration of Human Rights, have proclaimed this as a ‘self-evident truth’, and anyone who still blatantly clings to the idea that some people are born ‘better’ than others by virtue of their family having more money is dubbed out of touch at best, and (bizarrely) a Nazi at worst. This might be considered surprising given the extent to which we still set store by a person’s rank or status.

I mean, think about it. A child from a well-respected, middle-class family with two professional parents will invariably get more opportunities in life, and will frequently be considered more ‘trustworthy’, than a kid born into a broken home with a mother on benefits and a father in jail, particularly if his accent (especially) or skin colour (possibly to a slightly lesser extent in Europe than in the US) gives this away. Someone in an expensive, tailored suit stands a better chance at a job interview than a candidate in an old, fading jacket with worn knees on trousers he has never been rich enough to replace, and I haven’t even started on the wage and job-availability gap between men and women, despite the fact that there are nowadays more female university graduates than male ones. You get the general idea. We might think that all are born equal, but that doesn’t mean we treat them like that.

Some have said that this, particularly in the world of work, is to do with the background and age of the people concerned. Particularly in large, old and incredibly valuable corporate enterprises such as banks, the average age of senior staff and shareholders tends to be on the grey end of things, the majority of them are male, and many of them will have had the top-quality private education that allowed them to get there; so, the argument goes, these men were brought up surrounded by a sort of ‘public schoolers are fantastic and everyone else is a pleb’ mentality. And it is without doubt true that very few companies have an average board-member age below 50, and many are above 65; in fact the average age of a CEO in the UK has recently gone up from a decade-long value of 51 to nearly 53. However, the evidence suggests that the inclusion of younger board members and CEOs generally benefits a company by providing a fresher understanding of the modern world- data that could only be gathered because there are already a large number of young, high-ranking businesspeople to evaluate. And anyway, in most job interviews it’s less likely to be the board asking the questions than a recruiting officer of middling business experience- this may be an issue, but I don’t think it’s the key thing here.

It could well be that the true answer is that there is no cause at all, and the whole business is nothing more than a statistical blip. In Freakonomics, an analysis was done to find the twenty ‘blackest’ and ‘whitest’ boys’ names in the US (I seem to remember DeShawn was the ‘blackest’ and Jake the ‘whitest’), and the job prospects of people with names on the two lists were then compared. The results suggested that people with one of the ‘white’ names did better in the job market than those with ‘black’ names, perhaps suggesting that interviewers are being, subconsciously or not, racist. But a statistical analysis revealed this not, in fact, to be the case; we must remember that black Americans are, on average, less well off than their white countrymen, meaning they are more likely to go to a dodgy school, have problems at home or hang around with the wrong friends. Therefore, black people do worse, on average, in the job market because they are more likely to be less well-qualified than white equivalents, making them, from a purely analytical standpoint, often weaker candidates. This meant that Jake was more likely to get a job than DeShawn simply because Jake was more likely to be the better-educated guy, so any racism on the part of job interviewers is not prevalent enough to be statistically significant. To some extent, we may be looking at the same thing here- people who turn up to an interview in cheap or hand-me-down clothes are likely to have come from a poorer background than someone in a tailored Armani suit, and are therefore likely to have had a lower standard of education, making them less attractive candidates to an interviewing panel. Similarly, women tend to drop their careers earlier in life if they want to start a family, since the traditional family model puts the man as chief breadwinner, meaning they are less likely to advance up the ladder and earn the high wages that could even out the difference in male/female pay.
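
To make that ‘controlling for a confounder’ idea concrete, here’s a minimal, entirely synthetic sketch in Python; the numbers, and the single ‘education’ variable standing in for background, are my own inventions for illustration and have nothing to do with the actual Freakonomics data:

```python
# Synthetic illustration only: an apparent gap between two groups disappears once
# you compare like-for-like on a confounding variable (here, a made-up 'education' flag).
import random

random.seed(0)

def simulate(group, n=20_000):
    """Hiring depends only on education; education rates differ by group (by assumption)."""
    rows = []
    for _ in range(n):
        educated = random.random() < (0.7 if group == "A" else 0.4)  # assumed background gap
        hired = random.random() < (0.6 if educated else 0.3)         # group itself plays no part
        rows.append((educated, hired))
    return rows

def hire_rate(rows, edu=None):
    """Fraction hired, optionally restricted to one education level."""
    subset = [hired for educated, hired in rows if edu is None or educated == edu]
    return sum(subset) / len(subset)

a, b = simulate("A"), simulate("B")

# Naive comparison: looks as though interviewers favour group A...
print("raw hire rate   A=%.2f  B=%.2f" % (hire_rate(a), hire_rate(b)))

# ...but within each education level the two groups are hired at (statistically) the same rate.
for edu in (True, False):
    print("educated=%s  A=%.2f  B=%.2f" % (edu, hire_rate(a, edu), hire_rate(b, edu)))
```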

But statistics cannot quite cover everything- to use another slightly tangential bit of research, a study done some years ago found that teachers gave higher marks to essays written in neat handwriting than they did to identical essays written in a messier hand. The neat handwriting suggested a diligent approach to learning and a good education in the formative years, making the teacher think the child was cleverer, and thus deserving of more marks, than one who wrote in a scruffier, less orderly hand. Once again, we can draw parallels to our two guys in their different suits. Mr Faded may have good qualifications and present himself well, but his attire suggests to his interviewers that he is from a poorer background. We have a subconscious understanding of the link between poorer backgrounds and the increased risk of poor education and other compromising factors, and so the interviewers unconsciously link our man to the idea that he has been less well educated than Mr Armani, even if the evidence presented before them suggests otherwise. They are not trying to be prejudiced; they just think the other guy looks more likely to be as good as his paperwork suggests. Some of it isn’t even down to such logical connections; research suggests that interviewers, just like people in everyday life, are drawn to those they feel are similar to them, and they might also make the subconscious link that ‘my wife stays at home and looks after the kids, there aren’t that many women in the office, so what’s this one doing here?’- again, not deliberate discrimination, but it happens.

In many ways this is an unfortunate state of affairs, and one that we should attempt to remedy in everyday life whenever and wherever we can. But a lot of the stuff that to a casual observer might look prejudiced, that might seem to violate our egalitarian creed, we do without thinking, letting our brains make connections that logic would not. The trick is not to ‘not judge a book by its cover’, but not to let your brain register that there’s a cover at all.

What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over 12 and a half years later, the general understanding of most of us does not appear to have increased to any significant degree, and we remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given the sense of ‘understand’ that applies in computer-based situations. A computer is a rare example of a complex system of which an expert is genuinely capable of understanding, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety, therefore, is an equally complex job, and this is one very good reason why computer nerds tend to be a rather solitary bunch, with rather few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some research into the hows and whys of its installation, along with which came quite a lot of info about how my computer actually works. I thought, then, that I might spend the next couple of posts or so detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t always up to scratch. The human brain is not built for infallibility, and human computers would, not infrequently, make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst the work still relied on the human mind.

Enter Blaise Pascal, 17th-century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator aged just 19, in 1642. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (ie a turn of 4 spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could work with currency denominations and distances too. It could also subtract, multiply and divide (with some difficulty), and moreover it proved an important point- that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
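
To see what that carrying amounts to, here’s a tiny toy in Python, entirely my own illustration rather than a model of Pascal’s actual gearing: each ‘wheel’ holds a digit from 0 to 9, and rolling one past 9 nudges the next wheel along by one.

```python
# A toy of chained counting wheels (my own sketch, not Pascal's real mechanism):
# each wheel holds a digit 0-9, and rolling a wheel past 9 carries into the next one.
def turn(wheels, position, spaces):
    """Advance the wheel at `position` by `spaces` clicks, carrying leftwards as needed."""
    wheels = wheels[:]                       # work on a copy
    wheels[position] += spaces
    while position < len(wheels):
        carry, wheels[position] = divmod(wheels[position], 10)
        if carry == 0:
            break
        position += 1
        if position < len(wheels):
            wheels[position] += carry        # a full turn moves the next wheel on by one
    return wheels

# Wheels stored least-significant first: [units, tens, hundreds]
wheels = [7, 0, 0]            # a seven already counted on the units wheel
wheels = turn(wheels, 0, 4)   # turn the units wheel four spaces
print(wheels)                 # [1, 1, 0], ie eleven, as in the example above
```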

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, and his remained essentially the only working mechanical calculator of the 17th century. Several more, of a range of designs, were built during the 18th century as showpieces, but it was in the 19th that the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and only of limited use, but de Colmar’s machine was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on the method of finite differences and Newton polynomials, a fiddly bit of maths that I won’t even pretend to fully understand, but one that could be used to closely approximate logarithmic and trigonometric functions. However, what made the difference engine special was that the original setup of the device, the positions of the various columns and so forth, determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
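
The core trick, at least as I understand it, is that once a polynomial’s first few values and differences are known, every later value can be produced using nothing but repeated addition, which is exactly the sort of operation columns of cogs are good at. Here’s a rough sketch of that idea in Python (my own illustration of the maths, not of Babbage’s hardware):

```python
# Tabulating p(x) = 2x^2 + 3x + 1 at x = 0, 1, 2, ... using nothing but addition:
# the 'method of differences' that gave the difference engine its name.
def tabulate(poly, steps):
    # Seed the table by computing p(0), p(1), p(2) directly, just once.
    p0, p1, p2 = (poly(x) for x in range(3))
    first = p1 - p0                      # first difference
    second = (p2 - p1) - (p1 - p0)       # second difference, constant for a quadratic

    value, results = p0, []
    for _ in range(steps):
        results.append(value)
        value += first                   # next value of the polynomial: one addition
        first += second                  # next first difference: another addition
    return results

poly = lambda x: 2 * x * x + 3 * x + 1
print(tabulate(poly, 6))                 # [1, 6, 15, 28, 45, 66]
print([poly(x) for x in range(6)])       # the same values, with no multiplication in the loop
```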

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. Babbage was commissioned by the British government to build one for the production of mathematical tables, but since Babbage was often brash, once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him, and prized academia above fiscal matters & practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design known as the analytical engine, pulled the plug, and the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program, which could be a lot more complicated than just adding or subtracting, and an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interfacing system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure that the steps of a program were performed in the correct order. We may never know, since it has never been built, whether Babbage’s analytical engine would have worked, but a later model of his difference engine was built for the London Science Museum in 1991, yielding accurate results to 31 digits.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

Living for… when, exactly?

When we are young, we get a lot of advice and rules shoved down our throats in a seemingly endless stream of dos and don’ts. “Do eat your greens”, “Don’t spend too much time watching TV”, “Get your fingers away from your nose” and, an old personal favourite, “Keep your elbows off the table”. Some schools of psychology claim that it is this militant enforcement of rules, with no leeway or grey area, that may be responsible for some of our more rebellious behaviour in later life and, particularly, the teenage years, but I won’t delve into that now.

But there is one piece of advice, very broadly applied in a variety of contexts, in fact more of a general message than a rule, that is of particular interest to me. Throughout our lives, from the cradle right into adulthood, we are encouraged to take time over our decisions, to make only sensible choices, to plan ahead and think of the consequences, living for long-term satisfaction rather than short-term thrills. This takes the form of a myriad of bits of advice like ‘save, don’t spend’ or ‘don’t eat all that chocolate at once’ (perhaps the most readily disobeyed of all parental instructions), but the general message remains the same: make the sensible, analytical decision.

The reason this advice is so interesting is that when we hit adult life, many of us will encounter another school of thought that runs totally counter to the idea of sensible analysis- the idea of ‘living for the moment’. The basic viewpoint goes along the lines of ‘We only have one short life that could end tomorrow, so enjoy it as much as you can whilst you can. Take risks, make the mad decisions, go for the off-chances, try out as much as you can, and try to live your life in the moment, thinking of yourself and the here & now rather than worrying about what’s going to happen 20 years down the line’.

This is a very compelling viewpoint, particularly to the fun-centric outlook of the early-to-mid-twenties age bracket who most commonly receive and promote this way of life, for a host of reasons. Firstly, it offers a way of living in which very little can ever be considered a mistake, only an attempt at something new that didn’t come off. Secondly, its practice generates immediate and tangible results, rather than the slower, more boring, long-term gains that a ‘sensible life’ may bring you, giving it an immediate association with living the good life. But, most importantly, following this life path is great fun, and leads you to the moments that make life truly special. Someone I know has often quoted as their greatest ever regret that, when seriously strapped for cash, they took the sensible fiscal decision and didn’t fork out to go to a Queen concert. Freddie Mercury died shortly afterwards, and this hardcore Queen fan never got to see them live. There is a similar and oft-quoted argument for the huge expense of the space program: ‘Across the galaxy there may be hundreds of dead civilizations, all of whom made the sensible economic choice not to pursue space exploration- who will only be discovered by whichever race made the irrational decision’. In short, sensible decisions may make your life seem good to an accountant, but might not make it seem that special or worthwhile.

On the other hand, this does not make ‘living for the moment’ an especially good life choice either- there’s a very good reason why your parents wanted you to be sensible. A ‘live for the future’ lifestyle is far more likely to reap long-term rewards in terms of salary and societal rank, and plans laid with the right degree of patience and care are invariably more successful, whilst a constant, ceaseless focus on satisfying the urges of the moment is only ever going to end in disaster. This was perhaps best demonstrated in the episode of Family Guy entitled “Brian Sings and Swings”, in which, following a near-death experience, Brian is inspired by the ‘live for today’ lifestyle of Frank Sinatra Jr. For him, this takes the form of singing with Sinatra (and Stewie) every night, and drinking heavily both before & during performances, quickly resulting in drunken shows, throwing up into the toilet, losing a baby and, eventually, the gutter. Clearly, simply living for the now with no consideration for future happiness will very quickly leave you broke, out of a job, possibly homeless and with a monumental hangover. Not only that, but such a heavy focus on the short term has been blamed for a whole host of unsavoury side effects, ranging from the ‘plastic’ consumer culture of the modern world and a lack of patience between people to the global economic meltdown, the latter of which could almost certainly have been prevented (and cleared up a bit quicker) had the world’s banks been a little more concerned with their long-term future and a little less with the size of their profit margins.

Clearly then, this is not a clear-cut balance between a right and wrong way of doing things- for one thing everybody’s priorities will be different, but for another neither way of life makes perfect sense without some degree of compromise. Perhaps this is in and of itself a life lesson- that nothing is ever quite fixed, that there are always shades of grey, and that compromise is sure to permeate every facet of our existence. Living for the moment is costly in all regards and potentially catastrophic, whilst living for the distant future is boring and makes life devoid of real value, neither of which is an ideal way to be. Perhaps the best solution is to aim for somewhere in the middle; don’t live for now, don’t live for the indeterminate future, but perhaps live for… this time next week?

I am away on holiday for the next week, so posts should resume on the Monday after next. To tide you over until then, I leave you with a recommendation: YouTube ‘Crapshots’. Find a spare hour or two. Watch all of. Giggle.

The Churchill Problem

Everybody knows about Winston Churchill- he was about the only reason that Britain’s will to fight didn’t crumble during the Second World War, his voice and speeches are some of the most iconic of all time, and his name and mannerisms have been immortalised by a cartoon dog selling insurance. However, some of his postwar achievements are often overlooked- after the war he was voted out of the office of Prime Minister in favour of a revolutionary Labour government, but he returned to office in the ’50s with the return of the Tories. He didn’t do quite as well this time round- Churchill was a shameless warmonger who had nearly annihilated his own reputation during the First World War by ordering a disastrous assault on Gallipoli in Turkey, and hadn’t done much to help it by insisting that everything between the two wars was an excuse for another one- but it was during this time that he made one of his least-known but most interesting speeches. In it he envisaged a world in which the rapidly accelerating technological advancement of his age would see most of the meaningful work done by machines, changing our concept of the working week. He suggested that we would one day be able to “give the working man what he’s never had – four days’ work and then three days’ fun”- basically, Winston Churchill was the first man to suggest the concept of a three-day weekend.

This was at a time when the very concept of the weekend was itself quite a new one- the original idea of one part of the week being dedicated to not working comes, of course, from the Sabbath days adopted by most religions. The idea of no work being done on a Sunday is, in the Western and therefore historically Christian world, an old one, but the idea of extending it to Saturday as well is far newer. This was partly motivated by the increased proportion and acceptance of Jewish workers, whose day of rest falls on Saturday, and was also part of a general trend of decreasing working hours during the early 1900s. It wasn’t until 1938 that the five-day working week became ratified in US law, and it appeared to be the start of a downward trend in working hours as trade unions gained power, workers got more free time, and machines did all the important stuff. All of this appeared to lead towards Churchill’s promised world- a world of the four-day working week and perhaps, one day, a total lap of luxury whilst we let computers and androids do everything.

However, recently things have started to change. The trend of shortening working hours and an increasingly stressless existence has been reversed, with the average working week getting dramatically longer- since 1970, the number of hours worked per capita has risen by 20%. A survey done a couple of winters ago found that we spend an average of only 15 hours and 17 minutes of our weekend out of the work mindset (between 12:38am and 3:55pm on the Sunday, at which point we start worrying about Monday again), and that over half of us are too tired to enjoy our weekends properly. Given that this was a survey conducted by a hotel chain it may not be an entirely representative sample, but you get the idea. The weekend itself is in some ways under threat, and Churchill’s vision is disappearing fast.

So what’s changed since the ’50s (other than transport, communications, language, technology, religion, science, politics, the world, warfare, international relations, and just about everything else)? Why have we suddenly ceased to favour rest over work? What the hell is wrong with us?

To an extent, some of the figures are anomalous- employment of women has increased drastically in the last 50 years, and as such so has the percentage of the population who are in work. But this is not enough to explain away all of the stats relating to ‘the death of the weekend’. Part of the issue is judgemental. Office environments can be competitive places, and can quickly develop into mindsets where our emotional investment is in the compiling of our accounts document or whatever. In such an environment, people’s priorities become more focused on work, and somebody taking an extra day off at the weekend would just seem like laziness- especially to the boss, who has deadlines to meet, really doesn’t appreciate slackers, and also controls your salary. We also, of course, judge ourselves, unwilling to feel as if we are letting the team down and causing other people inconvenience. There’s also the problem of boredom- as any schoolchild will tell you, the first few days of holiday after a long term are blissful relaxation, but it’s only a matter of time before a parent hears that dreaded phrase: “I’m booooooored”. The same thing could be said of having nearly half your time off every single week. But these are features of human nature, which certainly hasn’t changed in the past 50 years, so what could the root of the change in trends be?

The obvious place to start when considering this is the change in the nature of work over this time. The last half-century has seen Britain’s manufacturing economy spiral downwards, as more and more of us lay down tools and pick up keyboards- the current ‘average job’ for a Briton involves working in an office somewhere. Probably in Sales, or Marketing. This kind of job chiefly involves working our minds, crunching numbers and thinking through figures, which makes it far harder for us to ‘switch off’ from our work mentality than if it were centred on how much our muscles hurt. It also makes it far easier to justify staying for overtime and to ‘just finish that last bit’, partly because not being physically tired makes it easier and also because the kind of work given to an office worker is more likely to be centred around individual mini-projects than simply punching rivets or controlling a machine for hours on end. And of course, as some of us start to stay longer, our competitive instinct causes the rest of us to do so as well.

Switching off from the work mindset has been made harder still since the invention of the laptop and, especially, the smartphone. The laptop allowed us to check our emails or work on a project at home, on a train or wherever we happened to be- the smartphone has allowed us to keep in touch with work at every single waking moment of the day, making it very difficult for us to ‘switch work off’. It has also made it far easier to work at home, which for the committed worker can make it even harder to formally end the day when there are no colleagues or bosses telling you it’s time to go home. This spread of technology into our lives is thought to lead to an increase in levels of dopamine, a neurotransmitter associated with stimulation and reward, an excess of which is said to frazzle our pre-frontal cortex and leave us feeling drained and unfocused- obvious signs of being overworked.

Then there is the issue of competition. In the past, competition for work would usually have been limited to a few other businesses in the local area- in the grand scheme of things, this could perhaps be scaled up to cover an entire country. The existence of trade unions helped prevent this competition from causing problems- if everyone is desperate for work, as occurred with depressing regularity during the Great Depression in the USA, they keep trying to offer their services as cheaply as possible to try and bag the job, but if a trade union can be used to settle and standardise wages then this effect is halted. However, in the current age of everywhere being interconnected, competition in big business can come from all over the world. To guarantee that they keep their jobs, people have to try to work as hard as they can for as long as they can, lengthening the working week still further. Since trade unions are generally limited to a single country, their powers in this situation are rather limited.

So, that’s the trend as it is- but is it feasible that we will ever live the life of luxury, with robots doing all our work, that seemed the summit of Churchill’s thinking? In short: no. Whilst a three-day weekend is perhaps not too unfeasible, I just don’t think human nature would allow us to laze about all day, every day for the whole of our lives and do absolutely nothing with it, if only for the reasons explained above. Plus, constant rest would simply desensitise us to it, rest becoming so normal that we could no longer envisage the concept of work at all. Thus, all the stresses that were once taken up with work worries would simply be transferred to ‘rest worries’, leaving us no happier after all, and defeating the purpose of having all the rest in the first place. In short, we need work to enjoy play.

Plus, if robots ran everything and nobody worked them, it’d only be a matter of time before they either all broke down or took over.