Art vs. Science

All intellectual human activity can be divided into one of three categories: the arts, humanities, and sciences (although these terms are not exactly fully inclusive). Art here covers everything from the painted medium to music; everything that we humans do that is intended to be creative and make our world as a whole a more beautiful place to live in. The precise definition of ‘art’ is a major bone of contention among creative types and it’s not exactly clear where the boundary lies in some cases, but here we can categorise everything intended to be artistic as an art form. Science here covers every one of the STEM disciplines: science (physics, biology, chemistry and all the rest in its vast multitude of forms and subgenres), technology, engineering (strictly speaking those two come under the same branch, but technology is too satisfying a word to leave out of any self-respecting acronym) and mathematics. Certain portions of these fields too could be argued to be entirely self-fulfilling, and others are considered by some beautiful, but since the two rarely overlap the title of art is never truly appropriate. The humanities are an altogether trickier bunch to consider; on one hand they are, collectively, a set of sciences, since they purport to study how the world we live in behaves and functions. However, this particular set of sciences is deemed separate because it deals less with fundamental principles of nature than with human systems, and human interactions with the world around them; hence the title ‘humanities’. Fields as diverse as economics and geography are all blanketed under this title, and are in some ways the most interesting of sciences as they are the most subjective and accessible; the principles of the humanities can be, and usually are, encountered on a daily basis, so anyone with a keen mind and an eye for noticing the right things can usually form an opinion on them. And a good thing too, otherwise I would be frequently short of blogging ideas.

Each field has its own proponents, supporters and detractors, and all are quite prepared to defend their chosen field to the hilt. The scientists point to the huge advancements in our understanding of the universe and world around us that have been made in the last century, and link these to the immense breakthroughs in healthcare, infrastructure, technology, manufacturing and general innovation and awesomeness that have so increased our quality of life (and life expectancy) in recent years. And it’s not hard to see why; such advances have permanently changed the face of our earth (both for better and worse), and there is a truly vast body of evidence supporting the idea that these innovations have provided the greatest force for making our world a better place in recent times. The artists provide the counterpoint to this by saying that living longer, healthier lives with more stuff in them is all well and good, but without art and creativity there is no advantage to this better life, for there is no way for us to enjoy it. They can point to the developments in film, television, music and design, all the ideas of scientists and engineers tuned to perfection by artists of each field, and even the development of more classical artistic mediums such as poetry or dance, as key features of the 20th century that enabled us to enjoy our lives more than ever before. The humanities have advanced too during recent history, but their effects are far more subtle; innovative strategies in economics, new historical discoveries and perspectives and new analyses of the way we interact with our world have all come, and many have made news, but their effects tend to only be felt in the spheres of influence they directly concern- nobody remembers how a new use of critical path analysis made J. Bloggs Ltd. use materials 29% more efficiently (yes, I know CPA is technically mathematics; deal with it).
As such, proponents of humanities tend to be less vocal than those in other fields, although this may have something to do with the fact that the people who go into humanities have a tendency to be more… normal than the kind of introverted nerd/suicidally artistic/stereotypical-in-some-other-way characters who would go into the other two fields.

This bickering between arts & sciences as to the worthiness/beauty/parentage of the other field has led to something of a divide between them; some commentators have spoken of the ‘two cultures’ of arts and sciences, leaving us with a sect of scientists who find it impossible to appreciate the value of art and beauty, thinking it almost irrelevant compared to what their field aims to achieve (to their loss, in my opinion). I’m not sure that this picture is entirely true; what may be more so, however, is the other end of the stick, those artistic figures who dominate our media who simply cannot understand science beyond GCSE level, if that. It is true that quite a lot of modern science is very, very complex in the details, but Albert Einstein is often credited with saying that if a scientific principle cannot be explained to a ten-year-old then it is almost certainly wrong, and I tend to agree with him. Even the theory behind the existence of the Higgs Boson, right at the cutting edge of modern physics, can be explained by an analogy of a room full of fans and celebrities. Oh look it up, I don’t want to wander off topic here.

The truth is, of course, that no field can sustain a world without the others; a world devoid of STEM would die out in a matter of months, a world devoid of humanities would be hideously inefficient and appear monumentally stupid, and a world devoid of art would be the most incomprehensibly dull place imaginable. Not only that, but all three working in harmony will invariably produce the best results, as master engineer, inventor, craftsman and creator of some of the most famous paintings of all time Leonardo da Vinci so ably demonstrated. As such, any argument between fields as to which is ‘the best’ or ‘the most worthy’ will simply never be won, and will just end up as a futile task. The world is an amazing place, but the real source of that awesomeness is the diversity it contains, both in terms of nature and in terms of people. The arts and sciences are not at war, nor should they ever be; for in tandem they can achieve so much more.

Getting bored with history lessons

Last post’s investigation into the post-Babbage history of computers took us up to around the end of the Second World War, before the computer age could really be said to have kicked off. However, with the coming of Alan Turing the biggest stumbling block for the intellectual development of computing as a science had been overcome, since the discipline now clearly understood what it was and where it was going. From then on, therefore, the history of computing is basically one long series of hardware improvements and business successes, and the only thing of real scholarly interest is Moore’s law. This law is an unofficial, yet surprisingly accurate, model of the exponential growth in the capabilities of computer hardware, stating that every 18 months computing hardware gets either twice as powerful, half the size, or half the price for the same other specifications. This law was based on a 1965 paper by Gordon E. Moore, who noted that the number of transistors on integrated circuits had been doubling every two years since their invention 7 years earlier. The modern day figure of an 18-monthly doubling in performance comes from an Intel executive’s estimate based on both the increasing number of transistors and their getting faster & more efficient… but I’m getting sidetracked. The point I meant to make was that there is no point in me continuing with a potted history of the last 70 years of computing, so in this post I wish to get on with the business of exactly how (roughly fundamentally speaking) computers work.
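Since Moore’s law is just exponential doubling, the figures above are easy to play with in code. Here is a minimal sketch (the function name is my own invention, and the 18-month period is the popularised Intel-derived estimate mentioned above, not Moore’s original two-year figure):

```python
def moores_law_factor(years, doubling_period_months=18):
    """Growth factor predicted by an exponential doubling rule:
    capability doubles once every `doubling_period_months`."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

# Three years at the popularised 18-month rate: two doublings.
print(moores_law_factor(3))  # 4.0

# Moore's original 1965 observation: transistor counts doubling
# every two years over the preceding seven years.
print(round(moores_law_factor(7, doubling_period_months=24), 1))  # 11.3
```

Compounded over five decades or so, the same rule predicts a factor in the billions, which is why the ‘bundle of billions of switches’ described below is even possible.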

A modern computer is, basically, a huge bundle of switches- literally billions of the things. Normal switches are obviously not up to the job, being both too large and requiring an electromechanical rather than purely electrical interface to function, so computer designers have had to come up with electrically-activated switches instead. In Colossus’ day they used vacuum tubes, but these were large and prone to breaking so, in the late 1940s, the transistor was invented. This is a marvellous semiconductor-based device, but to explain how it works I’m going to have to go on a bit of a tangent.

Semiconductors are materials that do not conduct electricity freely and every which way like a metal, but do not insulate like a wood or plastic either- sometimes they conduct, sometimes they don’t. In modern computing and electronics, silicon is the substance most readily used for this purpose. For use in a transistor, silicon (an element with four electrons in its outer atomic ‘shell’) must be ‘doped’ with other elements, meaning that they are ‘mixed’ into the chemical, crystalline structure of the silicon. Doping with a substance such as boron, with three electrons in its outer shell, creates an area with a ‘missing’ electron, known as a hole. Holes have, effectively, a positive charge compared to a ‘normal’ area of silicon (since electrons are negatively charged), so this kind of doping produces what is known as p-type silicon. Similarly, doping with something like phosphorus, with five outer shell electrons, produces an excess of negatively-charged electrons and n-type silicon. Thus electrons, and therefore electricity (made up entirely of the net movement of electrons from one area to another), find it easy to flow from n- to p-type silicon, but not very well going the other way- the junction conducts in one direction and insulates in the other, hence a semiconductor. However, it is vital to remember that the p-type silicon is not an insulator and does allow for free passage of electrons, unlike pure, undoped silicon. A transistor generally consists of three layers of silicon sandwiched together, in order NPN or PNP depending on the practicality of the situation, with each layer of the sandwich having a metal contact or ‘leg’ attached to it- the leg in the middle is called the base, and the ones at either side are called the emitter and collector.

Now, when the three layers of silicon are stuck next to one another, some of the free electrons in the n-type layer(s) jump to fill the holes in the adjacent p-type, creating areas of neutral, or zero, charge. These are called ‘depletion zones’ and are good insulators, meaning that there is a high electrical resistance across the transistor and that a current cannot flow between the emitter and collector despite usually having a voltage ‘drop’ between them that is trying to get a current flowing. However, when a voltage is applied across the base and emitter, a current can flow between these two different types of silicon without a problem, and as such it does. This pulls electrons across the border between layers, and decreases the size of the depletion zones, decreasing the amount of electrical resistance across the transistor and allowing an electrical current to flow between the collector and emitter. In short, one small current can be used to ‘turn on’ another.
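Abstracting away all the solid-state physics, what matters for computing is that a transistor behaves as a switch: the collector-emitter path conducts only when the base is driven. As a purely illustrative sketch (the function names are my own, and a real gate also involves supply rails and resistors that I have ignored), here is that idealised switch in Python, with two of them stacked in series to make a NAND gate, from which all other logic can be built:

```python
def transistor(base, supply=True):
    """Idealised transistor-as-switch: the collector-emitter path
    conducts only when a base current is flowing."""
    return supply and base

def nand_gate(a, b):
    """Two idealised transistors in series between supply and ground:
    the output is pulled low only when both transistors conduct."""
    pulled_low = transistor(b, supply=transistor(a))
    return not pulled_low

def not_gate(a):
    # An inverter is just a NAND gate with its inputs tied together.
    return nand_gate(a, a)

def and_gate(a, b):
    # AND = NAND followed by an inverter.
    return not_gate(nand_gate(a, b))

# Truth table for NAND over all four input combinations:
print([nand_gate(a, b) for a in (False, True) for b in (False, True)])
# [True, True, True, False]
```

String a few billion of these switches together in the right pattern and you have a processor; that, at bottom, is all a computer chip is.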

Transistor radios use this principle to amplify the signal they receive into a loud, clear sound, and if you crack one open you should be able to see some (well, if you know what you’re looking for). However, computer and manufacturing technology has got so advanced over the last 50 years that it is now possible to fit well over a billion of these transistor switches onto a silicon chip the size of your thumbnail- and bear in mind that the entire Colossus machine, the machine that cracked the Lorenz cipher, contained only a couple of thousand vacuum tube switches all told. Modern technology is a wonderful thing, and the sheer achievement behind it is worth bearing in mind next time you get shocked over the price of a new computer (unless you’re buying an Apple- that’s just business elitism).

…and dammit, I’ve filled up a whole post again without getting onto what I really wanted to talk about. Ah well, there’s always next time…

(In which I promise to actually get on with talking about computers)

The Land of the Red

Nowadays, the country to talk about if you want to be seen as being politically forward-looking is, of course, China. The most populous nation on Earth (containing 1.3 billion souls) with an economy and defence budget second only to the USA in terms of size, it also features a gigantic manufacturing and raw materials extraction industry, the world’s largest standing army and one of only five remaining communist governments. In many ways, this is China’s second boom as a superpower, after its early forays into civilisation and technological innovation around the time of Christ made it the world’s largest economy for most of the intervening time. However, the technological revolution that swept the Western world in the two or three hundred years during and preceding the Industrial Revolution (which, according to QI, was entirely due to the development and use of high-quality glass in Europe, a material almost totally unheard of in China, having been invented in Egypt and popularised by the Romans) rather passed China by, leaving it a severely underdeveloped nation by the nineteenth century. After around 100 years of bitter political infighting, during which time the 2,000-year-old Imperial China was replaced by a republic whose control was fiercely contested between nationalists and communists, the chaos of the Second World War destroyed most of what was left of the system. The Second Sino-Japanese War (as that particular branch of WWII was called) killed around 20 million Chinese civilians, the second biggest loss to a country after the Soviet Union, as a Japanese army fresh from its own recent transformation from an Imperial to a modern system went on a rampage of rape, murder and destruction throughout underdeveloped northern China, where some war leaders still fought with swords.
The war also annihilated the nationalists, leaving the communists free to sweep to power after the Japanese surrender and establish the now 63-year-old People’s Republic, then led by former librarian Mao Zedong.

Since then, China has changed almost beyond recognition. During the idolised Mao’s reign, the Chinese population near-doubled in an effort to increase the available worker population, an idea tried far less successfully in other countries around the world with significantly less space to fill. This population was then put to work during Mao’s “Great Leap Forward”, in which he tried to move his country away from its previously agricultural economy and into a more manufacturing-centric system. However, whilst the Chinese government insists to this day that three subsequent years of famine were entirely due to natural disasters such as drought and poor weather, and only killed 15 million people, most external commentators agree that the sudden change in the availability of food thanks to the Great Leap certainly contributed to the death toll, estimated to actually be in the region of 20-40 million. Oh, and the whole business was an economic failure, as farmers uneducated in modern manufacturing techniques attempted to produce steel at home, resulting in a net replacement of useful food production with useless, low-quality pig iron.

This event in many ways typifies the Chinese way- that if millions of people must suffer in order for things to work out better in the long run and on the numbers sheet, then so be it, partially reflecting the disregard for the value of life historically also common in Japan. China is a country that has said it would, in the event of a nuclear war, consider the death of 90% of its population acceptable losses so long as it won, a country whose main justification for the “Great Leap Forward” was to try and bring about a state of social structure & culture upon which the government could effectively impose socialism, as it tried to do during its “Cultural Revolution” in the mid-sixties. All this served to do was get a lot of people killed and plunge the country into a decade of absolute chaos; it all but destroyed China’s education system and, despite reaffirming Mao’s godlike status (partially thanks to an intensification of his personality cult), some of his actions rather shamed the governmental high-ups, forcing the party to take the line that, whilst his guiding thought was of course still the foundation of the People’s Republic and entirely correct in every regard, his actions were somehow separate from it and got rather brushed under the carpet. It did help that, by this point, Mao was dead and thus unlikely to have them all hanged for daring to question his actions.

But, despite all this chaos, all the destruction and all the political upheaval (nowadays the government is still liable to arrest anyone who suggests that the Cultural Revolution was a good idea), these things shaped China into the powerhouse it is today. It may have slaughtered millions of people and resolutely not worked for 20 years, but Mao’s focus on a manufacturing economy has now started to bear fruit and give the Chinese economy a stable footing that many countries would dearly love in these days of economic instability. It may have an appalling human rights record and have presided over the large-scale destruction of the Chinese environment, but Chinese communism has allowed the government to control its labour force and industry effectively, allowing it to escape the worst ravages of the last few economic downturns and preventing internal instability. And the extent to which it has imposed itself upon the people of China for decades, forcing them into the party line with an iron fist, has allowed its controls to be gently relaxed in the modern era whilst ensuring the government’s position is secure, to an extent satisfying the criticisms of western commentators. Now, China is rich enough and positioned solidly enough to placate its people, to keep up its education system and build cheap housing for the proletariat. To an accountant, therefore, this has all worked out in the long run.

But we are not all accountants or economists- we are members of the human race, and there is more for us to consider than just some numbers on a spreadsheet. The Chinese government employs thousands of internet security agents to ensure that ‘dangerous’ ideas are not making their way into the country via the web, performs more executions annually than the rest of the world combined, and still viciously represses every critic of the government and any advocate of a new, more democratic system. China has paid an enormously heavy price for the success it enjoys today. Is that price worth it? Well, the government thinks so… but do you?

The Churchill Problem

Everybody knows about Winston Churchill- he was about the only reason that Britain’s will to fight didn’t crumble during the Second World War, his voice and speeches are some of the most iconic of all time, and his name and mannerisms have been immortalised by a cartoon dog selling insurance. However, some of his postwar achievements are often overlooked- after the war he was voted out of the office of Prime Minister in favour of a revolutionary Labour government, but he returned to office in the 50s with the return of the Tories. He didn’t do quite as well this time round- Churchill was a shameless warmonger who had nearly annihilated his own reputation during the First World War by ordering a disastrous assault on Gallipoli in Turkey, and didn’t do much to help it by insisting that everything between the two wars was an excuse for another one- but it was during this time that he made one of his least-known but most interesting speeches. In it he envisaged a world in which the rapidly accelerating technological advancement of his age would cause most of the meaningful work to be done by machines, changing our concept of the working week. He suggested that we would one day be able to “give the working man what he’s never had – four days’ work and then three days’ fun”- basically, Winston Churchill was the first man to suggest the concept of a three-day weekend.

This was at a time when the very concept of the weekend itself was actually a very new one- the original idea of one part of the week being dedicated to not working comes, of course, from the Sabbath days adopted by most religions. The idea of no work being done on a Sunday is, in the Western and therefore historically Christian world, an old one, but the idea of expanding it to Saturday as well is far newer. This was partly motivated by the increased proportion and acceptance of Jewish workers, whose day of rest fell on Saturday, and was also part of a general trend of decreasing work hours during the early 1900s. It wasn’t until 1938 that the five-day working week was ratified in US law, and it appeared to be the start of a downward trend in working hours as trade unions gained power, workers got more free time, and machines did all the important stuff. All of this appeared to lead to Churchill’s promised world- a world of the four-day working week and perhaps, one day, a total lap of luxury whilst we let computers and androids do everything.

However, recently things have started to change. The trend of shortening working hours and an increasingly stressless existence has been reversed, with the average working week getting dramatically longer- since 1970, the number of hours worked per capita has risen by 20%. A survey done a couple of winters ago found that we spend an average of only 15 hours and 17 minutes of our weekend out of the work mindset (between 12:38am and 3:55pm on Sunday, when we start worrying about Monday again), and that over half of us are too tired to enjoy our weekends properly. Given that this was a survey conducted by a hotel chain it may not be an entirely representative sample, but you get the idea. The weekend itself is in some ways under threat, and Churchill’s vision is disappearing fast.

So what’s changed since the 50’s (other than transport, communications, language, technology, religion, science, politics, the world, warfare, international relations, and just about everything else)? Why have we suddenly ceased to favour rest over work? What the hell is wrong with us?

To an extent, some of the figures are anomalous- employment of women has increased drastically in the last 50 years, and as such so has the percentage of the population in employment. But this is not enough to explain away all of the stats relating to ‘the death of the weekend’. Part of the issue is judgemental. Office environments can be competitive places, and can quickly develop into mindsets where our emotional investment is in the compiling of our accounts document or whatever. In such an environment, people’s priorities become more focused on work, and somebody taking an extra day off at the weekend would just seem like laziness- especially to the boss, who has deadlines to meet, really doesn’t appreciate slackers, and has control of your salary. We also, of course, judge ourselves, unwilling to feel as if we are letting the team down and causing other people inconvenience. There’s also the problem of boredom- as any schoolchild will tell you, the first few days of holiday after a long term are blissful relaxation, but it’s only a matter of time before a parent hears that dreaded phrase: “I’m booooooored”. The same thing can be said to apply to having nearly half your time off every single week. But these are features of human nature, which certainly hasn’t changed in the past 50 years, so what could the root of the change in trends be?

The obvious place to start when considering this is the changes in work over this time. The last half-century has seen Britain’s manufacturing economy spiral downwards, as more and more of us lay down tools and pick up keyboards- the current ‘average job’ for a Briton involves working in an office somewhere. Probably in Sales, or Marketing. This kind of job chiefly involves working our minds, crunching numbers and thinking through figures, making it far harder for us to ‘switch off’ from our work mentality than if our work were centred on how much our muscles hurt. It also makes it far easier to justify staying for overtime and to ‘just finish that last bit’, partly because not being physically tired makes it easier and partly because the kind of work given to an office worker is more likely to be centred around individual mini-projects than simply punching rivets or controlling a machine for hours on end. And of course, as some of us start to stay for longer, so our competitive instinct causes the rest of us to do so as well.

In the modern age, switching off from the work mindset has been made even harder by the invention of the laptop and, especially, the smartphone. The laptop allowed us to check our emails or work on a project at home, on a train or wherever we happened to be- the smartphone has allowed us to keep in touch with work at every single waking moment of the day, making it very difficult for us to ‘switch work off’. It has also made it far easier to work at home, which for the committed worker can make it even harder to formally end the day when there are no colleagues or bosses telling you it’s time to go home. This spread of technology into our lives is thought to raise levels of dopamine, a sort of natural pick-me-up chemical the brain releases in response to constant stimulation, which in excess can frazzle our pre-frontal cortex and leave someone feeling drained and unfocused- obvious signs of being overworked.

Then there is the issue of competition. In the past, competition in industry would usually have been limited to a few other firms in the local area- in the grand scheme of things, this could perhaps be scaled up to cover an entire country. The existence of trade unions helped prevent this competition from causing problems- if everyone is desperate for work, as occurred with depressing regularity during the Great Depression in the USA, they keep trying to offer their services as cheaply as possible to try and bag the job, but if a trade union can be used to settle and standardise prices then this effect is halted. However, in the current age of everywhere being interconnected, competition in big business can come from all over the world. To guarantee that they keep their jobs, people have to work as hard as they can for as long as they can, lengthening the working week still further. Since trade unions are generally limited to a single country, their powers in this situation are rather limited.

So, that’s the trend as it is- but is it feasible that we will ever live the life of luxury, with robots doing all our work, that seemed the summit of Churchill’s thinking? In short: no. Whilst a three-day weekend is perhaps not too unfeasible, I just don’t think human nature would allow us to laze about all day, every day for the whole of our lives and do absolutely nothing with them, if only for the reasons explained above. Plus, constant rest would simply desensitise us to it, rest becoming so normal that we could no longer envisage the concept of work at all. Thus, all the stresses that were once taken up with work worries would simply be transferred to ‘rest worries’, leaving us no happier after all and defeating the purpose of having all the rest in the first place. In short, we need work to enjoy play.

Plus, if robots ran everything and nobody worked them, it’d only be a matter of time before they either all broke down or took over.