Alternative Marketing

Extra Credits is one of my favourite online productions: what started out as a couple of animated lectures on videogames as art, written by then-student Daniel Floyd and posted on YouTube, has now attracted a huge fan base of gamers wishing to better understand videogames as a form of artistic media. Nowadays the show is hosted by Floyd, utilises the art services of LeeLee Scaldaferri and Scott deWitt, and its content comes straight from the mind of James Portnow, one of the videogame industry’s leading lights when it comes to advancing videogames as a respected form of media and art. It provides intelligent yet easy-to-understand discussion on a topic too frequently ignored and trivialised by gamers and the general public alike, and its existence is a boon to the gaming world.

However, a while back they produced an episode that I found particularly interesting. Creative Assembly, the developers behind the hugely successful Total War franchise, apparently had some money left over in the marketing budget for their latest game, Total War: Rome II, and offered to subcontract the Extra Credits team (with their old art maestro Allison Theus) to make a few episodes about the Punic Wars, possibly the single most crucial series of events in Rome’s rise to power. They weren’t asked to mention the Total War franchise or Rome II at all, or even so much as allude to videogames; just to make some short historical lectures in the engaging style that has made them so successful. The only reason I know of this origin story is because they deliberately chose to mention it in their intro.

As a marketing tactic, hiring somebody to not talk about the content of your game is a somewhat strange one, at least on the surface of it, but when one works backwards from the end-goal of marketing, Creative Assembly’s tactic starts to seem more and more clever. The final aim of games marketing is, of course, to make more people buy your game, which generally takes one of two forms: the creation, expansion and maintenance of a core fanbase who will always buy your game and will do their own viral marketing for you, and the attraction of buyers (both new and returning) outside this core bracket. The former area is generally catered for by means of convention panels, forums, Facebook groups and such, whilst the latter is what we are interested in right now.

Generally, attempting to attract ‘non-core’ buyers in the gaming world takes the form of showing off big, flashy adverts and gameplay demonstrations, effectively saying ‘look at all the stuff our game can do!’ amidst various bits of marketing jargon. However, gameplay features alone aren’t everything, and there is a growing body of evidence to suggest that, for many gamers (compulsive Call of Duty players perhaps being an exception), story is just as important a consideration in their games as gameplay features. For a game such as the Total War series, where there is no predefined story and a distinct lack of character interaction*, one might think that this consideration becomes irrelevant, but it nonetheless demonstrates a key point: the core motivation of videogame players is frequently not concerned with the gameplay features that form the bulk of most marketing material.

For a Total War game, the key motivating factor is based around a power fantasy: the dream of the player controlling the entire world at the head of one of the greatest empires in history and of winning epic battles against great odds. From here we can dissect the motivation further: the thrill of victory in some great, decisive battle against your nemesis comes not just from the victory itself, but also from the idea of the player’s skill allowing them to outsmart the enemy and overcome no doubt overwhelming odds. The dream of dominion over all Europe and beyond is partly satisfying for the sense of power it alone generates, but this sense of achievement is enhanced when one knows it is being played out against some great historical background, full of its own great stories, giving it context and allowing it to carry even more weight. In Rome II, for example, you have the option to emulate or even surpass the achievements of the mightiest Roman generals and emperors, placing yourself on a par with Scipio and the various Caesars, or alternatively you can play as another faction and overcome what history tells us is one of the greatest empires and most unstoppable military forces ever to walk the earth. You can literally change the course of history.

One might ask, therefore, why marketeers don’t focus more on these aspects of the games, and to an extent they do: adverts for games such as the Total War franchise are frequently filled with inspiring messages along the lines of ‘Lead your nation to victory!’ or ‘Crush all who dare oppose you!’. But the very format of an advert makes really delivering on this historical power fantasy difficult; with screen time expensive and thus at a premium, there is little room to wax lyrical about any great history or to debate military tactics. A convention panel or gameplay demo can go a little further, but the usefulness of these is limited, since most of the people who attend will be fans of the series anyway; their main focus is community-building. And that’s where Extra Credits come in.

What Creative Assembly have realised is that Extra Credits have a large audience of gamers who are already well-indoctrinated with the concept of buying games (as some advert-viewers may not be) and who think deeply enough about their games that flashy adverts are unlikely to impress them as much as they might some audiences. Thus, to persuade members of the EC audience to buy the game, they need to sell them on the core appeal of the campaign: the epic history surrounding the game and your chance to manipulate it. So they came up with the idea of simply educating the gaming world about this amazing piece of history, getting people interested in it and making them want to explore it through games, their favourite form of media. The Punic Wars, too, are a masterful choice of subject matter: once commonly taught in schools (meaning there’s a pretty decent body of work analysing them to draw upon), they fell out of favour as Latin and other features of classical education began to drop out of the school system, meaning the majority of the population are unfamiliar with this epic tale of warfare on the grandest of scales. Given how relatively cheap and simple a technique it is, since it lets others do most of the legwork for you, it’s a truly masterful piece of marketing. And I’m not just saying that because it’s resulted in a video I like.

*I didn’t mention it in the main post because it disrupts the flow, but even without a preset story, grand strategy games most certainly have a narrative. Indeed, the self-made stories of beating down a simultaneous rebellion and foreign invasion, and in the process gaining the moniker of ‘the Great’, are one of the main things that make me enjoy playing Crusader Kings II. There’s an entire post’s worth of discussion to be had on the subject of videogames’ potential for fluid, non-linear storytelling, but that’s for another time.

The Value of Transparency

Once you start looking for it, it can be quite staggering to realise just how much of our modern world is, quite literally, built on glass. The stuff is manufactured in vast quantities, forming our windows, lights, screens and skyscrapers, among countless other uses. Some argue that it is even responsible for the entire development of the world as we know it, particularly in the west; it’s almost a wonder we take it so for granted.

Technically, our commonplace use of the word ‘glass’ rather oversimplifies the term; glasses are in fact a family of materials that all exhibit the same amorphous structure and behaviour under heating whilst not actually all being made from the same stuff. The member of this family that we are most familiar with, and will commonly refer to as simply ‘glass’, is soda-lime glass, made predominantly from silicon dioxide (silica) with a few other additives to make it easier to produce. But I’m getting ahead of myself; let me tell the story from the beginning.

Like all the best human inventions, glass was probably discovered by accident. Archaeological evidence suggests glassworking was probably an Egyptian invention in around the third millennium BC, Egypt (or somewhere nearby) being just about the only place on earth at the time where the three key ingredients needed for glass production occurred naturally and in the same place: silicon dioxide (aka sand), sodium carbonate (aka soda, frequently found as a mineral or in plant ashes) and a relatively civilised group of people capable of building a massive great fire. When Egyptian metalworkers accidentally got sand and soda into their furnaces, they found upon emptying them that the two had fused to form a hard, semi-transparent, almost alien substance: the first time glass had been produced anywhere on earth.

This type of glass was far from perfect; for one thing, adding soda has the unfortunate side-effect of making silica glass water-soluble, and for another, they couldn’t yet work out how to make the glass clear. Then there were the problems that came with trying to actually make anything from the stuff. The only glass-forming technique at the time was core forming, a moderately effective but rather labour-intensive process illustrated well in this video. Whilst good for small, decorative pieces, it became exponentially more difficult to produce an item by this method the larger it needed to be, not to mention that it couldn’t produce flat sheets of glass for use as windows or the like.

Still, onwards and upwards and all that, and developments were soon being made in the field of glass technology. Experimentation with various additives soon yielded the discovery that adding lime (calcium oxide), plus a little aluminium and magnesium oxide, made soda glass insoluble, and thus modern soda-lime glass was born. In the first century BC, an even more significant development came along with the discovery of glass blowing as a production method. Glass blowing was infinitely more flexible than core forming, opening up an entirely new avenue for glass as a material, but crucially it allowed glass products to be made faster, and thus more cheaply, than pottery equivalents. By this time, the Eastern Mediterranean coast where these discoveries took place was part of the Roman Empire, and the Romans took to glass like a dieter to chocolate; glass containers and drinking vessels spread across the Empire from the glassworks of Alexandria, and that was before they discovered that manganese dioxide could produce clear glass, suddenly making it suitable for architectural work.

Exactly why glass took off on quite such a massive scale in Europe yet remained little more than a crude afterthought in the east and China (the other great superpower of the age) is somewhat unclear. Pottery remained the material of choice throughout the far east, and they got very skilled at making it too; there’s a reason we in the west today call exceptionally fine, high-quality pottery ‘china’. I’ve only heard one explanation for why this should be so, and it centres around alcohol.

Both the Chinese and Roman empires loved wine, but did so in different ways. To the Chinese, alcohol was a deeply spiritual thing, and played an important role in their religious procedures. This attitude was not unheard of in the west (the Egyptians, for example, believed the god Osiris invented beer, and both Greeks and Romans worshipped a god of wine), but the Roman Empire thought of wine in a secular as well as a religious sense; in an age where water was often unsafe to drink, wine became the drink of choice for high society in all situations. One of the key features of wine to the Romans was its appearance, hence why the introduction of clear vessels allowing them to admire its colour was so attractive to them. By contrast, the Chinese day-to-day drink of choice was tea, whose appearance was of far less importance than the ability of its container to dissipate heat (something fine china is very good at). The introduction of clear drinking vessels would, therefore, have met with only a limited market in the east, and hence it never really took off. I’m not entirely sure that this argument holds up under scrutiny, but it’s quite a nice idea.

Whatever the reason, the result was unequivocal; only in Europe was glassmaking technology used and advanced over the years. Stained glass was one major discovery, and crown glass (a method for producing large, flat sheets) another. However, the crucial developments would be made in the early 14th century, not long after the Republic of Venice (already a centre for glassmaking) ordered all its glassmakers to move out to the island of Murano to reduce the risk of fire (which does seem ever so slightly strange for a city founded, quite literally, on water). On Murano, the local quartz pebbles offered glassmakers silica of hitherto unprecedented purity which, combined with exclusive access to a source of soda ash, allowed for the production of exceptionally high-quality glassware. The Murano glassmakers became masters of the art, producing glass products of astounding quality, and from here onwards the technological revolution of glass could begin. The Venetians worked out how to make lenses, in turn allowing for the invention of the telescope (on which Galileo’s famous observations depended) and spectacles (extending the working lifespan of scribes and monks across the western world). The widespread introduction of windows (as opposed to fabric-covered holes in the wall) to many houses, particularly in the big cities, dramatically improved the health of their occupants by both keeping houses warmer and helping keep out disease. Perhaps most crucially, the production of high-quality glass vessels was not only to revolutionise biology, and in turn medicine, as a discipline, but to almost single-handedly create the modern science of chemistry, itself the foundation stone upon which most of modern physics is based.
These discoveries would all, given enough time and quite a lot of social upheaval, pave the way for the massive technological advancements that would characterise the western world in the centuries to come, and which would finally allow the west to take over from the Chinese and Arabs as the world’s leading technological superpower.* Nowadays, of course, glass has been taken even further, being widely used as a building material (its strength-to-weight ratio far exceeds that of concrete, particularly when made to ‘building grade’ standard), in televisions, and in the fibre-optic cables that may yet revolutionise our communications infrastructure.

Glass is, of course, not the only thing to have catalysed the technological breakthroughs that were to come; similar arguments have been made regarding gunpowder and the great social and political changes that were to grip Europe between roughly 1500 and 1750. History is never something one can pin on a single cause (the Big Bang excepted), but glass was undoubtedly significant in the western world’s rise to prominence during the second half of the last millennium, and the Venetians probably deserve a lot more credit than they get for creating our modern world.

*It is probably worth mentioning that China is nowadays the world’s largest producer of glass.

Time is an illusion, lunchtime doubly so…

In the dim and distant past, time was, to humankind, a thing and not much more. There was light-time, then there was dark-time, then there was another lot of light-time; during the day we could hunt, fight, eat and try to stay alive, and during the night we could sleep and have sex. However, we also realised that there were some parts of the year with short days and colder nights, and others that were warmer, brighter and better for hunting. Being the bright sort, we humans realised that the amount of time spent in winter, spring, summer and autumn (fall is the WRONG WORD) was about the same each time around, and thought that rather than just waiting for it to warm up every time, we could count how long one cycle (or year) took, so that we could work out when it was going to get warm next year. This enabled us to plan our hunting and farming patterns, and it became recognised that some knowledge of how the year worked was advantageous to a tribe. Eventually, this got so important that people started building monuments to the annual seasonal progression, hence such weird and staggeringly impressive prehistoric engineering achievements as Stonehenge.

However, this basic understanding of the year and the seasons was only one step on the journey, and as we moved from a hunter-gatherer paradigm to more of a civilised existence, we realised the benefits that a complete calendar could offer us, and thus began our still-continuing quest to quantify time. Nowadays our understanding of time extends to clocks accurate to the nanosecond and an understanding of relativity, but for a long time our greatest foray into the realm of bringing organised time into our lives was the creation of the concept of the week.

Having seven days in the week is, to begin with, a strange idea; seven is an awkward prime number, and it seems odd that we don’t pick a number that is easier to divide and multiply by, like six, eight or even ten, as the basis for our temporal system. Six would seem to make the most sense: most of our months have around 30 days, or five six-day weeks, and 365 days a year is only one less than a multiple of six, which could surely carry some sort of religious symbolism (and on leap years it would be an exact multiple; even better). And it would mean a shorter week, and more time spent on the weekend, which would be really great. But no, we’re stuck with seven, and it’s all the bloody moon’s fault.

Y’see, the sun’s daily cycle is useful for measuring short-term time (night and day), and the earth’s orbit around it provides the crucial yearly change of season. However, the moon’s cycle is 28 days long (fourteen to wax, fourteen to wane, regular as clockwork), providing a nice intermediary time unit with which to divide up the year into a more manageable number of pieces than 365. Thus, we began dividing the year up into ‘moons’ and using them as a convenient reference that we could check every night. However, even a moon cycle is a bit long for day-to-day scheduling, and it proved advantageous for our distant ancestors to split it up even further. Unfortunately, 28 is an awkward number to divide into pieces, and its only factors (besides itself) are 1, 2, 4, 7 and 14. An increment of one or two days is simply too small to be useful, and a four-day ‘week’ isn’t much better. A fourteen-day week would hardly be an improvement on 28 for scheduling purposes, so seven is the only number of a practical size for the length of the week. The fact that months are now mostly 30 or 31 days rather than 28, to accommodate the awkward fact that there are 12.36 moon cycles in a year, hasn’t changed matters, so we’re stuck with an awkward seven-day cycle.
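If you want to check the arithmetic yourself, a couple of lines will do it (Python here, purely as illustration):

```python
# List every whole number that divides the 28-day lunar cycle evenly.
divisors = [n for n in range(1, 29) if 28 % n == 0]
print(divisors)  # [1, 2, 4, 7, 14, 28]
```

Strike out the uselessly small options (1, 2, 4) and the uselessly large ones (14, 28), and seven is all that’s left.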

However, this wasn’t the end of the issue for the historic time-definers (for want of a better word); there’s not much advantage in defining a seven-day week if you can’t then define which day of said week you want the crops to be planted on. Therefore, different days of the week needed names for identification purposes, and since astronomy had already provided our daily, weekly and yearly time structures, it made sense to look skyward once again when searching for suitable names. At this time, centuries before the invention of the telescope, we only knew of seven planets, those celestial bodies that could be seen moving with the naked eye: the sun, the moon (yeah, their definition of ‘planet’ was a bit iffy), Mercury, Venus, Mars, Jupiter and Saturn. It might seem to make sense, with seven planets and seven days of the week, to just name the days after the planets in a random order, but humankind never does things so simply, and the process of picking which day got named after which planet was a complicated one.

In around 1000 BC the Egyptians had decided to divide the daylight into twelve hours (because they knew how to pick a nice, easy-to-divide number), and the Babylonians then took this a stage further by dividing the entire day, including night-time, into 24 hours. The Babylonians were also great astronomers, and had thus discovered the seven visible planets- however, because they were a bit weird, they decided that each planet had its place in a hierarchy, and that this hierarchy was dictated by which planet took the longest to complete its cycle and return to the same point in the sky. This order was, for the record, Saturn (29 years), Jupiter (12 years), Mars (687 days), Sun (365 days), Venus (225 days), Mercury (88 days) and Moon (28 days). So, did they name the days after the planets in this order? Of course not, that would be far too simple; instead, they decided to start naming the hours of the day after the planets (I did say they were a bit weird) in that order, going back to Saturn when they got to the Moon.

However, 24 hours does not divide nicely by seven planets, so the planet after which the first hour of the day was named changed each day. The first hour of the first day of the week was named after Saturn, the first hour of the second day after the Sun, and so on. Since the list repeated itself each week, the Babylonians decided to name each day after the planet that gave its name to the day’s first hour, and so we got Saturnday, Sunday, Moonday, Marsday, Mercuryday, Jupiterday and Venusday.
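If you don’t fancy tabulating 168 hours by hand, the whole scheme fits in a few lines of code (my own Python reconstruction of the method described above, obviously not anything the Babylonians wrote down):

```python
# The seven 'planets' in the Babylonian hierarchy, longest cycle first.
PLANETS = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

def day_names():
    """Name each day after the planet assigned to its first hour."""
    names = []
    for day in range(7):
        hours_elapsed = day * 24       # hours since the first hour of day 0
        names.append(PLANETS[hours_elapsed % 7])
    return names

print(day_names())
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
```

Because 24 leaves a remainder of 3 when divided by 7, the day-ruling planet jumps three places down the list each day, which is exactly why the order of the week doesn’t match the order of the hierarchy.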

Now, you may have noticed that these are not the days of the week we English speakers are exactly used to, and for that we can blame the Vikings. The planetary method for naming the days of the week was brought to Britain by the Romans, and when they left the Britons held on to the names. However, Britain then spent the next 7 centuries getting repeatedly invaded and conquered by various foreigners, and for most of that time it was the Germanic Vikings and Saxons who fought over the country. Both groups worshipped the same gods, those of Norse mythology (so Thor, Odin and so on), and one of the practices they introduced was to replace the names of four days of the week with those of four of their gods; Tyr’sday, Woden’sday (Woden was the Saxon word for Odin), Thor’sday and Frig’sday replaced Marsday, Mercuryday, Jupiterday and Venusday in England, and soon the fluctuating nature of language renamed the days of the week Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday and Friday.

However, the old planetary names remained in the Romance languages (the French days Tuesday to Friday, for instance, are Mardi, Mercredi, Jeudi and Vendredi), with one small exception. When the Roman Empire went Christian in the fourth century, the Ten Commandments dictated they remember the Sabbath day; but, to avoid copying the Jews (whose Sabbath was on Saturday), they chose to make Sunday the Sabbath day. It is for this reason that Monday, the first day of the working week after one’s day of rest, became the start of the week, taking over from the Babylonians’ choice of Saturday, but close to Rome they went one stage further and renamed Sunday ‘dies Dominica’, the Day of the Lord. The practice didn’t catch on in Britain, thousands of miles from Rome, but the modern-day Spanish, French and Italian words for Sunday are domingo, dimanche and domenica respectively, all of which are locally corrupted forms of ‘dies Dominica’.

This is one of those posts that doesn’t have a natural conclusion, or even much of a point to it. But hey; I didn’t start writing this because I wanted to make a point, but more to share the kind of stuff I find slightly interesting. Sorry if you didn’t find it so.


Cryptography is a funny business: shady from the beginning, the whole world of codes and ciphers has been specifically designed to hide your intentions and let you move in the shadows, unnoticed. However, the art of cryptography has changed almost beyond recognition in the last hundred years thanks to the invention of the computer, and what was once an art limited by the imagination of the nerd responsible has now turned into a question of sheer computing might. But, as always, the best place to start with this story is at the beginning…

There are two different methods of applying cryptography to a message: with a code or with a cipher. A code is a system involving replacing words with other words (‘Unleash a fox’ might mean ‘Send more ammunition’, for example), whilst a cipher involves changing individual letters and their ordering. Use of codes is generally limited to a few phrases that can be easily memorised, and/or requires endless cross-referencing against a book of known ‘translations’, as well as being relatively insecure when it comes to highly secretive information. Therefore, most modern encoding (yes, that word is still used; ‘enciphering’ sounds stupid) takes the form of ciphers, and has done for hundreds of years; they rely solely on the application of a simple rule, require far smaller reference manuals, and are more secure.

Early attempts at ciphers were charmingly simple; the ‘Caesar cipher’ is a classic example, famously used by Julius Caesar, in which each letter is replaced by the one three along from it in the alphabet (so A becomes D, B becomes E and so on). Augustus Caesar, who succeeded Julius, didn’t set much store by cryptography and used a similar system, although with only a one-place shift (so A to B and such), despite the fact that knowledge of the Caesar cipher was widespread; his messages were hopelessly insecure. These ‘substitution ciphers’ suffer from a common problem: the relative frequency with which certain letters appear in the English language (E being the most common, followed by T) is well-known, so by analysing the frequency of letters occurring in a substitution-enciphered message one can work out fairly accurately which letter corresponds to which, and work out the rest from there. This problem can be partly overcome by careful phrasing and by using only short messages, but it’s nonetheless a problem.
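The Caesar cipher is simple enough to fit in a few lines of Python (a sketch of the general idea, not any historical implementation):

```python
def caesar(text, shift):
    """Shift each letter of `text` by `shift` places, wrapping at Z."""
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return ''.join(result)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # shifting back by 3 decrypts it
```

Notice that every A in the message always becomes the same ciphertext letter; that regularity is precisely what makes frequency analysis so effective against substitution ciphers.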

Another classic method is the transposition cipher, which changes the order of letters; the trick lies in having a suitable ‘key’ with which to do the reordering. A classic example is to write the message in a rectangle of a size known to both encoder and recipient, writing in columns but ‘reading it off’ in rows; the recipient can then reverse the process to read the original message. This is a nice method, and it’s very hard to decipher a single message encoded this way, but if the ‘key’ (e.g. the size of the rectangle) is not changed regularly then one’s adversaries can figure it out after a while. The army of ancient Sparta used a kind of transposition cipher based on a tapered wooden rod called a skytale (pronounced skih-tah-ly), around which a strip of paper was wrapped and the message written down it, one letter on each turn of the paper. The recipient then wrapped the paper around a skytale of identical girth and taper (the tapering prevented the letters being evenly spaced, making the cipher harder to break), and read the message off. Again, a nice idea, but the need to make a new set of skytales for everyone every time the key needed changing rendered it impractical. Nonetheless, transposition ciphers are a nice idea, and the Union used them to great effect during the American Civil War.
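The rectangle method sketches out neatly in code too (again a toy of my own devising, padding with X to fill out the grid):

```python
def encode(msg, rows):
    """Write `msg` down the columns of a grid, read it off along the rows."""
    cols = -(-len(msg) // rows)           # ceiling division
    msg = msg.ljust(rows * cols, 'X')     # pad so the rectangle is full
    return ''.join(msg[c * rows + r] for r in range(rows) for c in range(cols))

def decode(cipher, rows):
    """Reverse the process: write along the rows, read down the columns."""
    cols = len(cipher) // rows
    return ''.join(cipher[r * cols + c] for c in range(cols) for r in range(rows))

secret = encode("WEAREDISCOVERED", 5)
print(secret)              # WDVEIEASRRCEEOD
print(decode(secret, 5))   # WEAREDISCOVERED
```

Note that the two parties only need to agree on the height of the rectangle, which serves as the ‘key’.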

In the last century, cryptography has developed into even more of an advanced science, and most modern ciphers are built on the concept of substitution; however, to avoid the problem of letter-frequency analysis, they use intricate and elaborate systems to change by how much the ‘value’ of each letter shifts each time. The German Lorenz cipher machine used during the Second World War (whose breaking I have discussed in a previous post) involved putting the message through a series of wheels and electronic pickups to produce another letter; but the wheels moved on one click after each letter was typed, totally changing the internal mechanical arrangement. The only way the British cryptographers working against it could find to solve it was through brute force, designing a computer specifically to test every single possible starting position for the wheels against likely messages. This generally took them several hours; but if they had had a computer as powerful as the one I am typing on then, provided it was set up in the correct manner, it would have had the raw power to ‘solve’ the day’s starting positions within a few minutes. Such is the power of modern computers, and against such opponents must modern cryptographers pit themselves.

One technique used nowadays presents a computer with a number that is simply too big for it to deal with; ciphers built this way are called ‘trapdoor ciphers’. The principle is relatively simple: it is far easier to find that 17 x 19 = 323 than it is to find the prime factors of 323, even with a computer, so if we upscale this business to huge numbers, a computer will whimper and hide in the corner just looking at them. If we take two prime numbers, each more than 100 digits long (this is, by the way, the source of the oft-quoted story that the CIA will pay $10,000 to anyone who finds a prime number of over 100 digits, due to its intelligence value), and multiply them together, we get a vast number with only two prime factors, which we shall, for now, call M. Then, we convert our message into number form (so A=01, B=02, I LIKE TRAINS=0912091105201801091419) and raise the resulting number to the power of a third (smaller; three digits will do) prime number. This will yield a number somewhat bigger than M, and successive lots of M are then subtracted from it until it reaches a number less than M (this is known as modulo arithmetic, and is best visualised by example: 19+16=35, but 19+16 (mod 24)=11, since 35-24=11). This number is then passed to the intended recipient, who can decode it relatively easily (well, so long as they have a correctly programmed computer) if they know the two prime factors of M. (This business is actually known as the RSA problem, and for reasons I cannot hope to understand, current mathematical thinking suggests that finding the prime factors of M is the easiest way of solving it; however, this has not yet been proven, and the matter is still open for debate.) However, even if someone trying to decode the message knows M and has the most powerful computer on earth, it would take them thousands of years to find out what its prime factors are.
To many, trapdoor ciphers have made cryptanalysis (the art of breaking someone else’s codes) a dead art.
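The whole trapdoor business can be played out with comically small primes; here’s a toy version in Python (real systems use primes hundreds of digits long, and these particular numbers are just my example):

```python
p, q = 17, 19    # the two secret primes
M = p * q        # the public modulus, 323
e = 5            # the public exponent (shares no factors with (p-1)(q-1))

# The decoding exponent d is easy to find only if you know p and q.
phi = (p - 1) * (q - 1)      # 288
d = pow(e, -1, phi)          # modular inverse; needs Python 3.8+

msg = 42                     # a message already converted to a number below M
cipher = pow(msg, e, M)      # encode: msg^e, reduced mod M
plain = pow(cipher, d, M)    # decode: cipher^d, reduced mod M

print(cipher, plain)         # 264 42 -- the message comes back intact
```

Anyone can encode using only the public M and e; only someone who knows the factors 17 and 19 can feasibly work out d and decode.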

Man, there’s a ton of cool crypto stuff I haven’t even mentioned yet… screw it, this is going to be a two-parter. See you with it on Wednesday…

Money- what the &*$!?

Money is a funny old thing- the cornerstone of our way of existence, the bedrock of modern-day life, and the cause of, and solution to, 99% of all life’s problems. But… well, why? When you think about it, money doesn’t actually mean anything- it is an arbitrary creation brought in for convenience’s sake, and yet it as an entity has spiralled into so much more than a mere tool. Now how on earth did that happen?

Before about two and a half thousand (ish) years ago, money just about didn’t exist. To the best of my knowledge, coinage only became commonplace in Europe with the rise of the Roman Empire; indeed, when the Romans left Britain in the 5th century AD, much of the country went back to simply bartering: trading goods and services for other goods and services. This began to change as time went on, however, and by the time of William the Conqueror’s invasion, the monetary system was firmly established across Europe. Coins were a far more efficient system than bartering; trading things for one another is a highly subjective process, and it can be hard to get a sense of value and of the extent to which you are being ripped off. By giving everything a fixed, arbitrary value (i.e. a price), everything suddenly had a value relative to everything else. More importantly, this allowed goods and services to be traded for the potential to buy more goods and services of equal value in coin form, rather than for the things themselves, which was both easier and more efficient (there was now no risk of carrying a lampshade to the supermarket to exchange for a pint of milk, because a wallet is far easier to carry). The idea of money representing the potential to buy things can be seen by anyone looking at a British note, where it reads “I promise to pay the bearer on demand the sum of…” however many pounds (this is in fact a callback to the days when banks stuck to the gold standard, when you could theoretically walk into the Bank of England and ask for five pounds’ worth of gold for your fiver).

However, with coins representing potential to buy things, they instantly took on a value of their own, and here things start to get confusing. Because, when money itself takes on a value, it instantly becomes a commodity just like any other- just as people trade in gold bullion, oil and bits of companies, so people trade with money itself. And this… actually, I’ve got ahead of myself- let me take a step backward.

The input of human effort can be used to increase the value of various bits of the world we live in. For example- a heap of planks may be bought for £50 from a sawmill, but once you have gone home and spent 6 hours swearing at a hammer, you may now have a bench or something worth £500 or more. The materials themselves have not changed, but since a bench is more useful, better looking and better appreciated by people than a few planks, people set more value by it- and because people set more value by it, it is worth more money.
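The bench arithmetic above can be sketched in a couple of lines of Python- the figures are just the ones from the example, and the function is mine, not any real economic model:

```python
# Toy sketch of the value-added idea: human effort turns cheap
# raw materials into a more valuable product.

def value_added(raw_material_cost: float, final_value: float) -> float:
    """Value created by the effort that turned materials into a product."""
    return final_value - raw_material_cost

planks = 50   # £50 heap of planks from the sawmill
bench = 500   # £500 bench after 6 hours of swearing at a hammer

print(value_added(planks, bench))  # → 450
```

Those 6 hours of swearing, in other words, conjured £450 of value out of thin air (well, out of effort).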

This, at a base level, is how the economic system works- human effort is used to turn raw materials, which we don’t want, into products, which we do. Because people want these products, they pay money for them, and because they need this money to pay for them, they get a job. Because they are providing human effort to their boss (which itself has a value for its ability to raise the value of raw materials), their boss pays them the money they need. The boss gets the money he uses to pay his employees from selling things to people, which makes money because the human effort put in to make his final product raises the value of his final product above that of the raw materials he bought in order to make it- and thus we are back at the beginning of the cycle.

If we study this process, we can see that the only way the boss can make any money out of it is if the value of his final product (F) is greater than the value of its raw materials (M) plus however much he pays his employees for the effort they input (E)- i.e. F > M + E. However, pretty much by definition, F should equal M + E- thus the only way he can make money is by paying his workers less than their human effort is actually worth in the context of the product (a communist would seize on this as evidence of corporations exploiting the masses, but I refuse to go into that argument here- it is far too messy). This is the only way that any money actually gets produced in an economy, and the result is inflation. If inflation did not exist, then the only way anyone could make any money would be by spending less- but this automatically means that somebody else will not be getting your money, and so will be losing some. Thus inflation is vital to ensure that everybody in an economy gains money, and although this does lead to the gentle devaluation of currency, it allows the human race to stay one step ahead of a potential vicious cycle of decline- and inflation can only be generated by an economy manufacturing things.
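That F > M + E condition is simple enough to check with numbers. Here is a minimal sketch- the figures are made up, picked to match the bench example, and the function is purely illustrative:

```python
# The boss's profit condition: he only makes money when the final
# product's value F exceeds the raw materials M plus the effort E he
# pays his workers for, i.e. when F > M + E.

def profit(F: float, M: float, E: float) -> float:
    """The boss's take: final value minus materials minus wages."""
    return F - (M + E)

# If F equals M + E exactly (the "by definition" case), profit is zero:
print(profit(500, 50, 450))  # → 0

# Pay the workers less than their effort is worth (E = 400 rather
# than 450) and a profit appears:
print(profit(500, 50, 400))  # → 50
```

The £50 profit in the second case is exactly the £50 shortfall between what the workers' effort was worth and what they were paid- which is the whole point of the paragraph above.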

But why do we need our level of money to continually rise? Well, imagine you have a steak worth £5 (it’s just an example, don’t judge me on my figures). When you eat that steak, something of value £5 has been turned into the contents of your gut, and ultimately into what comes out the other end- which is clearly worth a lot less than the steak. Thus, the human race consuming resources reduces the overall value of planet earth, just as making stuff increases it. Nature in fact has an inbuilt system to prevent this from turning into a cycle of endless decline- reproduction. If the cow your steak came from had had a calf, then nature has ensured that your consumption of the steak has not, in the long run, decreased the overall steak value of the world, thanks to the steak potential existing in the calf (I’ve just realised I’m making all these terms up on the fly- my apologies). I could go into the whole energy from calf <- energy from grass <- energy from sun <- universe in general etc. thing here, but that would be extrapolating the economic problem somewhat. Suffice it to say that ensuring our overall monetary value continues to rise via inflation is, from an economic perspective, our version of reproduction, balancing out our consumption of finite resources in terms of value.
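The consumption-versus-reproduction balance can be sketched the same way- again with made-up figures, and with “steak potential” being my own on-the-fly term rather than anything a real economist would recognise:

```python
# Toy illustration of consumption vs. reproduction. Eating the steak
# destroys its value; the calf's "steak potential" restores it, so the
# world's overall value breaks even in the long run.

world_value = 100.0  # arbitrary starting "overall value of planet earth"

steak = 5.0
world_value -= steak  # eating the £5 steak destroys its value...
world_value += steak  # ...but the calf's steak potential replaces it

print(world_value)  # → 100.0
```

Strip out the calf line and the world ends up £5 poorer every time someone fancies a steak- which is the endless-decline cycle that reproduction (and, in the economic analogy, inflation) is there to prevent.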

Phew- this is getting longer than I anticipated. My apologies once again for it turning into a semi-coherent ramble- I only hope you could follow it. There is still quite a lot more to get through, so I think I’ll try to wind this all up on Wednesday (after another Six Nations post Monday- COME ON ENGLAND!). If you have been able to follow all of that then congratulations- you now understand core economics. If you haven’t then also congratulations- you are sufficiently normal to not understand my way of thinking.