Time is an illusion, lunchtime doubly so…

In the dim and distant past, time was, to humankind, a thing and not much more. There was light-time, then there was dark-time, then there was another lot of light-time; during the day we could hunt, fight, eat and try to stay alive, and during the night we could sleep and have sex. However, we also realised that there were some parts of the year with short days and colder nights, and others that were warmer, brighter and better for hunting. Being the bright sort, we humans realised that the amount of time spent in winter, spring, summer and autumn (fall is the WRONG WORD) was about the same each time around, and thought that, rather than just waiting for it to warm up every time, we could count how long one cycle (or year) took, so that we could work out when it was going to get warm the following year. This enabled us to plan our hunting and farming patterns, and it became recognised that some knowledge of how the year worked was advantageous to a tribe. Eventually, this got so important that people started building monuments to the annual seasonal progression, hence such weird and staggeringly impressive prehistoric engineering achievements as Stonehenge.

However, this basic understanding of the year and the seasons was only one step on the journey, and as we moved from a hunter-gatherer paradigm to more of a civilised existence, we realised the benefits that a complete calendar could offer us, and thus began our still-continuing quest to quantify time. Nowadays our understanding of time extends to clocks accurate to within nanoseconds, and an understanding of relativity, but for a long time our greatest achievement in bringing organised time into our lives was the creation of the concept of the week.

Having seven days of the week is, to begin with, a strange idea; seven is an awkward prime number, and it seems odd that we didn’t pick a number that is easier to divide and multiply by, like six, eight or even ten, as the basis for our temporal system. Six would seem to make the most sense; most of our months have around 30 days, or five six-day weeks, and 365 days a year is only one less than a multiple of six, which could surely be some sort of religious symbolism (and there would be an exact multiple in leap years, even better). And it would mean a shorter week, and more time spent on the weekend, which would be really great. But no, we’re stuck with seven, and it’s all the bloody moon’s fault.

Y’see, the sun’s daily cycle is useful for measuring short-term time (night and day), and the earth’s orbit around it provides the crucial yearly change of season. However, the moon’s cycle is roughly 28 days long (fourteen to wax, fourteen to wane, regular as clockwork), providing a nice intermediary time unit with which to divide up the year into a more manageable number of pieces than 365. Thus, we began dividing the year up into ‘moons’ and using them as a convenient reference that we could refer to every night. However, even a moon cycle is a bit long for day-to-day scheduling, and it proved advantageous for our distant ancestors to split it up even further. Unfortunately, 28 is an awkward number to divide into pieces, and its only factors, besides itself, are 1, 2, 4, 7 and 14. An increment of 1 or 2 days is simply too small to be useful, and a 4-day ‘week’ isn’t much better. A 14-day week would hardly be an improvement on 28 for scheduling purposes, so seven is the only number of a practical size for the length of the week. The fact that months are now mostly 30 or 31 days rather than 28, to accommodate the awkward fact that there are 12.36 moon cycles in a year, hasn’t changed matters, so we’re stuck with an awkward 7-day cycle.
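For anyone who likes seeing the divisor argument written down, here’s a throwaway scrap of Python that checks it. The cut-offs for ‘too short’ and ‘too long’ are, of course, just the judgements made above, and the 28-day figure is the text’s approximation, not an astronomical constant:

```python
# A toy check of the factor argument above: which week lengths divide a
# 28-day moon cycle evenly? (Figures are the approximations used in the text.)
moon_cycle = 28
divisors = [n for n in range(1, moon_cycle + 1) if moon_cycle % n == 0]
print(divisors)  # [1, 2, 4, 7, 14, 28]
# 1 and 2 are too short to be useful, 4 not much better, and 14 or 28 are
# hardly an improvement on the full cycle, which leaves 7.
```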

However, this wasn’t the end of the issue for the historic time-definers (for want of a better term); there’s not much advantage in defining a seven-day week if you can’t then define which day of said week you want the crops to be planted on. Therefore, different days of the week needed names for identification purposes, and since astronomy had already provided our daily, weekly and yearly time structures it made sense to look skyward once again when searching for suitable names. At this time, centuries before the invention of the telescope, we knew of only seven planets, those celestial bodies that could be seen with the naked eye: the sun, the moon (yeah, their definition of ‘planet’ was a bit iffy), Mercury, Venus, Mars, Jupiter and Saturn. It might seem to make sense, with seven planets and seven days of the week, to just name the days after the planets in a random order, but humankind never does things so simply, and the process of picking which day got named after which planet was a complicated one.

In around 1000 BC the Egyptians had decided to divide the daylight into twelve hours (because they knew how to pick a nice, easy-to-divide number), and the Babylonians then took this a stage further by dividing the entire day, including night-time, into 24 hours. The Babylonians were also great astronomers, and had thus discovered the seven visible planets; however, because they were a bit weird, they decided that each planet had its place in a hierarchy, and that this hierarchy was dictated by which planet took the longest to complete its cycle and return to the same point in the sky. This order was, for the record: Saturn (29 years), Jupiter (12 years), Mars (687 days), Sun (365 days), Venus (225 days), Mercury (88 days) and Moon (28 days). So, did they name the days after the planets in this order? Of course not, that would be far too simple; instead, they decided to start naming the hours of the day after the planets (I did say they were a bit weird) in that order, going back to Saturn when they got to the Moon.

However, 24 hours does not divide nicely by seven planets, so the planet after which the first hour of the day was named changed each day; since 24 is three more than a multiple of seven, each new day’s first hour fell three places further along the list. So, the first hour of the first day of the week was named after Saturn, the first hour of the second day after the Sun, and so on. Since the pattern repeated itself exactly every seven days, the Babylonians decided to name each day after the planet that its first hour was named after, and so we got Saturnday, Sunday, Moonday, Marsday, Mercuryday, Jupiterday and Venusday.
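If you like seeing that sort of pattern made explicit, here’s a tiny Python sketch of the scheme as described above; the planet list and the ‘planet name plus day’ labelling are purely illustrative, not a claim about any historical source:

```python
# A sketch of the hour-naming scheme described above: run through the planets
# in order of decreasing period, one per hour, and name each day after the
# planet of its first hour.
CHALDEAN_ORDER = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

def day_names(num_days=7, hours_per_day=24):
    names = []
    for day in range(num_days):
        # The first hour of day `day` is hour (day * hours_per_day) overall;
        # since 24 = 3 * 7 + 3, each new day starts three planets further along.
        planet = CHALDEAN_ORDER[(day * hours_per_day) % len(CHALDEAN_ORDER)]
        names.append(planet + "day")
    return names

print(day_names())
# ['Saturnday', 'Sunday', 'Moonday', 'Marsday', 'Mercuryday', 'Jupiterday', 'Venusday']
```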

Now, you may have noticed that these are not quite the days of the week we English speakers are used to, and for that we can blame the Vikings. The planetary method for naming the days of the week was brought to Britain by the Romans, and when they left the Britons held on to the names. However, Britain then spent the next seven centuries getting repeatedly invaded and conquered by various foreigners, and for much of that time it was the Saxons and, later, the Vikings who fought over the country. Both groups worshipped broadly the same gods, those of Norse mythology (so Thor, Odin and so on), and one of the practices they introduced was to replace the names of four days of the week with those of four of their gods: Tyr’sday, Woden’sday (Woden was the Saxon name for Odin), Thor’sday and Frig’sday replaced Marsday, Mercuryday, Jupiterday and Venusday in England, and soon the fluctuating nature of language had smoothed the days of the week into Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday and Friday.

However, the old planetary names remained in the Romance languages (the French names for Tuesday to Friday, for example, are mardi, mercredi, jeudi and vendredi), with one small exception. When the Roman Empire went Christian in the fourth century, the Ten Commandments dictated that they remember the Sabbath day; but, to avoid copying the Jews (whose Sabbath was on Saturday), they chose to make Sunday the Sabbath day. It is for this reason that Monday, the first day of the working week after one’s day of rest, became the start of the week, taking over from the Babylonians’ choice of Saturday; closer to Rome they went one stage further and renamed Sunday ‘dies Dominica’, or the Day of the Lord. The practice didn’t catch on in Britain, over a thousand miles from Rome, but the modern-day Spanish, French and Italian words for Sunday are domingo, dimanche and domenica respectively, all of which are locally corrupted forms of ‘dies Dominica’.

This is one of those posts that doesn’t have a natural conclusion, or even much of a point to it. But hey; I didn’t start writing this because I wanted to make a point, but more to share the kind of stuff I find slightly interesting. Sorry if you didn’t find it so.


The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, it is the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you, dear readers, would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, as well as a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (i.e. one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most of the reactions we are familiar with tend to be oxidation reactions: oxygen from the air bonding chemically with the substance in question (although there are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode). The emission of heat causes a relatively gentle warming effect in the immediate area, but close to the site of the reaction itself a very large amount of heat is emitted in a small space. This excites the molecules of air close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so in the form of heat and light (among other things). These photons cause the air to glow brightly, creating the visible flame we can see; the large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame has a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electrostatic experiments, and flamethrowers can be made to fire lightning. Most often the glowing flame results in the characteristic reddish-orange colour of fire, but some reactions, such as the potassium one mentioned, cause the flame to emit radiation at other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), giving flames of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly until it gets far enough away from the source of heat to cool down and stop glowing; this process happens all the time with hundreds of packets of hot air, causing the flame to flicker back and forth.

However, we must remember that fires do not just give out heat, but must take some in too. This is to do with the way the chemical reaction generating the heat works; the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to react up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 degrees Fahrenheit, or around 233 degrees centigrade (which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is only a little higher at around 300 degrees centigrade, both of which are comfortably below the temperature of a spark or flame. However, we must remember that neither fuel will ignite if it is wet: water is not a fuel that can be burnt, and it has to be boiled away first, soaking up heat that would otherwise go towards reaching the ignition temperature. This means it often takes a while to dry wood out sufficiently for it to catch, and that big, solid blocks of wood take quite a bit of energy to heat up.
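As a quick sanity check on those mixed units, here’s a scrap of Python doing the Fahrenheit-to-centigrade conversion; the paper and wood figures are simply the ones quoted above, and the flame temperature is a rough illustrative assumption rather than a measured value:

```python
# Convert the quoted ignition temperatures to a common unit and compare them
# with a rough, assumed flame temperature (illustrative numbers only).
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

paper_ignition_c = fahrenheit_to_celsius(451)  # roughly 233 degrees C
wood_ignition_c = 300                          # as quoted in the text
small_flame_c = 600                            # assumed ballpark for a match flame

print(f"Paper ignites at about {paper_ignition_c:.0f} C, wood at about {wood_ignition_c} C;")
print(f"a small flame at around {small_flame_c} C is comfortably hotter than both.")
```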

From all of this information we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff it reacts with, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the remaining ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since doing so will cut off its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Heat, or rather the removal of it, is also the main principle behind our classical methods of putting a fire out; dousing it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are made of hot air remember, to disappear, but the embers to glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself once, with a few friends, laid a fire that burned for eleven hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on it and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.