Shining Curtains

When the Vikings swept across Europe in the 8th and 9th centuries, they brought with them many stories: stories of their Gods, of the birth of the world, of Asgard, of Valhalla, of Jörmungandr the world-serpent, of Loki the trickster, of Odin the All-Father and of Ragnarok, the end of this world and the beginning of the next. However, the reason I mention the Vikings today is in reference to one particular set of stories they brought with them: of shining curtains of brilliant, heavenly fire dancing across the northern sky as the Gods fought with one another. Such lights were not common in Europe, but they were certainly known, and throughout history they provoked terror at the anger of the various Gods that was so clearly being displayed across the heavens. Now we know these shining curtains as the aurora borealis; Aurora was the Roman goddess of the dawn, whilst Boreas was the Greek god of the north wind (so named because the aurora was only observed in the far north; a similar feature, the aurora australis, is seen near the south pole). The name was coined in 1621.

Nowadays, we know that the auroras are an electromagnetic effect, something demonstrated quite spectacularly in 1859. On the 28th of August and the 2nd of September that year, spectacular auroras erupted across much of the northern hemisphere, reaching their peak at around one o’clock in the morning EST, and as far south as Boston the light was bright enough to read by. However, the feature I am interested in here concerns the American Telegraph Line, stretching almost due north between Boston, Massachusetts, and Portland, Maine. Because of the great length and orientation of this line, the geomagnetic disturbance accompanying the aurora was sufficient to induce a current in the telegraph wire, to the extent that operators at both ends of the line agreed to switch off their batteries (which were only interfering) and operate solely on aurora power for around two hours. Aside from a gentle fluctuation of current, no problems were reported with this system.
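
To get a feel for the numbers involved, here is a very rough sketch of the sort of voltage such a storm could drive along that line. The line length is approximate and the geoelectric field value is an assumption taken from the range commonly quoted for severe geomagnetic storms, so treat it as an order-of-magnitude illustration rather than a reconstruction of the actual event.

```python
# Rough, order-of-magnitude estimate of the 'aurora power' available on the line.
# Both numbers below are assumptions for illustration, not measured 1859 values.
line_length_km = 160.0            # Boston to Portland is roughly 160 km as the wire runs
geoelectric_field_v_per_km = 2.0  # severe storms are often quoted at roughly 1-5 V per km

induced_emf = geoelectric_field_v_per_km * line_length_km
print(f"EMF driven along the line: roughly {induced_emf:.0f} V")
# A couple of hundred volts along the wire is comparable to, or larger than, what the
# line's own batteries supplied -- consistent with the operators finding that the
# storm alone could work the line.
```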

We now know that the ultimate cause of the aurorae is our sun, and that two bursts of exceptional solar activity were responsible for the 1859 display. We all know the sun emits a great deal of energy from the nuclear fusion going on in its core, but it also emits a whole lot of other stuff, including a great deal of ionised (charged) gas, or plasma. This outflow of charged particles forms what is known as the solar wind, flowing out into space in all directions; it is this solar wind that generates the tail on comets, and is why such a tail always points directly away from the sun. However, things get interesting when the solar wind hits a planet such as Earth, which has a magnetic field surrounding it. Earth’s magnetic field looks remarkably similar to that of a large, three-dimensional bar magnet, and when a large mass of charged particles passes through this field it is subject to something known as the motor effect. As every GCSE physics student knows, it is this effect that allows us to generate motion from electricity, and the same thing happens here: the large mass of moving charge acts as a current, and this cuts across the earth’s magnetic field. This generates a force (this is basically what the motor effect is), and that force pushes the solar wind sideways. However, as the charge moves, so does the direction of the ‘current’, and thus the direction of the force changes too; the upshot is that the charge ends up spinning around the earth’s magnetic field lines, spiralling along them as it goes. Following these field lines, the charge ends up spiralling towards the poles of the earth, at which point the field lines bend and start going into the earth itself. As the plasma follows these lines, therefore, it comes down towards the Earth’s atmosphere, passing first through one region in particular: the magnetosphere.
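
For anyone who wants to see this spiralling fall out of the maths, here is a minimal sketch in Python. The field strength and particle speed are just representative near-Earth values, and the field is treated as uniform (the real field bends, which is what funnels particles towards the poles), so this only illustrates the basic corkscrew motion.

```python
import numpy as np

# Illustrative values only: a solar-wind proton in a uniform 5 nT field along z.
q = 1.602e-19                       # proton charge (C)
m = 1.673e-27                       # proton mass (kg)
B = np.array([0.0, 0.0, 5e-9])      # magnetic field (T), pointing along z
v = np.array([4.0e5, 0.0, 1.0e5])   # ~400 km/s across the field, some motion along it
r = np.zeros(3)                     # start at the origin

# The radius and period of the spiral follow directly from F = qv x B
B_mag = np.linalg.norm(B)
v_perp = np.linalg.norm(v[:2])
gyro_radius = m * v_perp / (q * B_mag)
gyro_period = 2 * np.pi * m / (q * B_mag)
print(f"gyroradius ~ {gyro_radius/1e3:.0f} km, one loop takes ~ {gyro_period:.1f} s")

# Crude numerical integration: the particle corkscrews around the z-axis (the
# field line) while drifting steadily along it.
dt = gyro_period / 1000
for _ in range(3000):
    a = q * np.cross(v, B) / m      # acceleration from the magnetic force
    v = v + a * dt
    r = r + v * dt
```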

The magnetosphere is the region around Earth in which our planet’s magnetic field dominates, sitting above the ionosphere, the electrically charged upper layer of our atmosphere. Here, the magnetic fields carried by the incoming plasma and the Earth’s own field combine in a rather complicated process known as magnetic reconnection, the importance of which will be discussed later. Now, let us consider the contents of the plasma: all these charged particles, and in particular the high-energy electrons, are now bumping into atoms of air in the ionosphere. This bumping transfers energy to the atoms, which they deal with by having electrons within them jump up energy levels and enter an excited state. After a short while, the atoms ‘cool down’ as those electrons drop back down the energy levels, releasing packets of electromagnetic energy as they do so. We observe this release of EM radiation as visible light, and hey presto! we can see the aurorae. What colour the aurora ends up being depends on which atoms the incoming particles are interacting with: oxygen is more common higher up and generates green or red aurorae depending on the height, so these are the most common colours. If the solar wind is able to get further down in the atmosphere, it can interact with nitrogen and produce blue and purple aurorae.
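
As a quick aside on those colours, the photons involved correspond to well-known emission lines (green and red for atomic oxygen, blue/violet for nitrogen), and a couple of lines of Python show how much energy each released packet carries. The wavelengths are the standard quoted values; the altitude notes are approximate.

```python
# Back-of-the-envelope photon energies for the auroral colours mentioned above.
h = 6.626e-34   # Planck's constant (J s)
c = 2.998e8     # speed of light (m/s)
eV = 1.602e-19  # joules per electronvolt

lines = {
    "oxygen, green (roughly 100-250 km up)":   557.7e-9,
    "oxygen, red (above roughly 250 km)":      630.0e-9,
    "nitrogen, blue/violet (lower down)":      427.8e-9,
}

for name, wavelength in lines.items():
    energy_eV = h * c / wavelength / eV
    print(f"{name}: {wavelength*1e9:.1f} nm -> photon energy ~ {energy_eV:.2f} eV")
# Each photon is an atom dropping back down an energy level after being knocked
# into an excited state by an incoming electron.
```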

The shape of the aurorae can be put down to that same business of spiralling around field lines; as the field lines bend in towards the earth’s poles, the incoming particles end up describing roughly circular paths around the north and south poles. However, plasma does not conduct electricity very well across magnetic field lines, so under normal circumstances we would not expect the aurora to be very bright. The reason this is not the case, and that aurorae are as visible and beautiful as they are, can be put down to the process of magnetic reconnection mentioned earlier, which makes the plasma more conductive and allows these charged particles to flow more easily around their circular path. This circular path around the poles causes the aurora to follow approximately east-west lines into the far distance, and thus we get the effect of ‘curtains’ of light following (roughly) this east-west pattern. The flickery, wavy nature of these aurorae is, I presume, due to fluctuations in the solar wind and/or actual winds in the upper atmosphere. The end result? Possibly the most beautiful show Earth has to offer us. I love science.


The Alternative Oven

During the Second World War, the RAF pioneered the use of radar to detect incoming Luftwaffe raids. One of the key pieces of equipment used in the construction of these radars was called a magnetron, which uses a magnetic field to propel high-speed electrons and generate the kind of high-powered radio waves needed for such a technology to work over long distances. The British government shared the technology with its American allies during the war, and Raytheon, a private American enterprise, was among the firms licensed to produce magnetrons. Whilst experimenting with such a radar set in 1945, a Raytheon engineer called Percy Spencer reached for the chocolate bar in his pocket and discovered it had melted. He later realised that the electromagnetic radiation generated by the radar set had been the cause of this heating effect, and thought that such technology could be put to a different, non-military use; and so the microwave oven was born.

Since then, the microwave has become the epitome of western capitalism’s golden age: the near-ubiquitous kitchen gadget, usually in the traditional white plastic casing, designed to make certain specific aspects of a process already technically performed by another appliance (the oven) that bit faster and more convenient. As such, it has garnered its fair share of hate over the years, shunned by serious foodies as a taste-ruining harbinger of doom to one’s gastric juices that wouldn’t be seen dead in any serious kitchen. The simplicity of the microwaving process (especially given that there is frequently no need for a pot or container) has also led to the rise of microwavable meals, designed to take the concept of ultra-simple cooking to its extreme by creating an entire meal from a few minutes in the microwave. However, as everyone who’s ever attempted a bit of home cooking will know, such a process does not naturally occur quite so easily, and thus these ready meals generally require large quantities of what is technically known as ‘crap’ in order to function as meals. This low-quality food has become distinctly associated with the microwave itself, further enhancing its image as a tool for the lazy and the kind of societal dregs that the media like to portray in scare statistics.

In fairness, this is hardly the device’s fault, and it is a pretty awesome one. Microwave ovens work thanks to the polarity of water molecules: each consists of a positively charged end (where the hydrogen part of H2O is) and a negatively charged end (where the electron-rich oxygen bit is). Electromagnetic waves, such as the microwaves after which the oven takes its name, carry an electric field that (being as they are, y’know, waves) oscillates (aka ‘wobbles’) back and forth. This wobbling field causes the water molecules to oscillate too (technically it works with other polarised molecules as well, but there are very few other liquids made of polarised molecules that one encounters in cookery; this is why microwaves can heat up stuff without water in it, just not very well). This oscillation means the molecules gain kinetic energy from the microwave radiation; it just so happens that the frequency of the radiation is chosen so that it closely matches the resonant frequency of the water molecules’ oscillation, meaning this energy transfer is very efficient*. A microwave works out as a bit over 60% efficient (most of the energy being lost in the aforementioned magnetron used to generate the microwaves), which is exceptional compared to a conventional oven’s level of around 10%. Which is the better choice really depends on the meal and how the oven is being used, but for small meals or for reheating cold (although not frozen, since ice molecules aren’t free to vibrate as much as liquid water) food the microwave definitely wins. It helps even more that microwaves are really bad at penetrating the metal walls and the metal mesh in the door of the oven, meaning they tend to bounce around until they hit the food, and very little of the energy gets lost to the surroundings once it’s been emitted. However, if nothing is placed in the microwave then these waves are not ‘used up’ in heating food and tend to end up back in the emitter, causing it to burn out and doing the device some serious damage.

*I have heard it said that this is in fact a myth, and that the frequency used (about 2.45 GHz) actually sits some way off water’s resonant frequencies; the reasoning I have seen is that radiation bang on resonance would be absorbed almost entirely at the food’s surface and never penetrate, so a slightly ‘detuned’ frequency heats more evenly. I can’t really cite my sources on this one, so take your pick.
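
To put those efficiency figures into some kind of context, here is a small back-of-the-envelope calculation for heating a mug of water. The power rating and temperatures are assumptions chosen for illustration; the 60% figure is the one quoted above.

```python
# Rough sanity check of the efficiency figures above: heating a mug of water.
mass = 0.3            # kg of water (about a mugful) -- assumed
c_water = 4186        # specific heat capacity of water (J per kg per K)
delta_T = 80          # heat from 10 C to 90 C -- assumed
input_power = 800     # watts drawn from the wall -- assumed typical rating
efficiency = 0.6      # fraction of that power ending up in the food, as above

energy_needed = mass * c_water * delta_T              # joules into the water
time_seconds = energy_needed / (input_power * efficiency)
print(f"{energy_needed/1000:.0f} kJ needed -> about {time_seconds/60:.1f} minutes")
# Roughly 100 kJ and three and a half minutes -- about what experience suggests,
# which is reassuring.
```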

This use of microwave radiation to heat food has some rather interesting side-effects; up first is the oft-cited myth that microwaves cook food ‘from the inside out’. This isn’t actually true, for although the inside of a piece of food may be slightly more insulated than the outside, the microwaves should transfer energy to all of the food at a roughly equal rate; if anything the outside will get more heating, since it is hit first by the microwaves. The effect is really down to the chemical makeup of a lot of the food we put in a microwave, which generally has the majority of its water content beneath the surface; this leaves the surface, with little water in it to heat up, relatively cool and crusty, and the inside scaldingly hot. The use of high-power microwaves also means that just about everyone in the country has in their home a death ray capable of quite literally boiling someone’s brain if the rays were directed towards them (hence why dismantling a microwave is semi-illegal, as I understand it), but it also means that everyone has ample opportunity to do some seriously cool stuff with theirs, so long as they don’t intend to use it again afterwards and have access to a fire extinguisher. Note that whilst this is dangerous, rather stupid and liable to get you into some quite weird stuff, nothing is a more sure-fire indicator of a scientific mind than the instinct to ask ‘what happens when…’ whilst looking at the powerful EM radiation emitter sitting in your kitchen. For the record, I did not say that this was a good idea…

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, it is the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you, dear readers, would be faced with 1,000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters, science, as well as a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (i.e. one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire as we usually meet it forms in air, most of the ones we are familiar with are oxidation reactions: oxygen from the air bonding chemically with the substance in question. (There are exceptions: a lump of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame hot enough to melt it, fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode.) The emission of heat causes a relatively gentle warming effect across the immediate area, but close to the site of the reaction itself a very large amount of heat is released into a small space. This excites the molecules of air and combustion products close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so in the form of heat and light (among other things). These photons cause the gas to glow brightly, creating the visible flame we see; the large amount of thermal energy also ionises a lot of the atoms and molecules in the area of the flame, meaning that a flame carries a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of excess charge in sensitive electrostatics experiments, and flamethrowers can be made to fire lightning. Most often the glowing flame gives the characteristic reddy-orange colour of fire, but some reactions, such as the potassium one mentioned above, emit radiation at other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), giving flames of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly until it gets far enough away from the source of heat to cool down and stop glowing; this happens continuously to hundreds of little packets of hot air, causing the flame to flicker back and forth.
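
For the ‘hot stuff glows’ part of the story (the embers and white-hot regions, as opposed to chemistry-specific colours like potassium’s lilac), Wien’s displacement law gives a feel for why hotter means whiter. The temperatures below are rough, illustrative figures, not measurements.

```python
# Wien's displacement law: the wavelength at which a hot object's thermal glow
# peaks. This only covers thermal (blackbody-like) emission, not the colours that
# come from the spectral lines of specific elements.
WIEN_B = 2.898e-3   # Wien's displacement constant (m K)

for label, temp_K in [("dull red embers (~1,000 K, rough figure)", 1000),
                      ("typical flame (~1,500 K, rough figure)", 1500),
                      ("white-hot region (~3,000 K, rough figure)", 3000)]:
    peak_nm = WIEN_B / temp_K * 1e9
    print(f"{label}: peak emission ~ {peak_nm:.0f} nm")
# All of these peak in the infrared; we only see the short-wavelength tail of the
# glow, which is why hotter regions shift from dull red towards orange and white.
```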

However, we must remember that fires do not just give out heat, but must take some in too. This is down to the way the chemical reaction generating the heat works: the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to react up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit (which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is a little higher at around 300 degrees centigrade, both of which are less than the temperature of a spark or flame. However, neither fuel will ignite if it is wet: water is not a fuel that can be burnt, and any heat you supply goes into warming and evaporating the water rather than raising the fuel to its ignition temperature. This is why it often takes a while to dry wood out sufficiently for it to catch; remember also that big, solid blocks of wood take quite a bit of energy to heat up in the first place.
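
Since the two ignition figures above are quoted in different units, a quick conversion makes the comparison explicit (both are the rough, commonly quoted values used above, not precise constants).

```python
# Convert the famous 451 F figure for paper so it can be compared with wood's.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

paper_c = fahrenheit_to_celsius(451)   # the oft-quoted figure for paper
wood_c = 300                           # rough figure for wood, as above
print(f"paper: ~{paper_c:.0f} C, wood: ~{wood_c} C")
# 451 F is only about 233 C, so wood's ~300 C is indeed a little higher -- and
# both sit comfortably below the temperature of a match flame.
```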

From all of this information we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff that reacts with it, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the remaining ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since doing so will cut off its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing: matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Heat is also the key to our classical methods of putting a fire out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire provides it with more oxygen, it also blows away the warm air close to the flames and replaces it with cold, which is why small flames like candles can be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are made of hot air remember, to disappear, while the embers glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.