F=ma

On Christmas Day 1642, a baby boy was born to a well-off Lincolnshire family in Woolsthorpe Manor. His childhood was somewhat chaotic; his father had died before he was born, and his mother remarried (to a stepfather he came to acutely dislike) when he was three. He later ran away from school, discovered he hated the farming alternative, and returned to become the school’s top pupil. He went on to attend Trinity College, Cambridge; oh, and to become arguably the greatest scientist and mathematician of all time. His name was Isaac Newton.

Newton started off in a small way, developing the binomial theorem: a technique for expanding powers of binomials that is fundamental to pretty much everything in modern science and mathematics; the advanced mathematical equivalent of knowing that 2 x 4 = 8. Oh, and did I mention that he was still a student at this point? Taking a break from his Cambridge career for a couple of years due to the minor inconvenience of the Great Plague, he whiled away the hours inventing calculus, which he finalised upon his return to Cambridge. Calculus is the collective name for differentiating and integrating, which let us work out algebraically the rate at which something is occurring, the gradient of a graph and the area under it, plus reverse all of those processes. This makes it sound like rather a neat gimmick, but that belies the fact that it allows us to mathematically describe everything from water flowing through a pipe to how aeroplanes fly (the Euler equations mentioned in my aerodynamics posts come from advanced calculus), and this discovery alone would have been enough to warrant Newton’s place in the history books. OK, and Leibniz’s, who discovered pretty much the same thing at roughly the same time, but he got there later than Newton. So there.
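For the non-mathematicians, here’s a minimal Python sketch (my own illustrative numbers, nothing of Newton’s) of what these two ideas actually let you do: expand a power of a binomial, and find a gradient and an area numerically:

```python
from math import comb

# Binomial theorem: (x + y)^n = sum over k of C(n, k) * x^(n-k) * y^k.
# The coefficients for (x + y)^4:
print([comb(4, k) for k in range(5)])  # [1, 4, 6, 4, 1]

# Calculus, approximated numerically for f(x) = x^2:
f = lambda x: x ** 2
h = 1e-6

# Differentiation: the gradient of the graph at x = 3 (exact answer: 6)
gradient = (f(3 + h) - f(3)) / h
print(round(gradient, 3))  # ~6.0

# Integration: the area under the curve from 0 to 1 (exact answer: 1/3)
n = 100_000
area = sum(f(k / n) for k in range(n)) / n
print(round(area, 4))  # ~0.3333
```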

However, discovering the most important mathematical tool available to modern scientists and engineers was clearly not enough to occupy Newton’s prodigious mind during his downtime, so he also turned his attention to optics, aka the behaviour of light. He began by discovering that white light was composed of all colours, revolutionising contemporary scientific understanding of light itself by suggesting that coloured objects did not create their own colour, but reflected only certain portions of already coloured light. He combined this with his study of refraction: the way light shone through glass or another transparent material at an angle will bend. This then led him to explain how telescopes worked, why the existing designs (based around refracting light through a lens) were flawed, and to design an entirely new type of telescope (the reflecting telescope) that is used in all modern astronomical equipment, allowing us to study, look at and map the universe like never before. Oh, and he also took the time to theorise the existence of photons (he called them corpuscles), which wouldn’t be confirmed for another 250 years.
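That bending is described by what we now call Snell’s law, n1 x sin(theta1) = n2 x sin(theta2); a quick sketch, using typical textbook refractive indices rather than anything Newton measured:

```python
from math import asin, degrees, radians, sin

def refraction_angle(theta1_deg, n1=1.0, n2=1.5):
    """Angle of the ray inside the second medium, via Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    return degrees(asin(n1 * sin(radians(theta1_deg)) / n2))

# Light hitting glass (n ~ 1.5) from air (n ~ 1.0) at 45 degrees
# bends towards the normal, to about 28 degrees:
print(round(refraction_angle(45), 1))  # ~28.1
```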

When that got boring, Newton turned his attention to a subject that he had first fiddled around with during his calculus days: gravity. Nowadays gravity is a concept taught to every schoolchild, but in Newton’s day the question of why objects fall to earth was barely even considered. Aristotle’s theories dictated that every object ‘wanted’ to be in a state of stillness on the ground unless disturbed, and Newton was the first person in nearly two millennia to make a serious challenge to that theory (whether an apple tree was involved in his discovery is heavily disputed). Not only did he and colleague Robert Hooke define the force of gravity, but they also discovered the inverse-square law for its behaviour (aka if you multiply your distance from a planet by 2, the gravitational force on you drops by a factor of 2 squared, or 4) and turned it into an equation: F = -GMm/r^2. This single equation would explain Kepler’s work on celestial mechanics, accurately predict the orbits of the ****ing planets (predictions based, just to remind you, on the thoughts of one bloke on earth with little technology more advanced than a pen and paper) and form the basis of his subsequent book: “Philosophiæ Naturalis Principia Mathematica”.
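That factor-of-four claim is easy to check for yourself; a minimal Python sketch using standard textbook values for G and the Earth (the 70 kg ‘person’ is my own made-up example):

```python
G = 6.674e-11  # gravitational constant, N m^2 / kg^2
M = 5.972e24   # mass of the Earth, kg
m = 70.0       # a hypothetical 70 kg person
r = 6.371e6    # radius of the Earth, m

def gravity(r):
    """Magnitude of Newton's gravitational force, F = G*M*m / r^2."""
    return G * M * m / r ** 2

print(round(gravity(r), 1))      # ~687 N -- i.e. your weight at the surface
print(round(gravity(2 * r), 1))  # ~172 N -- double the distance, a quarter the force
print(round(gravity(r) / gravity(2 * r), 1))  # 4.0
```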

Principia, as it is commonly known, is probably the single most important piece of scientific writing ever produced. Not only does it set down all Newton’s gravitational theories and explore their consequences (in minute detail; the book in its original Latin is bigger than a pair of good-sized bricks), but it also defines the concepts of mass, momentum and force properly for the first time; indeed, his definitions survive to this day and have yet to be improved upon. He also set down his three laws of motion: an object’s velocity is constant unless a force acts upon it; the acceleration of an object is proportional to the force acting on it and inversely proportional to its mass (summarised in the title of this post); and action and reaction are equal and opposite. These three laws not only tore two thousand years of scientific theory to shreds, but nowadays underlie everything we understand about the mechanics of objects; indeed, no flaw was found in Newton’s equations until relativity came along over two centuries later, and even that only really matters for objects travelling at around 100,000 kilometres per second or faster; not something Newton was ever likely to come across.
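Since the second law gives this post its title, here’s a toy Python sketch of it in action (all numbers invented for illustration): give a 2 kg mass a constant 10 N push and it accelerates at a = F/m = 5 m/s².

```python
force = 10.0  # N, a constant push (made-up figure)
mass = 2.0    # kg (made-up figure)
dt = 0.01     # time step, s

velocity, position = 0.0, 0.0
for _ in range(100):             # simulate one second
    acceleration = force / mass  # F = ma, rearranged to a = F/m
    velocity += acceleration * dt
    position += velocity * dt

print(round(velocity, 2))  # 5.0 m/s after 1 s, i.e. a = 5 m/s^2
print(round(position, 2))  # ~2.52 m; the exact s = a*t^2/2 is 2.5 m,
                           # the small excess being discretisation error
```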

Isaac Newton’s life outside science was no less successful; he was something of an amateur alchemist, and when he was appointed Master of the Royal Mint (a post he held for 30 years until his death; there is speculation that his alchemical meddling may have resulted in mercury poisoning) he used those skills to great effect in assessing coinage, in an effort to fight Britain’s massive forgery problem. He was successful in this endeavour and later became the first man to put Britain onto the gold, rather than silver, standard, reflecting his knowledge of the superior chemical qualities of the former metal (see another previous post). He is still considered by many to be the greatest genius who ever lived, and I can see where those people are coming from.

However, the reason I find Newton especially interesting concerns his private life. He was a notoriously hard man to get along with; he never married, almost certainly died a virgin and is reported to have only laughed once in his life (when somebody asked him what was the point in studying Euclid; the joke is somewhat highbrow, I’ll admit). His was a lonely, largely friendless existence, and he lived basically for his work (he has been posthumously diagnosed with everything from bipolar disorder to Asperger’s syndrome). In an age when we are used to such charismatic scientists as Richard Feynman and Stephen Hawking, Newton’s cut-off, isolated existence with only his prodigious intellect for company seems especially alien. That the approach was effective is most certainly not in doubt; any one of his scientific discoveries would alone have been enough to place him in science’s hall of fame, and to have made all of them puts him head and shoulders above his contemporaries. In many ways, Newton’s story is one of the price of success. Was Isaac Newton a successful man? Undoubtedly, in almost every field he turned his hand to. Was he a happy man? We don’t know, but it would appear not. Given the choice between success and happiness, where would you fall?

The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was ‘what is our world made of?’. People reasoned that everything had to be made of something: all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some ‘stuff’ or other. Some reasoned that everything was made up of the most common things present in our earth, the classical ‘elements’ of earth, air, fire and water, but throughout the latter stages of the last millennium the burgeoning science of chemistry began to debunk this idea. People sought a new theory of what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions: why chemicals that reacted together did so in fixed ratios, for example. For a solution, they returned to an idea almost as old as science itself: that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these ‘joinings’ was what was responsible for chemical reactions and their behaviour. The earliest scientists to theorise the existence of these things called them corpuscles; nowadays we call them atoms.

By the turn of the twentieth century, thanks to two hundred years of chemists using atoms to conveniently explain their observations, it was common knowledge among the scientific community that the atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, which carried a small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance things out; Thomson postulated that the negative electrons ‘floated’ within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford put the plum pudding model to the test; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that while most passed straight through, some were deflected at wild angles. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when an alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but most of the time it would miss this positive charge completely; most of the atom was empty space. So, Rutherford reasoned, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.
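As an aside, Rutherford’s experiment even lets you estimate how small the nucleus must be: an alpha particle gets closest at the point where all its kinetic energy has been converted into electrostatic repulsion. A back-of-the-envelope sketch (the 5 MeV alpha energy is a typical textbook figure, not Rutherford’s exact number):

```python
k_e2 = 1.44      # Coulomb constant times e^2, in eV * nm (a handy shortcut)
z_alpha = 2      # charge number of an alpha particle
z_gold = 79      # charge number of a gold nucleus
energy_eV = 5e6  # kinetic energy of the alpha particle, ~5 MeV

# Closest approach: kinetic energy = k * (z1*e) * (z2*e) / r, solved for r
r_nm = k_e2 * z_alpha * z_gold / energy_eV
print(f"{r_nm * 1e6:.0f} femtometres")  # ~46 fm -- thousands of times
                                        # smaller than the atom itself
```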

This made sense in theory, but the maths didn’t check out; classical physics predicted that orbiting electrons should radiate away their energy, spiral into the nucleus and smash the whole of creation to pieces. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to be completed; these energy levels (or ‘shells’) were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr’s explanation drew heavily on Max Planck’s recent quantum theory, which modelled light as coming in discrete packets of energy, and this suggested that electrons were also quantum particles; this ran contrary to people’s previous understanding of them, since they had been presumed to be solid ‘blobs’ of matter. This was but one step along the road to the principle that defines quantum theory: nothing is actually real, everything is quantum, so don’t even try to imagine how it all works.
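For the curious, Bohr’s energy levels for hydrogen follow the remarkably simple formula E_n = -13.6 eV / n^2, and the colour of light emitted when an electron drops between shells falls straight out of it; a minimal sketch:

```python
def bohr_energy(n):
    """Energy of hydrogen's nth level, in electron-volts."""
    return -13.6 / n ** 2

# An electron dropping from level 3 to level 2 releases a photon:
delta_E = bohr_energy(3) - bohr_energy(2)  # ~1.89 eV
wavelength_nm = 1239.84 / delta_E          # lambda = hc / E, hc ~ 1239.84 eV nm
print(round(wavelength_nm))  # ~656 nm -- the red line of the hydrogen spectrum
```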

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance’s mass, given how little the electrons weighed?

Subsequent experiments have revealed the atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly a millimetre in diameter, but if an atom were the size of St. Paul’s Cathedral then its nucleus would be the size of a full stop. The sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus’ structure; it took Ernest Rutherford (the guy who had disproved the plum pudding model) to take the first step along this road when, in 1918, he confirmed that the nucleus of a hydrogen atom comprised just one component (or ‘nucleon’, as we collectively call them today). Since this component had a positive charge, to cancel out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that the positive charge in larger atomic nuclei was caused by more protons stuck together. However, multiple positive charges all in one place would normally repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of glue to hold the nucleus together. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same mass, collectively constitute around 99.95% of any given atom’s mass, and neutrons turn up in every nucleus bar ordinary hydrogen’s. However, even these weren’t quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; I’ll pick up that story next time.
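Those analogies sound fanciful, but the arithmetic roughly holds up; here’s a quick sanity check (every figure below is a rough assumption of mine, not a precise measurement):

```python
atom = 1e-10      # typical atomic diameter, m
nucleus = 1e-15   # typical nuclear diameter, m (a light nucleus)
hand = 0.1        # a hand span, m (rough guess)
earth = 1.27e7    # diameter of the Earth, m
st_pauls = 111.0  # height of St Paul's Cathedral, m

# Scale a hand up to the size of the Earth; how big does an atom get?
print(round(atom * (earth / hand), 4))  # ~0.0127 m: a centimetre or so,
                                        # the right order of magnitude

# Scale an atom up to the size of St Paul's; how big is its nucleus?
print(round(nucleus * (st_pauls / atom), 4))  # ~0.0011 m: full-stop territory
```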

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you, dear readers, would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, along with a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (ie one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most of those we are familiar with tend to be oxidation reactions: oxygen from the air bonding chemically with the substance in question. (There are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode.)

The emission of heat causes a relatively gentle warming effect in the surrounding area, but close to the site of the reaction itself a very large amount of heat is emitted into a small space. This excites the molecules of air close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so, in the form of heat and light (among other things). These photons cause the air to glow brightly, creating the visible flame we see. The large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame carries a slight charge and is more conductive than the surrounding air; because of this, flame probes are sometimes used to get rid of excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning.

Most often the glowing flame has the characteristic red/orange colour of fire, but some reactions, such as the potassium one mentioned above, emit radiation at other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), giving flames of different colours; a white-hot area of a fire, meanwhile, is so hot that the molecules don’t care what frequency the photons they’re emitting are at, so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly until it gets far enough away from the source of heat to cool down and stop glowing; this happens constantly to hundreds of packets of hot air, causing the flames to flicker back and forth.
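The temperature-colour link can be made concrete with Wien’s displacement law, lambda_peak = b/T, if we make the (substantial) simplifying assumption that a glowing fire radiates roughly like an ideal blackbody; the temperatures below are my own ballpark figures:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m * K

def peak_wavelength_nm(temperature_K):
    """Peak emission wavelength of an ideal blackbody, in nanometres."""
    return WIEN_B / temperature_K * 1e9

for label, T in [("glowing ember", 1000), ("candle flame", 1900), ("white-hot", 6000)]:
    print(f"{label} ({T} K): peak at ~{peak_wavelength_nm(T):.0f} nm")

# ember: ~2898 nm (infrared; we only see the red tail of its spectrum)
# candle: ~1525 nm (still infrared-peaked, with a visible orange glow)
# white-hot: ~483 nm (peak in the visible, so all colours mix to white)
```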

However, we must remember that fires do not just give out heat; they must take some in too. This is down to the way the heat-generating chemical reaction works: the bonds between atoms must first be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to burn up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit (about 233 Celsius; which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is only a little higher at around 300 degrees centigrade, both comfortably below the temperature of a spark or flame. However, neither fuel will ignite if it is wet, as water is not a fuel that can be burnt, meaning it often takes a while to dry wood out sufficiently for it to catch; and big, solid blocks of wood take quite a bit of energy to heat up.
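The grip that activation energy has on all this is captured by the Arrhenius equation, k = A x exp(-Ea/RT), which says reaction rate climbs exponentially with temperature; a sketch with an invented activation energy (the 120 kJ/mol is purely illustrative, not a measured value for wood or paper):

```python
from math import exp

R = 8.314   # gas constant, J / (mol K)
Ea = 120e3  # activation energy, J/mol -- an invented, illustrative figure

def relative_rate(T):
    """Arrhenius factor exp(-Ea / RT); the pre-factor A cancels in ratios."""
    return exp(-Ea / (R * T))

room, flame = 293, 800  # K: room temperature vs a small flame
print(f"{relative_rate(flame) / relative_rate(room):.1e}")
# ~3.6e13 -- the same reaction runs tens of trillions of times faster in
# the flame, which is why wood sits inert for years yet catches in seconds
```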

From all of this information we can extrapolate the first rule that everybody learns about firelighting: in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff it reacts with, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the other ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since it will snuff out the fire’s access to the air and put it out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short-lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Heat, or rather the removal of it, is also the principle behind all of our classical methods of putting a fire out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire provides it with more oxygen, it also blows away the warm air close to the fire and replaces it with cold, causing small flames like candles to be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are made of glowing hot air remember, to disappear while the embers glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and make it burn faster, as was ably demonstrated during the Great Fire of London. I myself have once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

The Age of Reason

Science is a wonderful thing- particularly in the modern age, where the more adventurous (or more willing to tempt fate, depending on your point of view) like to think that most of science is actually pretty well done and dusted. I mean, yes, there are a lot of little details we have yet to work out, but the big stuff, the major hows and whys, has been basically sorted. We know why there are rainbows, why quantum tunnelling composite appears to defy basic logic, and even why you always seem to pick the slowest queue- science appears to have it pretty much covered.

[I feel I must take this opportunity to point out one of my favourite stories about the world of science- at the start of the 20th century, there was a prevailing attitude among physicists that physics was going to last, as an advanced science, for only about another 20 years. They basically presumed that they had worked almost everything out, and now all they had to do was tie up the loose ends. However, one particular loose end, the photoelectric effect, simply refused to budge under their classical scientific laws. The solution came from Max Planck’s quantum hypothesis, which Einstein applied to the problem- by modelling light (which everyone knew was a wave) as a particle instead, he opened the door to the modern age of quantum theory. Physics as a whole took one look at all the new questions this posed and, as one, performed a collective facepalm.]

In any case, we are now at such an advanced stage of the scientific revolution that there appears to be nothing, in everyday life at least, that we cannot at least partly explain. We might not know, for example, exactly how the brain is wired up, but we still have enough of an understanding to make a pretty accurate guess as to which part of it isn’t working properly when somebody comes in with brain damage. We don’t get exactly why or how photons appear to defy the laws of logic, but we can explain enough of it to tell you why a lens focuses light onto a point. You get the idea.

Any scientist worth his salt will scoff at this- a chemist will bang on about the fact that nanotubes were only developed a decade ago and will revolutionise the world within another, a biologist will tell you about the myriad species we know next to nothing about and the myriad more we haven’t discovered yet, and a theoretical physicist will start quoting logical impossibilities and make you feel like a complete fool. But this is all, really, rather high-level science- the day-to-day stuff is all pretty much done. Right?

Well… it’s tempting to think so. But in reality the scientists are pretty much correct- Newton’s great ocean of truth remains very much a wild and unexplored place, and not just in the nerdy corners that nobody without 3 separate doctorates can understand. There are some things that everybody, from the lowliest man in the street to the cleverest scientist, can comprehend completely and yet not understand in the slightest.

Take, for instance, the case of Sugar the cat. Sugar was a part-Persian with a hip deformity who often got uncomfortable in cars. As such, when her family moved house, they opted to leave her with a neighbour. After a couple of weeks, Sugar disappeared, before reappearing 14 months later… at her family’s new house. What makes this story even more remarkable? The fact that Sugar’s owners had moved from California to Oklahoma, and that a cat with a severe hip problem had trekked 1500 miles, over 100 a month, to a place she had never even seen. How did she manage it? Nobody has a sodding clue.

This isn’t the only story of long-distance cat return, although Sugar holds the distance record. But an ability to navigate that a lot of sat navs would be jealous of isn’t the only surprising oddity in the world of nature. Take leopards, for example. The most common, and yet hardest to find and possibly deadliest, of ‘The Big Five’; everyone knows they are born killers. Humans, by contrast, are in many respects born prey- we are slow over short distances, have no horns, claws, long teeth or other natural defences, are fairly poor at hiding, and don’t even live in herds for safety in numbers. Especially vulnerable are, of course, babies and young children, who by animal standards take an enormously long time even to stand upright, let alone mature. So why exactly, in 1938, were a leopard and her cubs found with a near-blind human child whom she had carried off as a baby five years earlier? Even more remarkable was the child’s superlative sense of smell- he could differentiate between different people and even objects with nothing more than a good sniff- which also reminds me of a video I saw a while ago of a blind Scottish boy who can tell what material something is made of and how far away it is (well enough to play basketball) simply by making a clicking sound with his mouth.

I’m not really sure what I’m trying to say in this post- I have a sneaking suspicion my subconscious simply wanted an excuse to share some of the weirdest stories I have yet to see on Cracked.com. So, to round off, I’ll leave you with a final one. In 1984 a hole was found on a farm in Washington State, about 3 metres by 2 and around 60cm deep. 25 metres away, the three tons of grass-covered earth that had previously filled the hole was found- completely intact, in a single block. One person described it as looking like it had been cut away with ‘a gigantic cookie cutter’, but this failed to explain why all of the roots hanging off it were intact. There were no tracks or any other distinguishing features apart from a dribble of earth leading between hole and divot, and the closest thing anyone had to an explanation was to lamely point out that there had been a minor earthquake 20 miles away a week beforehand.

When I invent a time machine, forget killing Hitler- the first thing I’m doing is going back to find out what the &*^% happened with that hole.