F=ma

On Christmas Day 1642, a baby boy was born to a well-off Lincolnshire family at Woolsthorpe Manor. His childhood was somewhat chaotic; his father had died before he was born, and his mother remarried (to a stepfather he came to acutely dislike) when he was three. He was later pulled out of school to try his hand at farming, discovered he hated it, and returned to become the school’s top pupil. He went on to attend Trinity College, Cambridge; oh, and to become arguably the greatest scientist and mathematician of all time. His name was Isaac Newton.

Newton started off in a small way, developing the generalised binomial theorem; a technique used to expand powers of binomials, which is the kind of fundamental tool used pretty much everywhere in modern science and mathematics; the advanced mathematical equivalent of knowing that 2 x 4 = 8. Oh, and did I mention that he was still a student at this point? Taking a break from his Cambridge career for a couple of years due to the minor inconvenience of the Great Plague, he whiled away the hours inventing calculus, which he finalised upon his return to Cambridge. Calculus is the collective name for differentiating and integrating, which allow one to find algebraically the rate at which something is occurring, the gradient of a graph and the area under it, and to reverse all of those processes. This makes it sound like rather a neat and useful gimmick, but that description belies the fact that it allows us to mathematically describe everything from water flowing through a pipe to how aeroplanes fly (the Euler equations mentioned in my aerodynamics posts come from advanced calculus), and its discovery alone would have been enough to warrant Newton’s place in the history books. OK, and Leibniz’s, who discovered pretty much the same thing independently at roughly the same time and published it first- but Newton had been quietly working on it for years by then. So there.
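For the curious, here is roughly what those two results look like in modern notation (nothing like Newton’s own presentation, and only the whole-number-power version of the binomial theorem; Newton’s generalisation extended it to fractional and negative powers):

```latex
(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{\,n-k}\, y^{\,k}
\qquad\text{and}\qquad
\frac{\mathrm{d}}{\mathrm{d}x}\, x^n = n\, x^{\,n-1}
```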

However, discovering the most important mathematical tool available to modern scientists and engineers was clearly not enough to occupy Newton’s prodigious mind during his downtime, so he also turned his attention to optics, aka the behaviour of light. He began by discovering that white light is composed of all the colours of the spectrum, revolutionising contemporary scientific understanding of light itself by suggesting that coloured objects did not create their own colour, but reflected only certain portions of already coloured light. He combined this with his work on refraction: light passing into glass or another transparent material at an angle will bend. This then led him to explain how telescopes worked, why the existing designs (based around refracting light through a lens) were flawed, and to design an entirely new type of telescope (the reflecting telescope) that underpins virtually all modern astronomical instruments, allowing us to study, look at and map the universe like never before. Oh, and he also took the time to theorise that light is made up of particles (he called them corpuscles), an idea that wouldn’t be vindicated, in the shape of the photon, for well over two centuries.

When that got boring, Newton turned his attention to a subject that he had first fiddled around with during his calculus time: gravity. Nowadays gravity is a concept taught to every schoolchild, but in Newton’s day the question of why objects fall to earth was barely even considered. Aristotle’s theories dictated that every object ‘wanted’ to be in a state of stillness on the ground unless disturbed, and Newton mounted the most serious challenge that theory had faced in nearly two millennia (whether an apple tree was involved in his discovery is heavily disputed). Not only did he and his contemporary Robert Hooke define the force of gravity, but they also worked out the inverse-square law for its behaviour (i.e. if you double your distance from a planet, the gravitational force on you drops by a factor of 2 squared, or 4) and turned it into an equation (F=-GMm/r^2, the minus sign simply indicating that the force is attractive). This single equation would explain Kepler’s work on celestial mechanics, accurately predict the orbits of the ****ing planets (predictions based, just to remind you, on the thoughts of one bloke on earth with little technology more advanced than a pen and paper) and form the basis of his subsequent book: “Philosophiæ Naturalis Principia Mathematica”.
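Written out properly (magnitudes only), the ‘double the distance, quarter the force’ arithmetic works out like this:

```latex
F(r) = \frac{G M m}{r^{2}},
\qquad
F(2r) = \frac{G M m}{(2r)^{2}} = \frac{G M m}{4 r^{2}} = \tfrac{1}{4}\, F(r)
```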

Principia, as it is commonly known, is probably the single most important piece of scientific writing ever produced. Not only does it set down all Newton’s gravitational theories and explore their consequences (in minute detail; the book in its original Latin is bigger than a pair of good-sized bricks), but in it he also defines the concepts of mass, momentum and force properly for the first time; indeed, his definitions survive, essentially unchanged, as the ones we still teach today. He also set down his three laws of motion: an object’s velocity is constant unless a force acts upon it; the acceleration of an object is proportional to the force acting on it and inversely proportional to its mass (summarised in the title of this post); and action and reaction are equal and opposite. These three laws not only tore two thousand years of scientific theory to shreds, but nowadays underlie everything we understand about the mechanics of objects; indeed, no flaw was found in Newton’s equations until relativity came along over two centuries later, and even that only really matters for objects travelling at an appreciable fraction of the speed of light (around 100,000 kilometres per second or more); not something Newton was ever likely to come across.
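In modern symbols (Newton, of course, stated them in Latin prose rather than equations), the three laws come out as something like:

```latex
\textbf{(1)}\;\; \mathbf{F}_{\mathrm{net}} = 0 \;\Rightarrow\; \mathbf{v} = \text{constant}
\qquad
\textbf{(2)}\;\; \mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = m\,\mathbf{a} \;\;(\text{constant } m)
\qquad
\textbf{(3)}\;\; \mathbf{F}_{AB} = -\,\mathbf{F}_{BA}
```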

Isaac Newton’s life outside science was no less successful; he was something of an amateur alchemist, and when he was appointed Master of the Royal Mint (a post he held for nearly 30 years until his death; there is speculation that his alchemical meddling may have resulted in mercury poisoning) he used those skills to great effect in assessing coinage, in an effort to fight Britain’s massive forgery problem. He was successful in this endeavour and later, in effect, moved Britain onto the gold, rather than silver, standard, reflecting his knowledge of gold’s superior chemical qualities (see another previous post). He is still considered by many to be the greatest genius who ever lived, and I can see where those people are coming from.

However, the reason I find Newton especially interesting concerns his private life. Newton was a notoriously hard man to get along with; he never married, almost certainly died a virgin and is reported to have laughed only once in his life (when somebody asked him what the point was in studying Euclid; the joke is somewhat highbrow, I’ll admit). His was a lonely, largely friendless existence, and he lived, basically, for his work (he has been posthumously diagnosed with everything from bipolar disorder to Asperger’s syndrome). In an age when we are used to such charismatic scientists as Richard Feynman and Stephen Hawking, Newton’s cut-off, isolated existence with only his prodigious intellect for company seems especially alien. That the approach was effective is most certainly not in doubt; any one of his scientific discoveries would alone have been enough to place him in science’s hall of fame, and to have made all of them puts him head and shoulders above nearly all of his peers. In many ways, Newton’s story is one of the price of success. Was Isaac Newton a successful man? Undoubtedly, in almost every field he turned his hand to. Was he a happy man? We don’t know, but it would appear not. Given the choice between success and happiness, where would you fall?

A Continued History

This post follows on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to carry the history on from there until as close to today as I can manage, hopefully getting in a few of the basics of how these strange and wonderful machines actually work.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, designed his own programmable, mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and added little new to the field. Mechanical calculators had become viable commercial products, getting steadily cheaper, and as technological exercises they were becoming ever more sophisticated with the invention of the analogue computer. These were, basically, less programmable relatives of the difference engine- mechanical devices whose various cogs and wheels were connected up so that they would perform one specific mathematical function on a set of data. In 1876 James Thomson built one of the first, a machine that could solve differential equations by integration (a conceptually simple but undoubtedly tedious mathematical task), and later developments were widely used for military calculations (aiming guns, for instance) and for solving problems too laborious to work through by hand. For a long time, analogue computers were considered the future of computing, but since they solved and modelled problems using physical phenomena rather than abstract data, each one was restricted to the particular job it had been built for.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Punched holes had been around for a while as a way of encoding programs of a sort- think of the holed-paper rolls of a pianola or the punched cards used to automate the workings of a loom- but this was the first example of such devices being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). The cards were cheap, simple, could be both produced and read easily by a machine, and were even easy to dispose of. The pattern of holes on a card could be ‘read’ by a mechanical device with a set of levers: a lever would drop through wherever a hole was present, turning the appropriate cogs to tell the machine to count up one. Hollerith’s machines went on to process the data of the 1890 US census, and the company he founded would eventually form the core of IBM. Punched cards carried on being used on IBM systems right up until the 1980s, and could be argued to constitute the first programming language.
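As a rough modern sketch of the counting idea (my own illustration- the card layout and column meanings here are invented, not Hollerith’s actual encoding), you can think of each card as a row of possible holes and the tabulator as a machine that counts up one every time a lever drops through a hole in the column it cares about:

```python
# A toy model of Hollerith-style tabulation: '1' marks a punched hole,
# '0' an unpunched position. Card layout and column meanings are invented.
cards = [
    "0100100",  # one respondent's answers
    "0000100",
    "0100000",
]

def tabulate(cards, column):
    """Count how many cards have a hole punched in the given column."""
    count = 0
    for card in cards:
        if card[column] == "1":  # the 'lever' drops through the hole
            count += 1           # ...and the counter ticks up one
    return count

print(tabulate(cards, column=1))  # -> 2
print(tabulate(cards, column=4))  # -> 2
```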

However, to see the story of the modern computer truly progress we must fast forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data digitally using on/off binary electrical signals, making it arguably the first digital calculator. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand- to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and also operated in binary. His parts couldn’t be engineered well enough for it ever to work properly, but he kept at it, eventually building three more models and devising arguably the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual English maths genius called Alan Turing.
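To give a flavour of what ‘processing data digitally using on/off binary signals’ actually means- this is a modern software sketch of the idea, nothing to do with Stibitz’s real relay circuits or Zuse’s mechanisms- here is binary addition built out of nothing but simple on/off logic:

```python
# Binary addition from bare on/off logic: the kind of operation that relays
# (or Zuse's mechanical switches) carried out electromechanically.

def full_adder(a, b, carry_in):
    """Add three single bits, returning (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 5 (binary 101) + 3 (binary 011), written least significant bit first:
print(add_binary([1, 0, 1], [1, 1, 0]))  # -> [0, 0, 0, 1], i.e. 8
```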

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs by reading and manipulating a set of symbols on a strip of tape. The symbol currently being read, together with the machine’s internal state, determines what symbol the machine writes in its place, whether it steps left or right along the strip, and which state it moves into next. Turing proved that one of these machines could replicate the behaviour of any algorithm- and since computers are just devices for running algorithms, it could in principle replicate any modern computer too. Thus, if a Turing machine (as they are now known) can theoretically solve a problem, then so can a general algorithm, and vice versa if it can’t. These machines not only laid the foundations of computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as the first designs to use the same medium for both data and programs, as nearly all modern computers do. This concept is central to what is now known as the von Neumann architecture, after the man who later set the idea out explicitly, building on Turing’s work.
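A Turing machine is simple enough to sketch in a few lines of modern code. The following is my own toy example, not anything from Turing’s paper: a machine whose only trick is to flip every bit on its tape and then halt when it runs out of rules to apply.

```python
# A minimal Turing machine simulator. The rule table maps
# (current state, symbol read) -> (symbol to write, head move, next state).
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape; unwritten cells read as '_'
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:   # no applicable rule: the machine halts
            break
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips 0s and 1s until it steps off the end of the input.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine(flip_rules, "10110"))  # -> "01001"
```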

Turing machines contributed one further, vital concept to modern computing- that of Turing-completeness. A single Turing machine capable of replicating the behaviour of any other theoretically possible Turing machine is known as a universal Turing machine, and any system that can do the same- that can, in principle, compute anything a Turing machine can- is said to be Turing-complete. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ conditional logic that characterises a computer rather than a calculator. Ensuring that a computer system or programming language is Turing-complete is a key part of its design, since it guarantees the versatility to perform, in principle, any task that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, by 1942 there was a war on, and Turing was employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task since they knew the structure and internal layout of the Enigma machine. They were then faced with a new and more daunting prospect: the Lorenz cipher, produced by an even more complex machine for which they had no blueprints. The genius at Bletchley apparently knew no bounds, however, and the team (Bill Tutte foremost among them, helped along by a statistical method of Turing’s) eventually worked out the machine’s logical functioning without ever having seen one. From this a method for deciphering its messages was formulated, but it required an iterative process that took hours of mind-numbing calculation to get a result out. A faster way of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world- the first electronic, digital and (partially) programmable computer ever to exist. Its mathematical operation was not highly sophisticated: it used an optical reader- a light shining through the holes in a paper tape onto sensitive detectors- to read the encoded messages, and logic circuits built from vacuum tubes (state-of-the-art electronics at the time) to compare the resulting pattern of holes to another pattern generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not get a precise match, since it didn’t know the original message content) it flagged up that setup as a potential candidate for the message’s encryption, which could then be tested by hand, saving many hundreds of man-hours. But despite its inherent simplicity, its legacy is one of proving a point to the world- that electronic, programmable computers were both possible and viable bits of hardware- and it paved the way for modern-day computing to develop.
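The statistical heart of that process can be sketched very crudely in modern code- this is just the counting-and-thresholding idea, not Colossus’s actual logic, and the bit streams and threshold below are entirely made up:

```python
# A crude sketch of Colossus's job: count how often a candidate key stream
# agrees with the intercepted cipher stream, and flag any machine setting
# whose agreement score beats a threshold. All values here are invented.
def score(cipher_bits, candidate_bits):
    """Count the positions where the two bit streams agree."""
    return sum(c == k for c, k in zip(cipher_bits, candidate_bits))

def promising_settings(cipher_bits, candidates, threshold):
    """Return the candidate settings worth handing to a human cryptanalyst."""
    return [name for name, bits in candidates.items()
            if score(cipher_bits, bits) >= threshold]

cipher = [1, 0, 1, 1, 0, 0, 1, 0]
candidates = {
    "setting A": [1, 0, 0, 1, 0, 1, 1, 0],  # agrees in 6 of 8 places
    "setting B": [0, 1, 0, 0, 1, 1, 0, 1],  # agrees in 0 of 8 places
}
print(promising_settings(cipher, candidates, threshold=6))  # -> ['setting A']
```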