NUMBERS

One of the most endlessly charming parts of the human experience is our capacity to see something we can’t describe and just make something up in order to do so, never mind whether it makes any sense in the long run or not. History offers countless examples, but the mother lode of such situations has to be humanity’s invention of counting.

Numbers do not, in and of themselves, exist- they are simply a construct designed by our brains to help us get our heads around the awe-inspiring concept of the relative amounts of things. However, this hasn’t prevented this ‘neat little tool’ from spiralling out of control to form the vast field that is mathematics. Once merely a diverting pastime designed to help us get more use out of our counting tools, maths (I’m British, live with the spelling) first tentatively applied itself to shapes and geometry before experimenting with trigonometry, storming onwards to algebra, turning calculus into a total mess about four nanoseconds after its discovery of something useful, before just throwing it all together into a melting pot of cross-genre mayhem that eventually ended up as a field that is as close as STEM (science, technology, engineering and mathematics) gets to art, in that it has no discernible purpose other than for the sake of its own existence.

This is not to say that mathematics is not a useful field- far from it. The study of different ways of counting led to the discovery of binary arithmetic and enabled the birth of modern computing; huge chunks of astronomy and classical scientific experiments were and are reliant on the application of geometric and trigonometric principles; mathematical modelling has allowed us to predict behaviour ranging from economics & statistics to the weather (albeit with varying degrees of accuracy); and just about every aspect of modern science and engineering is grounded in the brute logic that is core mathematics. But… well, perhaps the best way to explain where the modern science of maths has led over the last century is to study the story of i.

One of the most basic functions we are able to perform on a number is to multiply it by something- a special case, when we multiply it by itself, is ‘squaring’ it (since a number ‘squared’ is equal to the area of a square with side lengths of that number). Naturally, there is a way of reversing this function, known as finding the square root of a number (ie square rooting the square of a number will yield the original number). However, the rules of arithmetic dictate that a negative number squared makes a positive one; hence no number squared makes a negative, and there is no such thing as the square root of a negative number, such as -1. So far, all I have done is use a very basic application of logic, something a five-year-old could understand, to explain a fact about ‘real’ numbers, but maths decided that it didn’t want to be unable to square root a negative number, so had to find a way round that problem. The solution? Invent an entirely new type of number, based on the quantity i (which equals the square root of -1), with its own totally arbitrary and made-up way of fitting on a number line, and which can in no way exist in real life.
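In fact, i has become so thoroughly respectable that programming languages now build it in. A quick sketch in Python (my own illustration, not anything from the mathematicians themselves- Python writes i as 1j):

```python
import cmath

# Python has complex numbers built in; the square root of -1 is written 1j.
i = 1j
assert i * i == -1                  # i squared really is -1
assert (3 + 4j) * (3 - 4j) == 25    # a complex number times its conjugate is real

# cmath.sqrt happily takes the square root that 'real' numbers said couldn't exist:
assert cmath.sqrt(-1) == 1j
```

The assertions all pass, which is either reassuring or mildly unsettling, depending on your point of view.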

Admittedly, i has turned out to be useful. When dealing with electromagnetic waves, physicists (quantum physicists especially) often bundle the oscillating electric and magnetic components into a single complex quantity, using the real and imaginary parts to keep track of said different components, but its main purpose was only ever to satisfy the obsessive nature of mathematicians by filling a hole in their theorems. Since then, it has just become another toy in the mathematician’s arsenal- something for them to play with, slip into inappropriate situations to try and solve abstract and largely irrelevant problems, and with which they can push the field of maths in ever more ridiculous directions.
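To illustrate the trick (a minimal sketch of my own, with made-up numbers- nothing here comes from any particular physicist): an oscillating wave can be written as the real part of a complex exponential, so that two components ninety degrees apart ride along in one number.

```python
import cmath
import math

# Represent an oscillation as a complex exponential e^(iωt) = cos(ωt) + i·sin(ωt).
omega = 2 * math.pi * 50      # angular frequency of a 50 Hz signal (arbitrary choice)
t = 0.003                     # a moment in time, in seconds (arbitrary choice)

phasor = cmath.exp(1j * omega * t)

# The real part recovers the ordinary cosine wave...
assert abs(phasor.real - math.cos(omega * t)) < 1e-12
# ...and the imaginary part carries the component shifted by a quarter-cycle.
assert abs(phasor.imag - math.sin(omega * t)) < 1e-12
```

One complex number, two physical components- which is exactly why the notation stuck.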

A good example of the way mathematics has started to lose any semblance of its grip on reality concerns the most famous problem in the whole of the mathematical world- Fermat’s last theorem. Pythagoras famously used the fact that, in certain cases, a squared plus b squared equals c squared as a way of solving some basic problems of geometry, but it was never known whether a cubed plus b cubed could ever equal c cubed if a, b and c were whole numbers. The same question stood for all other powers of a, b and c greater than 2, but in 1637 the brilliant French mathematician Pierre de Fermat claimed, in a scrawled note inside his copy of Diophantus’ Arithmetica, to have a proof of this fact ‘that is too large for this margin to contain’. This statement ensured the immortality of the puzzle, but its eventual solution (not found until 1995, leading most independent observers to conclude that Fermat must have made a mistake somewhere in his ‘marvellous proof’) took one man, Andrew Wiles, around a decade to complete. His proof involved showing that any solution to the theorem’s equation could be re-expressed as an incredibly weird equation that doesn’t exist in the real world, and that all equations of this type have a counterpart equation of an equally irrelevant type. However, since the ‘Fermat equation’ was too weird to have a counterpart in the other format, it could not logically exist- and so neither could any solution.
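For anyone who wants to kick the tyres of the theorem itself (a brute-force sketch of my own- emphatically not how Wiles did it), a few lines of Python will hunt for whole-number solutions up to a chosen limit and come up empty-handed for cubes:

```python
def fermat_counterexamples(power, limit):
    """Find all whole numbers (a, b, c) up to limit with a^power + b^power == c^power."""
    nth_powers = {c ** power: c for c in range(1, limit + 1)}
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):   # b >= a avoids counting mirror pairs twice
            total = a ** power + b ** power
            if total in nth_powers:
                hits.append((a, b, nth_powers[total]))
    return hits

# Pythagoras's case (squares) has plenty of solutions...
assert (3, 4, 5) in fermat_counterexamples(2, 20)
# ...but for cubes the search finds nothing, just as Fermat claimed.
assert fermat_counterexamples(3, 200) == []
```

Of course, checking numbers up to 200 proves nothing about the infinitely many beyond it- which is precisely why the full proof took 358 years and a decade of Wiles’s life.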

To a mathematician, this was the holy grail; not only did it finally lay to rest an age-old riddle, but it linked two hitherto unrelated branches of algebraic mathematics by way of proving what is (now it’s been solved) known as the Taniyama-Shimura theorem. To anyone interested in the real world, this exercise contributed nothing to it whatsoever- apart from satisfying a few nerds, nobody’s life was made easier by the solution, it didn’t solve any real-world problem, and it did not make the world a tangibly better place. In this respect, then, it was a total waste of time.

However, despite everything I’ve just said, I’m not going to conclude that all modern-day mathematics is a waste of time; very few human activities ever are. Mathematics is many things: among them ridiculous, confusing, full of contradictions and potential slip-ups and, as the field in which major prizes are won at a younger age than anywhere else in STEM, apparently full of people likely to belittle you out of future success should you enter the world of serious academia. But, for some people, maths is just what makes the world make sense, and at its heart that was all it was ever created to do. And if some people want their life to be all about the little symbols that make the world make sense, then well done to the world for making a place for them.

Oh, and there’s a theory doing the rounds of cosmology nowadays that reality is nothing more than a mathematical construct. Who knows in what obscure branch of reverse logarithmic integrals we’ll find answers about that one…


The Encyclopaedia Webbanica

Once again, today’s post will begin with a story- this time, one about a place that was envisaged over a hundred years ago. It was called the Mundaneum.

The Mundaneum today is a tiny museum in the city of Mons, Belgium, which opened in its current form in 1998. It is a far cry from the original, first conceptualised by Nobel Peace Prize winner Henri La Fontaine and fellow lawyer and pioneer Paul Otlet in 1895. The two men, Otlet in particular, had a vision- to create a place where every single piece of knowledge in the world was housed. Absolutely all of it.

Even in the 19th century, when the breadth of scientific knowledge was a million times smaller than it is today (a 19th-century version of New Scientist would be publishable about once a year), this was a truly gigantic undertaking from a practical perspective. Not only did Otlet and La Fontaine attempt to collect a copy of just about every book ever written in search of information, but they went further than any conventional library of the time by also looking through pamphlets, photographs, magazines and posters in search of data. The entire thing was stored on small 3×5 index cards kept in a carefully organised and detailed system of files, and this paper database eventually grew to contain over 12 million entries. People would send letters or telegraphs to the government-funded Mundaneum (the name referencing the French monde, meaning world, rather than mundane as in boring), whose staff would in turn search through the files in order to give a response to just about any question that could be asked.

However, the most interesting thing of all about Otlet’s operation, quite apart from the sheer conceptual genius of a man who was light-years ahead of his time, was his response to the problems posed when the enterprise got too big for its boots. After a while, the sheer volume of information and, more importantly, paper meant that the filing system was getting too big to be practical for the real world. Otlet realised that this was not a problem that could ever be resolved by more space or manpower- the problem lay in the use of paper itself. And this was where Otlet pulled off his masterstroke of foresight.

Otlet envisaged a version of the Mundaneum where the whole paper-and-telegraph business would be unnecessary- instead, he foresaw a “mechanical, collective brain”, through which the people of the world could access all the information it had to offer via a system of “electric microscopes”. Not only that, but he envisaged the potential for these ‘microscopes’ to connect to one another, letting people “participate, applaud, give ovations, [or] sing in the chorus”. Basically, a pre-war Belgian lawyer predicted the internet (and, in the latter statement, social networking too).

Otlet has never been included in the pantheon of web pioneers- he died in 1944 after his beloved Mundaneum had been occupied and used to house a Nazi art collection, and his vision of the web as more of an information storage tool for nerdy types is hardly what we have today. But, to me, his vision of a web as a hub for sharing information and a man-made font of all knowledge is realised, at least in part, by one huge and desperately appealing corner of the web today: Wikipedia.

If you take a step back and look at Wikipedia as a whole, its enormous success and popularity can be quite hard to understand. Beginning from a practical perspective, it is a notoriously difficult site to work with- whilst accessing the information is very user-friendly, the editing process can be hideously confusing and difficult, especially for the not very computer-literate (seriously, try it). My own personal attempts at article-editing have almost always resulted in failure, bar some very small changes and additions to existing text (where I don’t have to deal with the formatting). This difficulty in formatting is a large contributor to another issue- Wikipedia articles are incredibly text-heavy, usually with only a few pictures and captions, which would be a major turn-off in a magazine or book. The very concept of an encyclopaedia edited and made by the masses, rather than a select team of experts, also (initially) seems incredibly foolhardy. Literally anyone can type in just about anything they want, leaving the site incredibly prone to either vandalism or accidental misdirection (see xkcd.com/978/ for Randall Munroe’s take on how it can get things wrong). The site has come under heavy criticism over the years for this fact, particularly on its pages about people (Dan Carter, the New Zealand fly-half, has apparently considered taking up stamp collecting after hundreds of fans sent him stamps on the strength of a Wikipedia entry stating that he was a philatelist), and letting normal people edit it also leaves it prone to bias creeping in, despite the best efforts of Wikipedia’s team of writers and editors (personally, I suspect the site keeps its editing software deliberately difficult to use in order to minimise the number of people who can edit it easily, and so to minimise this problem).

But, all that aside… Wikipedia is truly wonderful- it epitomises all that is good about the web. It is a free-to-use service, run by a not-for-profit organisation that is devoid of advertising and is funded solely by the people of the web whom it serves. It is the font of all knowledge to an entire generation of students and schoolchildren, and is the number one place to go for anyone looking for an answer about anything- or who’s just interested in something and would like to learn more. It is built on the principles of everyone sharing and contributing- even flaws or areas lacking citation are flagged by casual users if they slip past the editors the first time around. Its success is built upon its size, both big and small- the sheer quantity of articles (there are now almost four million, most of which are a bit bigger than would have fitted on one of Otlet’s index cards) means that it can be relied upon for just about any query (and will be at the top of 80% of my Google searches), but its small server space and staff size (fewer than 50,000, most of whom are volunteers- the Wikimedia Foundation itself employs fewer than 150 people) keep running costs low and allow it to keep on functioning despite its user-sourced funding model. Wikipedia is currently the 6th (ish) most visited website in the world, with 12 billion page views a month. And all this from an entirely not-for-profit organisation designed to let people know facts.

Nowadays, the Mundaneum is a small museum, a monument to a noble but ultimately flawed experiment. Its original offices in Brussels were left empty, gathering dust after the war, until a graduate student discovered them and eventually provoked enough interest to move the old collection to Mons, where it currently resides as a shadow of its former glory. But its spirit lives on in the collective brain that its founder envisaged. God bless you, Wikipedia- long may you continue.