In the early years of the 19th century, science was on a roll. The dark days of alchemy were beginning to give way to the modern science of chemistry as we know it today, the world of physics and the study of electromagnetism were starting to get going, and the world was on the brink of an industrial revolution that would be powered by scientists and engineers. Slowly, we were beginning to piece together exactly how our world works, and some dared to dream of a day when we might understand all of it. Yes, it would be a long way off, yes, there would be stumbling blocks, but maybe, just maybe, so long as we didn't discover anything inconvenient like advanced cosmology, we might one day begin to see the light at the end of the long tunnel of science.

Most of this stuff was the preserve of hopeless dreamers, but in the year 1814 a brilliant mathematician and philosopher called Pierre-Simon Laplace, responsible for underpinning vast quantities of modern mathematics and cosmology, published a bold new essay that took this concept to extremes. Laplace lived in the age of 'the clockwork universe', a theory that held Newton's laws of motion to be sacrosanct truths and claimed that these laws of physics kept the universe ticking over, just like the mechanical innards of a clock- and just like a clock, the universe was predictable. Just as one hour after five o'clock will always be six o'clock, presuming a perfect clock, so every event in the world can be predicted from its causes. Laplace took this theory to its logical conclusion: if some vast intellect were able to know the precise position of every particle in the universe, and all the forces and motions acting upon them, at a single point in time, then using the laws of physics such an intellect would be able to know everything, see into the past, and predict the future.
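The clockwork idea can be sketched in a few lines of code: a minimal toy simulation (all numbers here- gravity, timestep, initial height- are illustrative assumptions, not anything Laplace wrote down) showing that under Newton's laws, identical initial conditions always evolve to identical futures.

```python
# Toy illustration of Laplace's 'clockwork universe': a particle stepped
# forward under Newton's second law. Same initial state -> same future.

def simulate(position, velocity, steps, dt=0.01, g=-9.81):
    """Advance a particle under constant gravity using simple Euler steps."""
    for _ in range(steps):
        velocity += g * dt        # acceleration from Newton's second law
        position += velocity * dt
    return position, velocity

# Two runs from identical initial conditions are indistinguishable - the
# present fully determines the future, exactly as the clockwork model claims.
run_a = simulate(100.0, 0.0, steps=1000)
run_b = simulate(100.0, 0.0, steps=1000)
assert run_a == run_b
```

Of course, this only demonstrates the model's internal logic; whether the real universe behaves like this is precisely what the rest of the century would call into question.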

Those who believed in this theory were generally disapproved of by the Church for devaluing the role of God and the unaccountable divine, whilst others thought it implied a lack of free will (although these issues are still considered somewhat up for debate to this day). However, among the scientific community Laplace's ideas conjured up a flurry of debate; some entirely believed in the concept of a predictable universe, in the theory of scientific determinism (as it became known), whilst others argued that the sheer difficulty of getting any 'vast intellect' to fully comprehend so much as a heap of sand made Laplace's arguments completely pointless. Other, far later, observers would call into question some of the axioms upon which the model of the clockwork universe was based, such as Newton's laws of motion (which collapse when one does not take into account relativity at very high velocities); but the majority of the scientific community was rather taken with the idea that they could know everything about something should they choose to. Perhaps the universe was a bit much, but being able to predict everything, to an infinitely precise degree, about a few atoms perhaps, seemed like a very tempting idea, offering a delightful sense of certainty. More than anything, to these scientists their work now had one overarching goal: to complete the laws necessary to provide a deterministic picture of the universe.

However, by the late 19th century scientific determinism was beginning to stand on rather shaky ground, although the attack against it came from the rather unexpected direction of science being used to support the religious viewpoint. By this time the laws of thermodynamics, detailing the behaviour of molecules in relation to the heat energy they have, had been formulated, and fundamental to the second law of thermodynamics (which is, to this day, one of the fundamental principles of physics) was the concept of entropy. Entropy (denoted in physics by the symbol S, for no obvious reason) is a measure of the degree of disorder or 'randomness' inherent in a system; or, for want of a clearer explanation, consider a sandy beach. All of the grains of sand on the beach can be arranged in a vast number of different ways that still form the shape of a disorganised heap, but if we build a giant, detailed sandcastle instead there are far fewer arrangements of the grains of sand that will result in the same structure. Therefore, if we consider the two situations separately, it is far, far more likely that we will end up with a disorganised 'beach' structure than with a castle forming of its own accord (which is why sandcastles don't spring fully formed from the sea), and we say that the beach has a higher degree of entropy than the castle. This increased likelihood of higher-entropy situations, on an atomic scale, means that the universe tends to increase its overall level of entropy; if we attempt to impose order upon it (by making a sandcastle, rather than waiting for one to be formed purely by chance), we must input energy, which increases the entropy of the surrounding air and results in a net entropy increase. This is the second law of thermodynamics: entropy always increases, and this principle underlies vast quantities of modern physics and chemistry.

If we extrapolate this situation backwards, we realise that the universe must have had a definite beginning at some point; a starting point of order from which things get steadily more chaotic, for order cannot increase infinitely as we look backwards in time. This suggests some point at which our current universe sprang into being, including all the laws of physics that make it up; but this cannot have occurred under 'our' laws of physics that we experience in the everyday universe, as they could not kickstart their own existence. There must, therefore, have been some other, higher power to get the clockwork universe in motion, destroying the image of it as some eternal, unquestionable predictive cycle. At the time, this was seen as vindicating the idea of the existence of God to start everything off; it would be some years before Georges Lemaître would venture what became the Big Bang theory (later backed by Edwin Hubble's observations), but even now we understand next to nothing about the moment of our creation.

However, this argument wasn't exactly a death knell for determinism; after all, the laws of physics could still describe our existing universe as a ticking clock, surely? True; the killer blow for that idea would come from Werner Heisenberg in 1927.

Heisenberg was a particle physicist, often described as the inventor of quantum mechanics (work that would win him a Nobel Prize). The key feature of his work here was the concept of uncertainty on a subatomic level: that certain pairs of properties, such as the position and momentum of a particle, are impossible to know exactly at any one time. There is an incredibly complicated explanation for this concerning wave functions and matrix algebra, but a simpler way to explain part of the concept concerns how we examine something's position (apologies in advance to all physics students I end up annoying). If we want to know where something is, then the tried and tested method is to look at the thing; this requires photons of light to bounce off the object and enter our eyes, or hypersensitive measuring equipment if we want to get really advanced. However, at a subatomic level a photon of light represents a sizeable chunk of energy, so when it bounces off an atom or subatomic particle, allowing us to know where it is, it so messes around with the particle's energy that it changes its velocity and momentum, although we cannot predict how. Thus, the more precisely we try to measure the position of something, the less accurately we are able to know its velocity (and vice versa; I recognise this explanation is incomplete, but can we just take it as read that finer minds than mine agree on this point). Therefore, we cannot ever measure every property of every particle in a given space, never mind the engineering challenge; it's simply not possible.
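Heisenberg's principle has a precise numerical form: Δx·Δp ≥ ħ/2, where ħ is the reduced Planck constant. A minimal sketch (the electron confinement scales below are my own illustrative choices, not from the argument above) shows how tightening our grip on position blows up the minimum uncertainty in momentum:

```python
# Heisenberg's uncertainty bound: dx * dp >= hbar / 2.
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_uncertainty(dx: float) -> float:
    """Smallest momentum spread (kg*m/s) permitted for a position spread dx (m)."""
    return HBAR / (2 * dx)

# Confine an electron to an atom-sized region (~1e-10 m) versus a
# nucleus-sized one (~1e-15 m): the hundred-thousand-fold tighter
# confinement forces a hundred-thousand-fold larger momentum uncertainty.
atom_scale = min_momentum_uncertainty(1e-10)
nucleus_scale = min_momentum_uncertainty(1e-15)
assert nucleus_scale > atom_scale
```

Crucially, this is a hard floor built into the mathematics, not a statement about clumsy instruments- no amount of engineering ingenuity can buy its way under it.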

This idea did not enter the scientific consciousness comfortably; many scientists were incensed by the idea that they couldn't know everything, that their goal of an entirely predictable, deterministic universe would forever remain unfulfilled. Einstein was a particularly vocal critic, spending much of his later career attempting to find holes in quantum mechanics and back up his famous statement that 'God does not play dice with the universe'. But eventually the scientific world came to accept the truth: determinism was dead. The universe would never seem so sure and predictable again.


The End of The World

As everyone who understands the concept of buying a new calendar when the old one runs out should be aware, the world is emphatically not due to end on December 21st this year, despite a Mayan 'prophecy' that basically amounts to one guy's arm getting really tired and deciding 'sod carving the next year in, it's ages off anyway'. Most of you should also be aware of the kind of cosmology theories that talk about the end of the world/the sun's expansion/the universe committing suicide that are always hastily suffixed with an 'in 200 billion years or so', making the point that there's really no need to worry and that the world is probably going to be fine for the foreseeable future; or at least, that by the time anything serious does happen we're probably not going to be in a position to complain.

However, when thinking about this, we come across a rather interesting, if slightly macabre, gap; an area nobody really wants to talk about thanks to a mixture of lack of certainty and simple fear. At some point in the future, we as a race and a culture will surely not be here. Currently, we are. Therefore, between those two points, the human race is going to die.

Now, from a purely biological perspective there is nothing especially surprising or worrying about this; species die out all the time (in fact we humans are getting so good at inadvertent mass slaughter that between 2 and 20 species are going extinct every day), and others evolve and adapt to slowly change the face of the earth. We humans and our couple of hundred thousand years of existence, and especially our mere few thousand years of organised mass society, are the merest blip in the earth's long and varied history. But we are also unique in more ways than one; the first species to have, to a very great extent, removed itself from the endless fight for survival and started taking control of events once so far beyond our imagination as to be put down to the work of gods. If the human race is to die, as it surely will one day, we are simply getting too smart and too good at thinking about these things for it to be the kind of gradual decline and changing of a delicate ecosystem that characterises most 'natural' extinctions. If we are to go down, it's going to be big and it's going to be VERY messy.

In short, with the world staying as it is and as it has for the past few millennia, we're not going to be dying out any time soon. Nor is this very biologically unusual, for when a species goes extinct it is usually the result of either another species with which it is in direct competition out-competing it and causing it to starve, or a change in environmental conditions meaning it is no longer well-adapted for the environment it finds itself in. But once again, human beings appear to be rather above this; having carved out what isn't so much an ecological niche as a categorical redefining of the way the world works, there is no other creature that could be considered our biological competitor, and the thing that has always set humans apart ecologically is our ability to adapt. From the ice ages where we hunted mammoth, to the African deserts where the San people still live in isolation, there are very few things the earth can throw at us that are beyond the wit of humanity to live through. Especially a human race that is beginning to look upon terraforming and cultured food as a pretty neat idea.

So, if our environment is going to change sufficiently for us to begin dying out, things are going to have to change not only in the extreme, but very quickly as well (well, quickly in geological terms at least). This required pace of change limits the number of potential extinction options to a very small, select list. Most of these you could make a disaster film out of (and in most cases one has), but one that is slightly less dramatic (although they still did end up making a film about it) is global warming.

Some people are adamant that global warming is either a) a myth, b) not anything to do with human activity or c) both (which kind of seems a contradiction in terms, but hey). These people can be safely categorized under ‘don’t know what they’re *%^&ing talking about’, as any scientific explanation that covers all the available facts cannot fail to reach the conclusion that global warming not only exists, but that it’s our fault. Not only that, but it could very well genuinely screw up the world- we are used to the idea that, in the long run, somebody will sort it out, we’ll come up with a solution and it’ll all be OK, but one day we might have to come to terms with a state of affairs where the combined efforts of our entire race are simply not enough. It’s like the way cancer always happens to someone else, until one morning you find a lump. One day, we might fail to save ourselves.

The extent to which global warming looks set to screw around with our climate is currently unclear, but some potential scenarios are extreme to say the least. Nothing is ever quite going to match up to the picture portrayed in The Day After Tomorrow (for the record, the Gulf Stream will take around a decade to shut down if/when it does so), but some scenarios are pretty horrific. Some predict the flooding of vast swathes of the earth's surface, including most of our biggest cities, whilst others predict mass desertification, a collapse of many of the ecosystems we rely on, or polar conditions swarming across Northern Europe. The prospect of the human population being decimated is a very real one.

But destroyed? Totally? After thousands of years of human society slowly getting the better of and dominating all that surrounds it? I don't know about you, but I find that quite unlikely- at the very least, it seems to me like it's going to take more than just one wave of climate change to finish us off completely. So, if climate change is unlikely to kill us, then what else is left?

Well, in rather a nice, circular fashion, cosmology may have the answer, even if we don't somehow manage to pull off a miracle and hang around long enough to let the sun's expansion get us. We may one day be able to blast asteroids out of existence. We might be able to stop the super-volcano that is Yellowstone National Park blowing itself to smithereens when it erupts, as it is due to in the not-too-distant future (we also might fail at both of those things, and let either wipe us out, but ho hum). But could we ever prevent a nearby star firing a gamma ray burst at us, of a power sufficient to cause one of the largest extinctions in earth's history the last time one may have struck? Well, we'll just have to wait and see…

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of alternative intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument that is being thrown at a steadily wider range of high-level science nowadays (cloning, gene analysis and editing, even synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to 'play god' and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has developed over millions upon millions of years of evolution, and has been slowly sculpted and built upon by nature over this time to find the optimal solution for self-preservation and general well-being- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, do the next big thing before everybody else, means that nobody is stopping to think of the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it probably hasn't helped matters that it is typically put around by the Church- never noted as getting on particularly well with scientists (they just won't let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any other of the myriad sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the biggest other dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? The first person ever killed by a robot died during an industrial accident at a car manufacturing plant, but if such an event were ever to occur with a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being 'switched off'? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument that has been made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind are ever able to equal or even better our mental faculties, then they could one day be able to overthrow us as a race. This is a very real concern when confronting the stereotypical image of a war robot- that of an invincible metal machine capable of wanton destruction on par with a medium-sized tank, and which is easily able to repair itself and make more of itself. It's an idea that is reasonably unlikely to ever become real, but it actually raises another idea- one that is more likely to happen, more likely to build unnoticed, and is far, far more scary. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. When compared to a machine's ability to churn out hundreds of copies of itself simply from a blueprint and a design, the human reproductive system suddenly looks very fragile and inefficient. When compared to tough, hard, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly, the organic world doesn't seem quite so amazing, and certainly more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.