‘Fish’ is one of my favourite words. Having only a single syllable means it can be dropped into conversation without a second thought, enabling one to cause maximum confusion with minimal time spent considering one’s move (which often rather spoils the moment). The very… forward nature of the word also suits this function- its bluntness, its definitive end and beginning with little in the way of middle to get distracted by, almost forces it to take centre stage in any statement, whether alone or accompanied by other words, demanding to be said loud and proud without a trace of fear or embarrassment. It also helps that the word is very rarely an appropriate response to anything, enhancing its inherent weirdness.

Ahem. Sorry about that.

However, fish themselves are very interesting in their own right; and yes, I am about to attempt an overall summary of one of the largest groups in the animal kingdom in less than 1000 words. For one thing, every single vertebrate on the planet is descended from them; in 1999 a fossil less than 3cm long and 524 million years old was discovered in China with a single ‘stick’ of rigid material, probably cartilage, running down the length of its body. It may be the only example ever discovered of Myllokunmingia fengjiaoa (awesome name), but that tiny little fossil has proved to be among the most significant ever found. Although it has not been proven, that little rod of cartilage is thought to be the forerunner of the first ever backbone, making Myllokunmingia the world’s first fish and the direct ancestor of everything from you to the pigeon outside your window. It’s quite a humbling thought.

The incredible age of fish as a group (which also means that very few specimens of early fish survive) means that piscine evolution is not studied as a single science; the three different classes of fish (bony, cartilaginous and jawless, representing the likes of cod, sharks and hagfish respectively- a fourth class of armoured fish died out some 360 million years ago) all split into separate lineages long before any other group of vertebrates began to evolve, and all modern land-based vertebrates (tetrapods, meaning four-limbed) are direct descendants of the bony fish, the most successful of the three groups. This has two interesting side-effects: firstly, that a salmon is more closely related to you than to a shark, and secondly (for precisely this reason) that some argue there is no such thing as a fish. The term ‘fish’ was introduced as a coverall for everything whose lack of weight-bearing limbs confines it to the water, long before evolutionary biology had really got going, and technically the likes of sharks and lampreys should each get a name to themselves- but it appears we’re stuck with fish, so any grumpy biologists are just going to have to suck it.

The reason for this early designation of fish in our language is almost certainly culinary in origin, for this is the main reason we ever came, and indeed continue to come, into contact with them at all. Fish have been an available, nutritious and relatively simple-to-catch food source for humans for many a millennium, but a mixture of their somewhat limited size, the fact that historically they couldn’t easily be farmed and the fact that bacon tastes damn good meant they came to be considered by some, particularly in the west (fish has always enjoyed far greater popularity in far eastern cultures), to be the poor cousins of ‘proper meat’ like pork or beef. Indeed, many vegetarians (including me; it’s how I was brought up) will eschew meat but quite happily eat fish in large quantities, usually using the logic that fish are so damn stupid they’re almost vegetables anyway. Vegetarians were not, however, the main reason for fish’s survival as a common food for everyone in Europe, including those living far inland- for that we can thank the Church. Somewhere in the dim and distant past, the Catholic Church decreed that one should not eat red meat on Fridays and other fast days- but that fish was permitted. This kept fish a common dish throughout Europe, as well as encouraging the rampant rule-bending that always accompanies any inconvenient law; beavers were hunted almost to extinction in Europe after being classed as fish under this rule. It was also this ruling that led to lamprey (a type of jawless fish that looks like a cross between a sea snake and a leech) becoming a delicacy among the crowned heads of Europe, and Henry I of England (youngest son of William the Conqueror, in case you wanted to know) is reported to have died from eating too many of the things.

The feature most characteristic of fish is, of course, gills, even though not all fish have them and many other aquatic species do (albeit less obviously). To many, how gills work is an absolute mystery, but then again how many of you can say, when it comes right down to the science of the gas exchange process, how your lungs work? In both systems the basic principle is the same; the blood vessels within the structure concerned are small and permeable enough to allow gas molecules to move across the vessel’s wall, letting the carbon dioxide built up from moving and generally being alive pass out of the bloodstream and fresh oxygen pass in. The only real difference concerns structure; the lungs consist of a complex, intertwining labyrinth of air spaces of various sizes with blood vessels spread over their surface, designed to filter oxygen from the air, whilst gills basically string the blood vessels up along a series of sticks and hold them in the path of flowing water to absorb the oxygen dissolved within it- gills are usually located such that water flows in through the mouth and out via the gills as the fish swims forward. Some species, such as many of the larger sharks, must keep swimming constantly to maintain a supply of oxygen-rich water over their gills, or else the water beside them would begin to stagnate- but most fish, and even some sharks such as nurse sharks, are able to pump water over their gills whilst stationary, allowing them to lie still and do… sharky things. Interestingly, the reason gills won’t work on land isn’t simply that they aren’t designed to filter oxygen from the air; a major contributory factor is that, without the surrounding water to support them, the gills’ structure is prone to collapse, causing parts of them to cease functioning as a gas exchange mechanism.

Well, that was a nice ramble. What’s up next time, I wonder…



Cryptography is a funny business; shady from the start, the whole enterprise of codes and ciphers was specifically designed to hide one’s intentions and move in the shadows, unnoticed. However, the art of cryptography has changed almost beyond recognition in the last hundred years thanks to the invention of the computer, and what was once an art limited by the imagination of the nerd responsible has now turned into a question of sheer computing might. But, as always, the best way to start this story is at the beginning…

There are two different methods of applying cryptography to a message: with a code or with a cipher. A code is a system that replaces whole words with other words (‘Unleash a fox’ might mean ‘Send more ammunition’, for example), whilst a cipher involves changing individual letters and their ordering. Codes are generally limited to a few phrases that can be easily memorised, or else require endless cross-referencing with a book of known ‘translations’, and are relatively insecure when it comes to highly secretive information. Therefore, most modern encoding (yes, that word is still used; ‘enciphering’ sounds stupid) takes the form of employing ciphers, and has done for hundreds of years; they rely solely on the application of a simple rule, require far smaller reference manuals, and are more secure.

Early attempts at ciphers were charmingly simple; the ‘Caesar cipher’ is a classic example, famously invented and used by Julius Caesar, in which each letter is replaced by the one three along from it in the alphabet (so A becomes D, B becomes E and so on). Augustus Caesar, who succeeded Julius, didn’t set much store by cryptography and used a similar system with only a one-place shift (so A to B and such)- and since knowledge of the Caesar cipher was widespread, his messages were hopelessly insecure. These ‘substitution ciphers’ suffer from a common problem; the relative frequency with which certain letters appear in the English language (E being the most common, followed by T) is well known, so by analysing the frequency of the letters occurring in a substitution-enciphered message one can work out fairly accurately which ciphertext letter corresponds to which real one, and work out the rest from there. This problem can be partly overcome by careful phrasing and keeping messages short, but it’s a weakness nonetheless.
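Both the Caesar cipher and the frequency-analysis attack on it can be sketched in a few lines of Python (the function names here are my own, purely for illustration):

```python
import string
from collections import Counter

def caesar(text, shift):
    """Replace each letter with the one `shift` places along the alphabet."""
    out = []
    for ch in text.upper():
        if ch in string.ascii_uppercase:
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

def guess_shift(ciphertext):
    """Frequency analysis: assume the commonest letter stands for E."""
    counts = Counter(ch for ch in ciphertext if ch.isalpha())
    commonest = counts.most_common(1)[0][0]
    return (ord(commonest) - ord('E')) % 26

print(caesar("ATTACK AT DAWN", 3))   # Julius's version: DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # shifting back decodes it
```

On any message of decent length, `guess_shift` recovers the key almost instantly; on very short messages the letter frequencies are too skewed for it to be reliable, which is exactly why careful phrasing and brevity offer some protection.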

Another classic method is the transposition cipher, which changes the order of letters- the trick lies in having a suitable ‘key’ with which to do the reordering. A classic example is to write the message in a rectangle of a size known to both encoder and recipient, writing in columns but ‘reading it off’ in rows; the recipient can then reverse the process to read the original message. This is a nice method, and it’s very hard to decipher a single message encoded this way, but if the ‘key’ (here, the size of the rectangle) is not changed regularly then one’s adversaries can figure it out after a while. The army of ancient Sparta used a kind of transposition cipher based on a tapered wooden rod called a skytale (pronounced skih-tah-ly), around which a strip of parchment was wrapped and the message written down it, one letter on each turn. The recipient then wrapped the strip around a skytale of identical girth and taper (the tapering prevented the letters being evenly spaced, making the cipher harder to break), and read the message off- again, a nice idea, but the need to make a new set of skytales for everyone every time the key needed changing rendered it impractical. Nonetheless, transposition ciphers are a nice idea, and the Union used them to great effect during the American Civil War.
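The rectangle method is easy to sketch in Python- a minimal, illustrative version (the function names and the padding letter are my own choices):

```python
def rect_encode(message, rows):
    """Write the message down the columns of a `rows`-high rectangle,
    then read the cipher off along the rows."""
    message = message.replace(" ", "").upper()
    cols = -(-len(message) // rows)            # ceiling division
    message = message.ljust(rows * cols, "X")  # pad to fill the rectangle
    # column c holds message[c*rows:(c+1)*rows]; read across each row
    return "".join(message[c * rows + r] for r in range(rows) for c in range(cols))

def rect_decode(ciphertext, rows):
    """The recipient, knowing the rectangle's height, reverses the process."""
    cols = len(ciphertext) // rows
    return "".join(ciphertext[r * cols + c] for c in range(cols) for r in range(rows))

print(rect_encode("HELLO WORLD", 2))                  # HLOOLELWRD
print(rect_decode(rect_encode("HELLO WORLD", 2), 2))  # HELLOWORLD
```

Note that the entire key is the single number `rows`, which is exactly why keeping the same rectangle for too long is fatal: an adversary only has to try a handful of heights.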

In the last century, cryptography has developed into even more of an advanced science, and most modern ciphers are based on the concept of substitution- however, to avoid the problem of letter frequencies giving the key away, modern ciphers use intricate and elaborate systems to change, letter by letter, by how much the ‘value’ of each letter shifts. The German Lorenz cipher machine used during the Second World War (whose solving I have discussed in a previous post) put the message through a series of wheels and electronic pickups to produce another letter, but the wheels moved on one click after each letter was typed, totally changing the internal mechanical arrangement. The only way the British cryptographers working against it could find to solve it was brute force: designing a computer specifically to test possible starting positions of the wheels against likely messages. This generally took them several hours- but if they had had a computer as powerful as the one I am typing on, then, provided it was set up in the correct manner, it would have the raw power to ‘solve’ the day’s starting positions within a few minutes. Such is the power of modern computers, and against such opponents must modern cryptographers pit themselves.

One technique used nowadays presents a computer with a number that is simply too big for it to deal with; such systems are called ‘trapdoor ciphers’. The principle is relatively simple: it is far easier to find that 17 x 19 = 323 than it is to find the prime factors of 323, even with a computer, so if we upscale this business to huge numbers a computer will whimper and hide in the corner just looking at them. If we take two prime numbers, each more than 100 digits long (this is, by the way, the source of the oft-quoted story that the CIA will pay $10,000 to anyone who finds a prime number of over 100 digits, due to its intelligence value), and multiply them together, we get a vast number with only two prime factors, which we shall for now call M. Then we convert our message into number form (so A=01, B=02, I LIKE TRAINS=0912091105201801091419) and raise the resulting number to the power of a third (smaller; three digits will do) prime number. This yields a number somewhat bigger than M, and successive lots of M are then subtracted from it until it reaches a number less than M (this is known as modulo arithmetic, and is best visualised by example: 19+16=35, but 19+16 (mod 24)=11, since 35-24=11). This number is then passed to the intended recipient, who can decode it relatively easily (well, so long as they have a correctly programmed computer) if they know the two prime factors of M. (This business is actually known as the RSA problem, and for reasons I cannot hope to understand, current mathematical thinking suggests that finding the prime factors of M is the easiest way of solving it; however, this has not yet been proven, and the matter is still open for debate.) However, even if someone trying to break the message knows M and has the most powerful computer on earth, it would take them thousands of years to find out what its prime factors are.
To many, trapdoor ciphers have made cryptanalysis (the art of breaking someone else’s codes) a dead art.
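The whole scheme can be played through in Python with toy numbers; this is just the arithmetic described above, with 17 and 19 standing in for the 100-digit primes (real systems of this kind, such as RSA, use vastly larger values, and the variable names here are my own):

```python
p, q = 17, 19          # the two secret primes (toy-sized here)
M = p * q              # 323: the public number anyone may know
e = 7                  # the third, smaller prime used as the power

message = 42           # already in number form; it must be smaller than M

# Encoding: raise to the power e, then subtract lots of M (modulo arithmetic);
# three-argument pow() does exactly that
ciphertext = pow(message, e, M)

# Decoding is only easy if you know p and q, from which an 'undoing'
# power d can be computed
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (needs Python 3.8+)
decoded = pow(ciphertext, d, M)
print(ciphertext, decoded)          # the decoded number comes back as 42
```

Anyone intercepting `ciphertext` who knows only M and e is stuck: to find d they must first factorise M, which is trivial for 323 but hopeless for a 200-digit number.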

Man, there’s a ton of cool crypto stuff I haven’t even mentioned yet… screw it, this is going to be a two-parter. See you with it on Wednesday…

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction- the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, as well as being responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press on our keyboards have symbols on them, that these symbols mean something to us, or that the machine is so built as to produce the same symbols on the monitor when we press them- but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts words to instructions. Built into the workings of each language is a list of recognised ‘words’, each with a corresponding, but entirely different, string of binary data associated with it representing the appropriate set of ‘ons and offs’ that will get the computer to perform the correct task. This works in one of two ways: an ‘interpreter’ is an inbuilt system whereby the program is stored just as words and converted to ‘machine code’ by the interpreter as it is read from memory, but the more common form is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
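The ‘list of recognised words with actions attached’ idea can be caricatured in a few lines of Python- a toy interpreter for a made-up two-word language (everything here is invented purely for illustration):

```python
# Each recognised 'word' maps to an action, looked up and executed
# line by line as the program is read: this is interpreting
actions = {
    "PUSH": lambda stack, arg: stack.append(int(arg)),  # put a number down
    "ADD":  lambda stack, _: stack.append(stack.pop() + stack.pop()),
}

def interpret(source):
    """Run a tiny program, one instruction per line, on a simple stack."""
    stack = []
    for line in source.splitlines():
        word, _, arg = line.partition(" ")  # split the word from its argument
        actions[word](stack, arg)
    return stack

print(interpret("PUSH 2\nPUSH 3\nADD"))  # [5]
```

A compiler does the same lookup, but once, up front: it would emit the machine-code equivalents of those actions as a standalone program, so the word list never needs consulting again while the program runs.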

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up, determining exactly which gate the inputs go through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises CPU performance and translates programs from memory to screen. The precise method by which this latter function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise and are better at dealing with different things, work with varying degrees of efficiency, and are more or less vulnerable to virus attack. However, perhaps the most vital things that modern OSs do on our home computers are the things that, at first glance, seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not theoretically be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to the hard disk (where it isn’t) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly). It must also cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
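A scheduler’s trick of taking turns can be sketched in a few lines of Python- a hypothetical round-robin version, nothing like as sophisticated as a real kernel’s (all the names here are my own invention):

```python
from collections import deque

def round_robin(processes):
    """Run one step of each 'process' in turn, so several jobs appear
    to progress at once even though only one runs at any instant."""
    queue = deque(processes.items())
    trace = []
    while queue:
        name, steps = queue.popleft()
        trace.append(f"{name}:{steps[0]}")    # one time slice of work
        if len(steps) > 1:
            queue.append((name, steps[1:]))   # unfinished: back of the queue
    return trace

# Minesweeper and the boot process take turns, interleaved step by step
print(round_robin({"minesweeper": ["draw", "click"],
                   "boot": ["load", "init", "done"]}))
```

Because the real thing switches thousands of times a second, neither ‘process’ ever notices the gaps- which is the whole illusion of multi-tasking.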

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.


Today, as I’m sure very few of you will be aware (hey, I wasn’t until a few minutes ago), is World Mental Health Day. I have touched on my own personal experiences of mental health problems before, having spent the last few years suffering from depression, but I feel today is a suitably appropriate time to bring it up again, because this is an issue that, in the modern world, cannot be talked about enough.

Y’see, conservative estimates claim at least 1 in 4 of us will suffer from a mental health problem at some point in our lives, be it a relatively temporary one such as post-natal depression or a lifelong battle with the likes of manic depressive disorder or schizophrenia. Mental health problems are also among the top five biggest killers in the developed world, through a mixture of suicide, drug use, self-harm and self-neglect, and as such there is next to zero chance that you will go through your life without somebody you know very closely suffering or even dying as a result of what’s going on in their upstairs. If mental health disorders were a disease in the traditional sense, this would be labelled a red-alert, emergency-level pandemic.

However, despite the prevalence and danger associated with mental health problems, the majority of sufferers suffer in silence. Some have argued that the two correlate because of the mindset of sufferers, but this claim does not change the fact that 9 out of 10 people with a mental health problem say they feel a degree of social stigma and discrimination against their disability (and yes, that description is appropriate; a damaged mind is surely just as debilitating, if not more so, than a damaged body), and that this prevents them from coming out to their friends about their suffering.

The reason for this is an all too human one; we humans rely heavily, perhaps more so than any other species, on our sense of sight to formulate our mental picture of the world around us, from the obviously there to the unsaid subtext. We are, therefore, easily able to identify with and relate to physical injuries and obvious behaviours that suggest something is ‘broken’ with another’s body, and that they are injured or disabled is clear to us. However, a mental problem is confined to the unseen recesses of the brain, hidden away from the physical world and hard for us to identify as a problem at all. We may see people acting down a lot, hanging their head and giving other hints through their body language that something’s up, but everybody looks that way from time to time, and it is generally considered a regrettable but normal part of being human. If we see someone acting like that every day, our sympathy for what we perceive as a short-term issue may often turn into annoyance that they aren’t resolving it, creating a sense that they are in the wrong for being so unhappy the whole time and not taking a positive outlook on life.

Then we must also consider the fact that mental health problems tend to place a lot of emphasis on the self, rather than on one’s surroundings. With a physical disability, such as a broken leg, the source of our problems, and our worry, is centred on the physical world around us: how can I get up that flight of stairs, will I be able to keep up with everyone, what if I slip or get knocked over, and so on. However, when one suffers from depression, anxiety or whatever, the source of our worry is generally to do with our own personal failings or problems, and less with the world around us. We might continually beat ourselves up over the most microscopic of failings and tell ourselves that we’re not good enough, or be filled with an overbearing, unidentifiable sense of dread that we can only identify as emanating from within ourselves. Thus, when suffering from mental issues we tend to focus our attention inwards, creating a barrier between our suffering and the outside world, and making it hard to break through that wall and let others know of our suffering.

All this creates an environment in which mental health is a subject not to be broached in general conversation, one that just doesn’t get talked about; not so much because it is a taboo of any kind, but more due to a sense that it does not fit into the real world that well. This is a problem even in the environment of counselling, which is specifically designed to address such issues, as people are naturally reluctant to let it all out or even to ‘give in’ and admit there is something wrong. Many people who take a break from counselling, me included, confident that we’ve come a long way towards solving our various issues, are for this reason resistant to the idea of going back if things take a turn for the worse again.

And it’s not as simple as making people go to counselling either, because quite frequently that’s not the answer. Some people go to the wrong place and find their counsellor is not good at relating to and helping them; others may need medication or some such, rather than words, to get them through the worst times; and for others counselling just plain doesn’t work. But this does not detract from the fact that no mental health condition in any person, however serious, is so bad as to be untreatable, and the best treatment I’ve ever found for my depression has been those moments when people are just nice to me, and make me feel like I belong.

This then, is the two-part message of today, of World Mental Health Day, and of every day and every person across the world; if you have a mental health problem, talk. Get it out there, let people know. Tell your friends, tell your family, find a therapist and tell them, but break the walls of your own mental imprisonment and let the message out. This is not something that should be forever bottled up inside us.

And for the rest of you, those of us who do not suffer or are not at the moment, your task is perhaps even more important; be there. Be prepared to hear that someone has a mental health problem, be ready to offer them support, a shoulder to lean on, but most importantly, just be a nice human being. Share a little love wherever and to whoever you can, and help to make the world a better place for every silent sufferer out there.