Components of components of components…

By the end of my last post, science had reached the level of GCSE physics/chemistry; the world is made of atoms, atoms consist of electrons orbiting a nucleus, and a nucleus consists of a mixture of positively charged protons and neutrally charged neutrons. Some thought that this was the deepest level things could go; that everything was made simply of these three things and that they were the fundamental particles of the universe. However, others pointed out the enormous difference in mass between an electron and a proton, suggesting that the proton and neutron were not as fundamental as the electron, and that we could look even deeper.

In any case, by this point our model of the inside of a nucleus was incomplete anyway; in 1932 James Chadwick had discovered (and named) the neutron, first theorised by Ernest Rutherford to act as a ‘glue’ preventing the protons of a nucleus from repelling one another and causing the whole thing to break into pieces. However, nobody actually had any idea exactly how this worked, so in 1934 a concept known as the nuclear force was suggested. This theory, proposed by Hideki Yukawa, held that nucleons (then still considered fundamental particles) exchanged particles he called mesons; smaller than nucleons, they acted as carriers of the nuclear force. The physics behind this is almost unintelligible to anyone who isn’t a career academic (as I am not), but this is because there is no equivalent to the nuclear force that we encounter during the day-to-day. We find it very easy to understand electromagnetism because we have all seen magnets attracting and repelling one another and see the effects of electricity every day, but the nuclear force was something altogether stranger; a side effect of the constant exchange of mesons between nucleons*. The meson was finally found (proving Yukawa’s theory) in 1947, and Yukawa won the 1949 Nobel Prize for it. We now understand that mesons are not fundamental either; the truly fundamental carrier of the strong force is a particle called the gluon, which binds together the contents of nucleons and mesons alike (more on those contents below), its name hinting at this purpose, coming from the word ‘glue’.

*This, I am told, becomes a lot easier to understand once electromagnetism has been studied from the point of view of two particles exchanging photons, but I’m getting out of my depth here; time to move on.

At this point, the physics world decided to take stock; the list of all the different subatomic particles that had been discovered became known as ‘the particle zoo’, but our understanding of them was still patchy. We knew nothing of what the various nucleons and mesons consisted of, how they were joined together, or what allowed the strong nuclear force to even exist; where did mesons come from? How could these particles, roughly a seventh of the mass of a proton, be emitted from one without tearing the thing to pieces?

Nobody really had the answers to these, but when investigating them people began to discover other new particles, of a similar size and mass to the nucleons. Most of these particles were unstable and extremely short-lived, decaying in trillionths of trillionths of a second, but whilst they did exist they could be detected using incredibly sophisticated machinery, and their existence, whilst not ostensibly meaning anything, was a tantalising clue for physicists. This family of nucleon-like particles was later called baryons, and in 1961 American physicist Murray Gell-Mann organised the various baryons and mesons that had been discovered into symmetrical patterns, a system that became known as the eightfold way. The mesons formed one group of eight, as did all the baryons with a ‘spin’ (a quantum property of subatomic particles that I won’t even try to explain) of 1/2. The baryons with a spin of 3/2 (or one and a half) formed a larger pattern of ten; except that only nine of them had been discovered. The symmetry of the pattern allowed Gell-Mann to extrapolate and theorise about the existence of a tenth ‘spin 3/2’ baryon, which he called the omega baryon. This particle, with properties matching almost exactly those he predicted, was discovered in 1964 by a group experimenting with a particle accelerator (a wonderful device that takes two very small things and throws them at one another in the hope that they will collide and smash to pieces; particle physics is a surprisingly crude business, and few other methods have ever been devised for ‘looking inside’ these weird and wonderful particles), and Gell-Mann took the Nobel prize five years later.
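To give a flavour of the pattern-spotting involved, here is that spin 3/2 family laid out as a quick Python sketch, arranged by ‘strangeness’ (yet another quantum property I won’t try to explain); the gap in the bottom row is the one Gell-Mann filled:

```python
# The spin 3/2 baryons, arranged by strangeness (rows) and charge (columns),
# as the picture stood in 1962. None marks the gap Gell-Mann predicted a
# particle for: the omega baryon, duly found in 1964.
DECUPLET = {
    0: ["Delta-", "Delta0", "Delta+", "Delta++"],
    -1: ["Sigma*-", "Sigma*0", "Sigma*+"],
    -2: ["Xi*-", "Xi*0"],
    -3: [None],
}

for strangeness, row in DECUPLET.items():
    names = [name or "???" for name in row]
    print(f"strangeness {strangeness:+d}: {', '.join(names)}")
```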

But, before any of this, the principle of the eightfold way had been extrapolated a stage further. Gell-Mann and George Zweig independently developed a theory concerning entirely theoretical particles known as quarks; they imagined three ‘flavours’ of quark (which they called, completely arbitrarily, the up, down and strange quarks), each with its own properties of spin, electrical charge and such. They theorised that each of the different hadrons (as mesons and baryons are collectively known) was made up of a different combination of these quarks, and that the overall properties of each particle were due, basically, to the properties of its constituent quarks added together. At the time, this was considered somewhat airy-fairy; Zweig and Gell-Mann had absolutely no physical evidence, and their theory was essentially little more than a mathematical construct to explain the properties of the different particles people had discovered. Within a year, two supporters of the theory, Sheldon Lee Glashow and James Bjorken, suggested that a fourth quark, which they called the ‘charm’ quark, should be added to the theory, in order to better explain radioactivity (ask me about the weak nuclear force, go on, I dare you). It was also realised that the quark model could finally explain the kaon and pion, two particles discovered in cosmic rays 15 years earlier that nobody had properly understood. Support for the quark theory grew; and then, in 1968, a team studying deep inelastic scattering (another wonderfully blunt technique that involves firing an electron at a nucleus and studying how it bounces off in minute detail) revealed a proton to consist of three point-like objects, rather than being the solid, fundamental blob of matter it had previously been thought of as. Three point-like objects matched exactly Zweig and Gell-Mann’s prediction for the existence of quarks; they had finally moved from mathematical theory to physical reality.
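The ‘add the quarks together’ idea is easy to see in action; here’s a minimal Python sketch using the standard textbook quark charges (the proton and neutron compositions are the ones the quark model assigns):

```python
from fractions import Fraction

# Electric charge of each quark flavour, in units of the electron's charge
QUARK_CHARGE = {
    "up": Fraction(2, 3),
    "down": Fraction(-1, 3),
    "strange": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """A hadron's charge is simply the sum of its quarks' charges."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(hadron_charge(["up", "up", "down"]))    # proton  -> 1
print(hadron_charge(["up", "down", "down"]))  # neutron -> 0
```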

(The quarks discovered were of the up and down flavours; the charm quark wouldn’t be discovered until 1974, by which time two more quarks, the top and bottom, had been predicted by an incredibly obscure theory concerning the relationship between antimatter and normal matter. No, I’m not going to explain how that works. For the record, the bottom quark was discovered in 1977 and the top quark in 1995)

Nowadays, the six quarks form an integral part of the standard model; physics’ best attempt to explain how everything in the world works, at least on the level of fundamental interactions. Many consider them, along with the six leptons and four bosons*, to be the fundamental particles that everything is made of; these particles exist, are fundamental, and that’s an end to it. But the Standard Model is far from complete; it isn’t readily compatible with general relativity, and so explains neither gravity nor many observed effects in cosmology blamed on ‘dark matter’ or ‘dark energy’- plus it gives rise to a few paradoxical situations that we aren’t sure how to explain. Some say it just isn’t finished yet, and that we just need to think of another theory or two and discover another boson. Others say that we need to look deeper once again and find out what quarks themselves contain…

*A boson is anything, like a gluon, that ‘carries’ a fundamental force; the recently discovered Higgs boson sits slightly apart from these four, since it exists not to carry a force but to affect the behaviour of the W and Z bosons, giving them (among other particles) their mass
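For reference, here is the roster described above written out as a simple Python structure; nothing clever, just the standard list, with the Higgs kept separate for the reason given in the footnote:

```python
# The fundamental particles of the Standard Model, as described above
STANDARD_MODEL = {
    "quarks": ["up", "down", "charm", "strange", "top", "bottom"],
    "leptons": ["electron", "electron neutrino", "muon", "muon neutrino",
                "tau", "tau neutrino"],
    "gauge bosons": ["photon", "gluon", "W boson", "Z boson"],
}
HIGGS = "Higgs boson"  # kept separate: gives mass rather than carrying a force

for family, members in STANDARD_MODEL.items():
    print(f"{len(members)} {family}: {', '.join(members)}")
```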


The Story of the Atom

Possibly the earliest scientific question we as a race attempted to answer was ‘what is our world made of?’. People reasoned that everything had to be made of something- all the machines and things we build have different components in them that we can identify, so it seemed natural that those materials and components were in turn made of some ‘stuff’ or other. Some reasoned that everything was made up of the most common things present in our earth; the classical ‘elements’ of earth, air, fire and water. But throughout the latter stages of the last millennium, the burgeoning science of chemistry began to debunk this idea. People sought a new theory to answer what everything consisted of, what the building blocks were, and hoped to find in this search an answer to several other questions; why chemicals that reacted together did so in fixed ratios, for example. For a solution to this problem, they returned to an idea almost as old as science itself; that everything consisted of tiny blobs of matter, invisible to the naked eye, that joined to one another in special ways. The way they joined together varied depending on the stuff they made up, hence the different properties of different materials, and the changing of these ‘joinings’ was what was responsible for chemical reactions and their behaviour. The earliest scientists who theorised the existence of these things called them corpuscles; nowadays we call them atoms.

By the turn of the twentieth century, thanks to two hundred years of chemistry using atoms to conveniently explain their observations, it was considered common knowledge among the scientific community that an atom was the basic building block of matter, and it was generally considered to be the smallest piece of matter in the universe; everything was made of atoms, and atoms were fundamental and solid. However, in 1897 JJ Thomson discovered the electron, with its small negative charge, and his evidence suggested that electrons were a constituent part of atoms. But atoms were neutrally charged, so there had to be some positive charge present to balance it out; Thomson postulated that the negative electrons ‘floated’ within a sea of positive charge, in what became known as the plum pudding model. Atoms were not fundamental at all; even these components of all matter had components themselves. A later experiment by Ernest Rutherford sought to test the plum pudding model; he bombarded a thin piece of gold foil with positively charged alpha particles, and found that some were deflected at wild angles but that most passed straight through. This suggested, rather than a large uniform area of positive charge, a small area of very highly concentrated positive charge, such that when an alpha particle came close to it, it was repelled violently (just like putting two like poles of a magnet together), but that most of the time it would miss this positive charge completely; most of the atom was empty space. So, he thought, the atom must be like the solar system, with the negative electrons acting like planets orbiting a central, positive nucleus.

This made sense in theory, but the maths didn’t check out; classical physics predicted that the orbiting electrons would radiate away their energy and spiral into the nucleus, and thus that every atom in creation should collapse in a fraction of a second. It took Niels Bohr to suggest that the electrons might be confined to discrete orbital energy levels (roughly corresponding to distances from the nucleus) for the model of the atom to be complete; these energy levels (or ‘shells’) were later extrapolated to explain why chemical reactions occur, and the whole of chemistry can basically be boiled down to different atoms swapping electrons between energy levels in accordance with the second law of thermodynamics. Bohr’s explanation drew heavily from Max Planck’s recent quantum theory, which modelled light as coming in discrete packets (photons) with fixed energies, and this suggested that electrons were also quantum particles; this ran contrary to people’s previous understanding of them, since they had been presumed to be solid ‘blobs’ of matter. This was but one step along the principle that defines quantum theory; nothing is actually real, everything is quantum, so don’t even try to imagine how it all works.
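Bohr’s ‘discrete energy levels’ are easy to put numbers on for hydrogen; a minimal sketch using the standard 13.6 eV figure, showing the photon given off when an electron drops between two levels:

```python
# Bohr model of hydrogen: each allowed orbit has energy E_n = -13.6 eV / n^2
RYDBERG_EV = 13.6

def energy_level(n):
    return -RYDBERG_EV / n ** 2

# An electron dropping from level 3 to level 2 emits a photon carrying
# exactly the energy difference; this is the famous red H-alpha line.
photon_ev = energy_level(3) - energy_level(2)   # ~1.89 eV
wavelength_nm = 1239.84 / photon_ev             # convert eV to nanometres
print(f"{photon_ev:.2f} eV -> {wavelength_nm:.0f} nm")  # ~656 nm, visible red
```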

However, this still left the problem of the nucleus unsolved; what was this area of such great charge density packed tightly into the centre of each atom, around which the electrons moved? What was it made of? How big was it? How was it able to account for almost all of a substance’s mass, given how little the electrons weighed?

Subsequent experiments have revealed an atomic nucleus to be tiny almost beyond imagining; if your hand were the size of the earth, an atom would be roughly one millimetre in diameter, but if an atom were the size of St. Paul’s Cathedral then its nucleus would be the size of a full stop. The sheer tininess of such a thing defies human comprehension. However, this tells us nothing about the nucleus’ structure; it took Ernest Rutherford (the guy who had disproved the plum pudding model) to take the first step along this road when, in 1918, he confirmed that the nucleus of a hydrogen atom comprised just one component (or ‘nucleon’, as we collectively call them today). Since this component had a positive charge, to cancel out the one negative electron of a hydrogen atom, he called it a proton, and then (entirely correctly) postulated that all the other positive charges in larger atomic nuclei were caused by more protons stuck together in the nucleus. However, having multiple positive charges all in one place would normally cause them to repel one another, so Rutherford suggested that there might be some neutrally-charged particles in there as well, acting as a kind of nuclear glue. He called these neutrons (since they were neutrally charged), and he has since been proved correct; neutrons and protons are of roughly the same mass, collectively constitute around 99.95% of any given atom’s mass, and are found in almost every atomic nucleus (ordinary hydrogen, with its lone proton, being the one exception). However, even these weren’t quite fundamental subatomic particles, and as the 20th century drew on, scientists began to delve even deeper inside the atom; and I’ll pick up that story next time.
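That 99.95% figure is easy to sanity-check yourself; a quick back-of-the-envelope in Python using the standard particle masses (carbon-12 chosen purely as an example):

```python
# Standard particle masses, in kilograms
M_PROTON = 1.6726e-27
M_NEUTRON = 1.6749e-27
M_ELECTRON = 9.1094e-31

# Carbon-12: 6 protons, 6 neutrons, 6 electrons (binding energy ignored)
nucleus = 6 * M_PROTON + 6 * M_NEUTRON
electrons = 6 * M_ELECTRON
print(f"nucleus share of the atom's mass: {nucleus / (nucleus + electrons):.2%}")
# -> about 99.97%, comfortably in line with the figure quoted above
```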

The Red Flower

Fire is, without a doubt, humanity’s oldest invention and its greatest friend; to many, the fundamental example of what separates us from other animals. The abilities to keep warm through the coldest nights and harshest winters, to scare away predators by harnessing this strange force of nature, and to cook a joint of meat because screw it, it tastes better that way, are incredibly valuable ones, and they have seen us through many a tough moment. Over the centuries, fire in one form or another has been used for everything from waging war to furthering science, and very grateful we are for it too.

However, whilst the social history of fire is interesting, if I were to do a post on it then you dear readers would be faced with 1000 words of rather repetitive and somewhat boring myergh (technical term), so instead I thought I would take this opportunity to resort to my other old friend in these matters: science, as well as a few things learned from several years of very casual outdoorsmanship.

Fire is the natural product of any sufficiently exothermic reaction (ie one that gives out heat, rather than taking it in). These reactions can be of any type, but since fire can only form in air, most of those we are familiar with tend to be oxidation reactions; oxygen from the air bonding chemically with the substance in question. (There are exceptions; a sample of potassium placed in water will float on the top and react with the water itself, becoming surrounded by a lilac flame sufficiently hot to melt it, and start fizzing violently and pushing itself around the container. A larger dose of potassium, or a more reactive alkali metal such as rubidium, will explode.) The emission of heat causes a relatively gentle warming effect for the immediate area, but close to the site of the reaction itself a very large amount of heat is emitted in a small area. This excites the molecules of gas close to the reaction and causes them to vibrate violently, emitting photons of electromagnetic radiation as they do so in the form of heat & light (among other things). These photons cause the gas to glow brightly, creating the visible flame we can see; this large amount of thermal energy also ionises a lot of atoms and molecules in the area of the flame, meaning that a flame has a slight charge and is more conductive than the surrounding air. Because of this, flame probes are sometimes used to get rid of the excess charge in sensitive electromagnetic experiments, and flamethrowers can be made to fire lightning.

Most often the glowing flame results in the characteristic reddish/orange colour of fire, but some reactions, such as the potassium one mentioned, cause flames to emit radiation of other frequencies for a variety of reasons (chief among them the temperature of the flame and the spectral properties of the material in question), giving flames of different colours, whilst a white-hot area of a fire is so hot that the molecules don’t care what frequency the photons they’re emitting are at so long as they can get rid of the things fast enough. Thus, light of all wavelengths gets emitted, and we see white light. The flickery nature of a flame is generally caused by the excited hot air moving about rapidly, until it gets far enough away from the source of heat to cool down and stop glowing; this process happens all the time with hundreds of packets of hot air, causing the flame to flicker back and forth.
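The link between flame temperature and colour is captured by Wien’s displacement law (hotter things glow at shorter wavelengths); a minimal sketch, with the flame temperatures below being rough illustrative guesses rather than measured values:

```python
# Wien's displacement law: peak emission wavelength (m) = b / T
WIEN_B = 2.898e-3  # metre-kelvins

def peak_wavelength_nm(temp_kelvin):
    return WIEN_B / temp_kelvin * 1e9

for name, temp in [("candle flame", 1300), ("white-hot fire", 5500)]:
    print(f"{name} ({temp} K): peak ~{peak_wavelength_nm(temp):.0f} nm")
# The 1300 K flame peaks deep in the infrared (~2200 nm); what we see is the
# short-wavelength tail of its glow, hence the red/orange colour. At 5500 K
# the peak sits in the middle of the visible range, and the light looks white.
```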

However, we must remember that fires do not just give out heat, but must take some in too. This is to do with the way the chemical reaction that generates the heat works; the process requires the bonds between atoms to be broken, which uses up energy, before they can be reformed into a different pattern to release energy, and the energy needed to break the bonds and get the reaction going is known as the activation energy. Getting the molecules of the stuff you’re trying to react up to the activation energy is the really hard part of lighting a fire, and different reactions (involving the burning of different stuff) have different activation energies, and thus different ‘ignition temperatures’ for the materials involved. Paper, for example, famously has an ignition temperature of 451 Fahrenheit (around 233 centigrade; which means, incidentally, that you can cook with it if you’re sufficiently careful and not in a hurry to eat), whilst wood’s is only a little higher at around 300 degrees centigrade, both of which are comfortably below the temperature of a spark or flame. However, we must remember that neither fuel will ignite whilst it is wet, since the heat goes into boiling off the water rather than raising the fuel to its ignition temperature; this means it often takes a while to dry wood out sufficiently for it to catch, and big, solid blocks of wood take quite a bit of energy to heat up.
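To put some numbers on that, here’s a quick sketch of the temperature comparison, plus the Arrhenius equation, which describes how violently reaction rate climbs with temperature; the activation energy used here is an illustrative assumption, not a measured figure for wood or paper:

```python
import math

def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

print(f"Paper: 451 F = {fahrenheit_to_celsius(451):.0f} C")  # ~233 C

# Arrhenius equation: reaction rate is proportional to exp(-Ea / (R * T))
R = 8.314    # gas constant, J/(mol K)
EA = 120e3   # activation energy, J/mol -- an illustrative assumption

def relative_rate(temp_celsius):
    return math.exp(-EA / (R * (temp_celsius + 273.15)))

# Compare room temperature with the ~600 C of a small flame:
print(f"speed-up: {relative_rate(600) / relative_rate(25):.1e}")  # ~7e13 times
```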

From all of this information we can extrapolate the first rule that everybody learns about firelighting; that in order to catch, a fire needs air, dry fuel and heat (the air provides the oxygen, the fuel the stuff it reacts with, and the heat the activation energy). When one of these is lacking, one must make up for it by providing an excess of at least one of the other two, whilst remembering not to let the provision of the other ingredients suffer; it does no good, for example, to throw tons of fuel onto a new, small fire, since it will snuff out its access to the air and put the fire out. Whilst fuel and air are usually relatively easy to come by when starting a fire, heat is always the tricky thing; matches are short lived, sparks even more so, and the fact that most of your fuel is likely to be damp makes the job even harder.

Provision of heat is also the main reason behind all of our classical methods of putting a fire out; covering it with cold water cuts it off from both heat and oxygen, and whilst blowing on a fire will provide it with more oxygen, it will also blow away the warm air close to the fire and replace it with cold, causing small flames like candles to be snuffed out (it is for this reason that a fire should be blown on very gently if you are trying to get it to catch, and also why doing so will cause the flames, which are caused by hot air remember, to disappear but the embers to glow more brightly and burn with renewed vigour once you have stopped blowing). Once a fire has sufficient heat, it is almost impossible to put out, and blowing on it will only provide it with more oxygen and cause it to burn faster, as was ably demonstrated during the Great Fire of London. I myself have once, with a few friends, laid a fire that burned for 11 hours straight; many times it was reduced to a few humble embers, but it was so hot that all we had to do was throw another log on and it would instantly begin to burn again. When the time came to put it out, it took half an hour for the embers to dim their glow.

Drunken Science

In my last post, I talked about the societal impact of alcohol and its place in our everyday culture; today, however, my inner nerd has taken it upon himself to get stuck into the real meat of the question of alcohol, the chemistry and biology of it all, and how all the science fits together.

To a scientist, the word ‘alcohol’ does not refer to a specific substance at all, but rather to a family of chemical compounds containing an oxygen and hydrogen atom bonded to one another (known as an OH group) attached to a chain of carbon atoms. Different members of the family (or ‘homologous series’, to give it its proper name) have different numbers of carbon atoms and slightly different physical properties (such as melting point), and they also react chemically to form slightly different compounds. The stuff we drink is the one with two carbon atoms in its chain, and is technically known as ethanol.
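The homologous series is regular enough that you can generate it; a minimal sketch of the first few straight-chain members (general formula CnH2n+1OH):

```python
# Straight-chain alcohols follow the general formula CnH(2n+1)OH
NAMES = ["methanol", "ethanol", "propanol", "butanol", "pentanol"]

for n, name in enumerate(NAMES, start=1):
    print(f"{name}: C{n}H{2 * n + 1}OH")
# ethanol, the one we drink, comes out as C2H5OH
```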

There are a few things about ethanol that make it special stuff to us humans, and all of them come down to chemical reactions and biological interactions. The first is how it is formed; there are many different types of sugar found in nature (fructose & sucrose are two common examples; the ‘-ose’ ending is what denotes them as sugars), but one of the most common is glucose, with six carbon atoms. This is the substance our body converts starch and other sugars into in order to use for energy or store as glycogen. As such, many biological systems are primed to convert other sugars into glucose, and it just so happens that when glucose breaks down in the presence of the right enzymes, it forms carbon dioxide and an alcohol; ethanol, to be precise, in a process commonly known as fermentation (the first stage of the breakdown, in which the glucose itself is split apart, is what a scientist would call glycolysis).
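The reaction itself balances rather neatly: one glucose molecule gives two molecules of ethanol and two of carbon dioxide. A quick mass-balance sketch:

```python
# Fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
# Approximate molar masses, in grams per mole
M_GLUCOSE = 180.16
M_ETHANOL = 46.07
M_CO2 = 44.01

glucose_grams = 100
moles = glucose_grams / M_GLUCOSE
print(f"{2 * moles * M_ETHANOL:.0f} g ethanol "
      f"+ {2 * moles * M_CO2:.0f} g carbon dioxide")
# 100 g of glucose -> ~51 g of ethanol and ~49 g of CO2; mass is conserved
```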

Yeast performs this process in order to respire (ie produce energy) anaerobically (in the absence of oxygen), so leading to the two most common cases where this reaction occurs. The first we know as brewing, in which an anaerobic atmosphere is deliberately produced to make alcohol; the other occurs when baking bread. The yeast we put in the bread causes the sugar (ie glucose) in it to produce carbon dioxide, which is what causes the bread to rise since it has been filled with gas, whilst the ethanol tends to boil off in the heat of the baking process. For industrial purposes, ethanol is made by hydrating (reacting with water) an oil by-product called ethene, but the product isn’t generally something you’d want to drink.

But anyway, back to the booze itself, and this time what happens upon its entry into the body. Exactly why alcohol acts as a depressant and intoxicant (if that’s a proper word) is down to a very complex interaction with various parts and receptors of the brain that I am not nearly intelligent enough to understand, let alone explain. However, what I can explain is what happens when the body gets round to breaking the alcohol down and getting rid of the stuff. This takes place in the liver, an amazing organ that performs hundreds of jobs within the body and contains a vast repertoire of enzymes. One of these is known as alcohol dehydrogenase, which has the task of oxidising the alcohol (not a simple task, and one impossible without enzymes) into something the body can get rid of. However, the ethanol we drink is what is known as a primary alcohol (meaning the OH group is on the end of the carbon chain), and this causes it to oxidise in two stages, only the first of which can be done using alcohol dehydrogenase. This process converts the alcohol into an aldehyde (with an oxygen chemically double-bonded to the carbon where the OH group was), which in the case of ethanol is called acetaldehyde (or ethanal). This molecule cannot be broken down straight away, and instead gets itself lodged in the body’s tissues in such a way (thanks to its shape) as to produce mild toxins, activate our immune system and make us feel generally lousy. This is also known as having a hangover, and only ends when the body is able to complete the second stage of the oxidation process and convert the acetaldehyde into acetic acid, which the body can get rid of relatively easily. Acetic acid is commonly known as the active ingredient in vinegar, which is why alcoholics smell so bad and are often said to be ‘pickled’.

This process occurs in the same way when other alcohols enter the body, but ethanol is unique in how harmless (relatively speaking) its aldehyde is. Methanol, for example, can also be oxidised by alcohol dehydrogenase, but the aldehyde it produces (officially called methanal) is commonly known as formaldehyde; a highly toxic substance, used in preservation work and as a disinfectant, that will quickly poison the body. It is for this reason that methanol is present in the fuel commonly known as ‘meths’- ethanol actually produces more energy per gram and makes up 90% of the fuel by volume, but since meths is far cheaper than most alcoholic drinks, the toxic methanol is put in to prevent it being drunk by severely desperate alcoholics. Not that it stops many of them; methanol poisoning is a leading cause of death among the homeless.
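The two-stage oxidation pathway described above can be summed up in a little lookup table (common names in brackets):

```python
# Primary alcohol -> aldehyde -> carboxylic acid (the two oxidation stages)
OXIDATION = {
    "ethanol": ("ethanal (acetaldehyde)", "ethanoic (acetic) acid"),
    "methanol": ("methanal (formaldehyde)", "methanoic (formic) acid"),
}

for alcohol, (aldehyde, acid) in OXIDATION.items():
    print(f"{alcohol} -> {aldehyde} -> {acid}")
# For ethanol the middle step gives you a hangover; for methanol it poisons you
```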

Homeless people were also responsible for a major discovery in the field of alcohol research, concerning the causes of alcoholism. For many years it was thought that alcoholics were addicts purely mentally rather than biologically, and had just ‘let it get to them’, but some years ago a young student (I believe she was Canadian, but certainty of that fact and her name both escape me) was looking for some fresh cadavers for her PhD research. She went to the police and asked if she could use the bodies of the various dead homeless people whom they found on their morning beats, and when she started dissecting them she noticed signs of a compound in them that was known to be linked to heroin addiction. She mentioned to a friend that all these people appeared to be on heroin, but her friend pointed out that these people barely had enough to buy drink, let alone something as expensive as heroin. This young doctor-to-be realised she might be onto something, changed the focus of her research onto studying how alcohol was broken down by different bodies, and discovered something quite astonishing. Inside serious alcoholics, ethanol was being broken down into this substance previously only linked to heroin addiction, leading her to believe that for some unlucky people, the behaviour of their bodies made alcohol as addictive to them as heroin was to others. Whilst this research has by no means settled the issue, it did demonstrate two important facts; firstly, that whilst alcoholism certainly has some links to mental issues, it is also fundamentally biological and genetic by nature, and cannot be solely put down as the fault of the victim’s brain. Secondly, it ‘sciencified’ (my apologies to grammar nazis everywhere for making that word up) a fact already known by many reformed drinkers; that when a former alcoholic stops drinking, they can never go back. Not even one drink. There can be no ‘just having one’, or drinking socially with friends, because if one more drink hits their body, deprived for so long, there’s a very good chance it could kill them.

Still, that’s not a reason to get totally down about alcohol, for two very good reasons. The first of these comes from some (admittedly rather spurious) research suggesting that ‘addictive personalities’, including alcoholics, are far more likely to do well in life, have good jobs and overall succeed; alcoholics are, by nature, present at the top as well as the bottom of our society. The other concerns the one bit of science I haven’t tried to explain here- your body is remarkably good at dealing with alcohol, and we all know it can make us feel better, so if only for your mental health a little drink now and then isn’t an all bad thing after all. And anyway, it makes for some killer YouTube videos…

Getting bored with history lessons

Last post’s investigation into the post-Babbage history of computers took us up to around the end of the Second World War, before the computer age could really be said to have kicked off. However, with the coming of Alan Turing the biggest stumbling block for the intellectual development of computing as a science had been overcome, since the field now clearly understood what it was and where it was going. From then on, therefore, the history of computing is basically one long series of hardware improvements and business successes, and the only thing of real scholarly interest is Moore’s law. This law is an unofficial, yet surprisingly accurate, model of the exponential growth in the capabilities of computer hardware, stating that every 18 months computing hardware gets either twice as powerful, half the size, or half the price for the same other specifications. It is based on a 1965 paper by Gordon E Moore, who noted that the number of transistors on integrated circuits had been doubling roughly every year since their invention 7 years earlier (a rate he later revised to every two years). The modern-day figure of an 18-monthly doubling in performance comes from an Intel executive’s estimate based on both the increasing number of transistors and their getting faster & more efficient… but I’m getting sidetracked. The point I meant to make was that there is no point me continuing with a potted history of the last 70 years of computing, so in this post I wish to get on with the business of exactly how (roughly, fundamentally speaking) computers work.
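Moore’s law is simple enough to play with directly; a minimal sketch of that 18-month doubling figure (the starting point, the Intel 4004’s roughly 2,300 transistors in 1971, is just an illustrative baseline):

```python
def transistor_count(years_elapsed, start_count=2300, doubling_years=1.5):
    """Project transistor counts under an 18-month doubling period.

    start_count defaults to the Intel 4004's ~2,300 transistors (1971),
    purely as an illustrative baseline.
    """
    return start_count * 2 ** (years_elapsed / doubling_years)

# Four decades of 18-monthly doubling:
print(f"{transistor_count(40):,.0f}")  # ~2.5e11: exponential growth compounds ferociously
```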

A modern computer is, basically, a huge bundle of switches- literally billions of the things. Normal switches are obviously not up to the job, being both too large and requiring a mechanical rather than purely electrical input to operate, so computer designers have had to come up with electrically-activated switches instead. In Colossus’ day they used vacuum tubes, but these were large and prone to breaking so, in the late 1940s, the transistor was invented. This is a marvellous semiconductor-based device, but to explain how it works I’m going to have to go on a bit of a tangent.

Semiconductors are materials that do not conduct electricity freely and every which way like a metal, but do not insulate like wood or plastic either- sometimes they conduct, sometimes they don’t. In modern computing and electronics, silicon is the substance most readily used for this purpose. For use in a transistor, silicon (an element with four electrons in its outer atomic ‘shell’) must be ‘doped’ with other elements, meaning that they are ‘mixed’ into the chemical, crystalline structure of the silicon. Doping with a substance such as boron, with three electrons in its outer shell, creates areas each with a ‘missing’ electron, known as holes. Holes have, effectively, a positive charge compared to a ‘normal’ area of silicon (since electrons are negatively charged), so this kind of doping produces what is known as p-type silicon. Similarly, doping with something like phosphorus, with five outer shell electrons, produces an excess of negatively-charged electrons and n-type silicon. Thus electrons, and therefore electricity (made up entirely of the net movement of electrons from one area to another), find it easy to flow from n- to p-type silicon, but not the other way- the junction conducts in one direction and insulates in the other, hence a semiconductor. However, it is vital to remember that p-type silicon is not an insulator and does allow charge to pass through it freely, unlike pure, undoped silicon. A transistor generally consists of three layers of silicon sandwiched together, in the order NPN or PNP depending on the practicality of the situation, with each layer of the sandwich having a metal contact or ‘leg’ attached to it- the leg in the middle is called the base, and the ones at either side are called the emitter and collector.

Now, when the three layers of silicon are stuck next to one another, some of the free electrons in the n-type layer(s) jump to fill the holes in the adjacent p-type, creating areas of neutral, or zero, charge. These are called ‘depletion zones’ and are good insulators, meaning that there is a high electrical resistance across the transistor and that a current cannot flow between the emitter and collector, despite there usually being a voltage ‘drop’ between them trying to get a current flowing. However, when a small voltage is applied between the base and the emitter, a current can flow between these two layers without a problem, and as such it does. This pulls charge carriers across the border between the layers and shrinks the depletion zones, lowering the electrical resistance across the transistor and allowing a much larger current to flow between the collector and emitter. In short, one current can be used to ‘turn on’ another.
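That ‘one current turns on another’ trick is exactly what makes a transistor a switch, and switches compose into logic. A toy model in Python- nothing here simulates the actual semiconductor physics, it just captures the switching behaviour:

```python
def transistor(base_on: bool, supply_on: bool = True) -> bool:
    """Toy switch: current flows from collector to emitter only while a
    current is fed into the base."""
    return supply_on and base_on

# Wire two of these toy switches in series and you get an AND gate:
def and_gate(a: bool, b: bool) -> bool:
    return transistor(b, supply_on=transistor(a))

for a in (False, True):
    for b in (False, True):
        print(f"{a} AND {b} -> {and_gate(a, b)}")
```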

Transistor radios use this principle to amplify the signal they receive into a loud, clear sound, and if you crack one open you should be able to see some (well, if you know what you’re looking for). However, computer and manufacturing technology has got so advanced over the last 50 years that it is now possible to fit over ten million of these transistor switches onto a silicon chip the size of your thumbnail- and bear in mind that the entire Colossus machine, the machine that cracked the Lorenz cipher, contained only a couple of thousand vacuum tube switches all told. Modern technology is a wonderful thing, and the sheer achievement behind it is worth bearing in mind next time you get shocked over the price of a new computer (unless you’re buying an Apple- that’s just business elitism).

…and dammit, I’ve filled up a whole post again without getting onto what I really wanted to talk about. Ah well, there’s always next time…

(In which I promise to actually get on with talking about computers)