Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the computer’s medium of information and instruction (binary data) into our medium of the same: words and language. Obviously, a computer does not understand that the keys we press have symbols on them, that these symbols mean something to us, or that the machine is built to reproduce those same symbols on the monitor when we press them; but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the computer. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts words into instructions. Built into the workings of each is a list of ‘words’ in binary, each word having a corresponding, but entirely different, string of data associated with it that represents the appropriate set of ‘ons and offs’ that will get the computer to perform the correct task. This works in one of two ways: with an ‘interpreter’, the program is stored just as words and is converted into ‘machine code’ by the interpreter as it is read from memory; but the more common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
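To make the interpreter idea a bit more concrete, here’s a tiny sketch in Python (entirely made up for illustration; the instruction words and the whole scheme are invented, and real language implementations are far more involved): each instruction ‘word’ is looked up and turned into its corresponding operation on the spot, as the program runs.

```python
# A toy interpreter: each instruction 'word' is looked up and executed
# immediately, rather than being compiled to machine code first.
# Purely illustrative; the instruction set here is invented.

def run(program):
    stack = []                     # working space for values
    for line in program:
        word, *args = line.split()
        if word == "PUSH":         # put a number on the stack
            stack.append(int(args[0]))
        elif word == "ADD":        # pop two numbers, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "PRINT":      # show the top of the stack
            print(stack[-1])
        else:
            raise ValueError(f"Unknown instruction: {word}")

run(["PUSH 2", "PUSH 3", "ADD", "PRINT"])   # prints 5
```

A compiler does much the same word-to-operation lookup, but all in one go, ahead of time, producing a stand-alone file of machine instructions rather than carrying them out on the spot.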

That is, basically, how computer programs work. But there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that will set it up to determine exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and can, theoretically, be written using nothing more than a magnetized needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU’s performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise, and are better at dealing with, different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack.

However, perhaps the most vital things that modern OSs do on our home computers are the ones that, at first glance, seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not, in theory, be possible for a computer to multi-task; the sheer concept of playing Minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of it all happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information in and run on, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to hard disk (where it isn’t) to free up more space (this is how computers with very little free memory run programs, and the time taken to do this for large amounts of data is why they run so slowly), and must cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
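To give a rough flavour of what a scheduler is up to, here’s a toy Python sketch of one common approach, round-robin time-slicing (the process names, step counts and slice size are all invented for illustration; no real OS is remotely this simple):

```python
from collections import deque

# A toy round-robin scheduler: each 'process' is just a number of steps
# still to do. The single 'CPU' runs a small slice of one process, then
# moves on to the next, giving the illusion that everything runs at once.

def schedule(processes, slice_size=2):
    ready = deque(processes.items())        # queue of (name, remaining steps)
    while ready:
        name, steps = ready.popleft()
        for _ in range(min(slice_size, steps)):
            print(f"running one step of {name}")
        steps -= slice_size
        if steps > 0:                        # not finished: back of the queue
            ready.append((name, steps))

schedule({"minesweeper": 3, "boot-up chores": 5})
```

Run it and the steps of the two ‘processes’ come out interleaved, even though only one of them is ever being worked on at any given moment.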

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit whose components are not made as multiple, separate, electronic parts: in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC they are all built into the same place and all assembled as one. The main advantage of this is that since all the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required of their manufacture surely makes them one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which contains one or more transistors. Computers are electronic, meaning the only thing they understand is electricity; for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, the circuit is represented as a 1, or as a 0 if it is switched off. These memory circuits are generally grouped together, and so each group will consist of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. On a hard disk, patches of magnetically charged material represent binary information rather than memory circuits.

Each little memory circuit, with its simple on/off value, represents one bit of information. 8 bits grouped together form a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data a computer needs to process is written in binary form; but this pattern of 1s and 0s might be needed to represent anything from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
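As a quick illustration of how one and the same pattern of bits can stand for different things, here’s a small Python snippet showing a single byte being read as a number, as a letter, and as a raw on/off pattern (the choice of byte is arbitrary, just an example):

```python
# One byte (8 bits) can stand for very different things depending on how
# the rest of the computer chooses to interpret it.
pattern = 0b01000001           # the bits 01000001

print(pattern)                 # interpreted as a number: 65
print(chr(pattern))            # interpreted as text: 'A'
print(format(pattern, "08b"))  # the raw on/off pattern: 01000001
```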

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. A gate takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into another value. There are three basic types: AND gates (if both inputs equal 1, the output equals 1; otherwise, the output equals 0), OR gates (if either input equals 1, the output equals 1; if both inputs equal 0, the output equals 0), and NOT gates (if the input equals 1, the output equals 0; if the input equals 0, the output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-AND) or XOR (exclusive OR: if exactly one input equals 1, the output equals 1; if both inputs are the same, the output equals 0) gates. A computer’s CPU (central processing unit) will contain millions of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
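If you fancy playing with them, the gates are easy to mimic in a few lines of Python, with 1 standing for ‘on’ and 0 for ‘off’ (a toy model, obviously, not how the transistors themselves are wired up):

```python
# Logic gates modelled with 1 ('on') and 0 ('off').
def AND(a, b):  return 1 if a == 1 and b == 1 else 0
def OR(a, b):   return 1 if a == 1 or b == 1 else 0
def NOT(a):     return 0 if a == 1 else 1

# Combining the basic gates gives the others mentioned above.
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))   # 1 only when inputs differ

# Print the full truth table for each two-input gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), " OR:", OR(a, b),
              " NAND:", NAND(a, b), " XOR:", XOR(a, b))
```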

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three colour components of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, giving it the new shades of colour for the various pixels. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and to say that areas A, B and C of the screen are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
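To give a feel for the sort of grunt-work sums the ALU gets handed in that example, here’s a toy Python version of ‘old shade plus highlighting’ for a single pixel (the numbers and the method are invented for illustration; real graphics code is rather more sophisticated):

```python
# A toy version of the ALU's job when an icon is highlighted: take the old
# red/green/blue shades of one pixel and add a fixed brightening amount,
# making sure nothing overflows the maximum value of 255.
def highlight(pixel, boost=40):
    return tuple(min(shade + boost, 255) for shade in pixel)

old_pixel = (120, 60, 200)     # an arbitrary example shade
print(highlight(old_pixel))    # -> (160, 100, 240)
```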

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…