Quality vs. Quantity

It’s an old saying, usually coming straight from the lips of the slightly insecure or overly self-confident: ‘I do quality, not quantity!’. It’s a very appealing idea, the notion that skill and ability can mark one out from pure brawn and the throwing of vast amounts of resources at a problem. But sometimes quality is not the answer. Sometimes quantity is the way forward: and to illustrate this, I’m going to tell a story.

The Industrial Revolution was the biggest, most tumultuous and most difficult period of transition that the western world has ever been put through, and design was no exception. Objects once made individually and by hand, the work of skilled craftsmen, were now produced using vast steam-powered machinery owned and directed by rich businessmen. And ‘businessmen’ is the key word here; they wanted their products to sell to the general public, and the only real way they knew how to do that back then was to make the things cheap. This generally manifested itself in making products as simple to produce as possible using the machinery of the day, and the idea of integrating design elements into these processes didn’t register on their consciousness; perhaps rightly, since the majority of the populace being sold to were rather poor and may not have been able to afford the prettier, more expensive item. What this did mean was that the products ordinary people filled their homes with were usually large, since Victorian machinery couldn’t handle high tolerances too well, decorated in a rather gaudy, ‘tacked on’ fashion rather than having beauty as part of the design, and were often of very poor quality, as manufacturers skimped on materials and good processing. We must remember that this was the golden age of deregulated industry, with companies having no responsibility either to adhere to standards or to treat their low-paid, overworked and awfully treated workers with any degree of respect.

This state of affairs was deplored by John Ruskin, an art critic and thinker of the time. He argued for a return to the old ways of doing things, with small-scale industry taking the place of big business and simple, good-quality, hand-crafted products replacing the mass-produced goods of the Victorian age. Since he actually had other things to do with his time, he didn’t expand on this plan to any great degree; but a man named William Morris did. Morris was a designer whose textile designs would make him rich and whose poetry would make him famous, but he latched on to Ruskin’s ideas whilst at university, and they turned him into an early socialist. Indeed, he spent a decent chunk of time in later life standing on street corners distributing socialist pamphlets, but that’s another story. Morris took Ruskin’s ideas and, with a group of like-minded friends, founded a new design movement based on Ruskin’s philosophy. They wanted a return to craftsmanship, to put skill back into design and for workers to ‘empower’ themselves through the making of good-quality, appropriately sized, simply designed products, putting them in charge of their own lives rather than acting as slaves to the corporation. More than that, they wanted to reinstate the role of design as a fine art, putting beauty into everyday products for everyday people, and to advance the art of design in general. This movement would later become known as the Arts and Crafts movement.

This movement was active between around 1860 and 1900, and enjoyed some success in revitalising design as an art form. Various universities and other intellectual establishments began founding schools of design, and as the Industrial Revolution wore on even the capitalists began to take notice of these new ideas, realising that the common people could be persuaded to buy their products because they were designed well rather than just because they were cheap. And what about the Arts & Crafts philosophy? Well, a few organisations were set up promoting just that idea, trying to bring together skilled craftspeople into a setup styled as a medieval guild. And they totally bombed; the production of individual, hand-crafted items took so many more man-hours than the industrial mass-production process that simple economics (then a rather under-developed field) dictated that their prices were far beyond the reach of the ordinary people Morris’ philosophy aimed to serve. Whilst the products they made were undoubtedly beautiful, and advanced the art of design considerably through their use of unusual materials and craft techniques, producing such quality products was simply not a viable way of providing for the common man.

Indeed, it wasn’t until the 1920s that a design movement finally managed to make itself felt among the common people. In Germany, the Bauhaus school was set up in Weimar (home of the titular new republic that would be replaced by the Nazis in 1933) and began educating designers in all aspects of craftsmanship and fine arts, something that William Morris would doubtless have agreed with. However, the reason the Bauhaus style proved so internationally successful, and continues to be relevant today, is its acceptance of the machine age, and its willingness to experiment with techniques from an industrial, rather than crafts, background. A good example is tubular steel; in the 1920s, extruded tubular steel (without joints in it) was a new innovation that was much stronger than previous efforts. Bauhaus designers immediately began experimenting with it, and when Marcel Breuer designed his Model B3 chair (later known as the Wassily chair) based around a tubular steel frame, it put the material, and the chair, on the map. Here was a style whose products could be produced on a large scale, making them globally famous, and the style flourished because of it. Even today, products can be bought based on Bauhaus designs or in the Bauhaus style, and whilst many of William Morris’ wallpaper prints are still available, they are all now mass-printed in factories. He must be turning in his grave.

This is just an example, but it illustrates a point: that, on a large scale, going too far into the quality side of things simply isn’t sustainable if it comes at the expense of quantity. It comes down to the economics of the problem; the consumer culture gets blamed for a lot of things, but the extent to which mass-production has made ‘luxury’ goods affordable to the common people and made all our lives more comfortable is frequently neglected. Quality at the expense of quantity is rarely the answer; best, of course, is trying to find a way to do both.

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to the workings of a computer. Today, I’m going to take one step up and study a slightly broader picture, this time concerned with the integrated circuits that utilise such components to do the real grunt work of computing.

An integrated circuit is simply a circuit whose components are not separate, individual electronic parts wired together; in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC they are all fabricated in the same place and all assembled as one. The main advantage of this is that since the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and they can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer detail required of their manufacture surely makes them one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. Memory takes the form of millions upon millions of microscopically small circuits known as memory circuits, each of which consists of one or more transistors. Computers are electronic, meaning the only thing they understand is electricity; for the sake of simplicity and reliability, this takes the form of whether the current flowing in a given memory circuit is ‘on’ or ‘off’. If the switch is on, then the circuit is represented as a 1, or a 0 if it is switched off. These memory circuits are generally grouped together, and so each group will consist of an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. On a hard disk, patches of magnetically charged material represent binary information rather than memory circuits.
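To make the idea of binary arithmetic concrete, here’s a minimal sketch in Python of how a pattern of on/off circuits can be read as a number: each position is worth twice the one to its right, just as each decimal digit is worth ten times the one to its right. The list of bits here is just an illustrative example, not anything from a real machine.

```python
# Four memory circuits, each either on (1) or off (0),
# read left to right as a binary number.
bits = [1, 0, 1, 1]

value = 0
for bit in bits:
    # Shift everything up one place (doubling) and add the new bit
    value = value * 2 + bit

# 1*8 + 0*4 + 1*2 + 1*1 = 11
print(value)
```

Running this prints `11`, showing how four simple on/off switches can together represent any number from 0 to 15.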

Each little memory circuit, with its simple on/off value, represents one bit of information. 8 bits grouped together forms a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data that a computer needs to process is written in binary form- but this pattern of 1s and 0s might be needed to represent any information from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
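The point that a single pattern of bits can mean anything depending on interpretation can be sketched in a couple of lines of Python. The byte below is just an example; the character mapping assumes the standard ASCII encoding.

```python
# One byte = 8 bits. The same pattern means different things
# depending on how the program chooses to interpret it.
byte = "01000001"           # eight on/off values

number = int(byte, 2)       # interpreted as a plain number
character = chr(number)     # interpreted as an ASCII character code

print(number, character)    # 65 A
```

The same eight switches could equally be read as part of a colour value or a sound sample; the meaning lives in the program, not in the bits themselves.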

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. This takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into another value. There are three basic types: AND gates (if both inputs equal 1, output equals 1; otherwise, output equals 0), OR gates (if either input equals 1, output equals 1; if both inputs equal 0, output equals 0), and NOT gates (if input equals 1, output equals 0; if input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-and) or XOR (exclusive OR: output equals 1 if exactly one input equals 1, and 0 if both inputs are the same) gates. A computer’s CPU (central processing unit) will contain vast numbers of these, connected up in such a way as to link various parts of the computer together appropriately, translate the instructions of the memory into what function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to translate into the correct process for the computer to perform.
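The gates described above can be modelled as tiny Python functions on 0/1 values, which also shows how the compound gates are literally built out of the basic three. This is a sketch of the logic only, of course; in a real chip each of these is a handful of transistors, not a function call.

```python
# The three basic gates, on inputs that are each 0 or 1
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# Compound gates, built by wiring the basic ones together
def NAND(a, b):
    return NOT(AND(a, b))

def XOR(a, b):
    # 1 only when exactly one input is 1
    return AND(OR(a, b), NAND(a, b))

# Print the full truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Note how XOR falls out of combining OR (‘at least one is on’) with NAND (‘not both are on’); all of a CPU’s arithmetic can be assembled from such combinations.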

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three different parts of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour for the various pictures. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and that areas of the screen A, B and C are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
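The AND-gate decision at the start of that chain can be sketched in a few lines of Python. The icon position and size here are made-up illustrative values, not anything a real desktop uses, but the shape of the logic is the same: both conditions must be ‘on’ before anything happens.

```python
# A toy version of the click-on-an-icon decision.
# The icon occupies a hypothetical 32x32 region at (100, 100).
def over_icon(mouse_x, mouse_y):
    return 100 <= mouse_x < 132 and 100 <= mouse_y < 132

def should_highlight(mouse_x, mouse_y, button_pressed):
    # Both inputs must be 'on', just like an AND gate
    return over_icon(mouse_x, mouse_y) and button_pressed

print(should_highlight(110, 110, True))   # click on the icon
print(should_highlight(300, 10, True))    # click somewhere else
```

The first call evaluates to `True` and the second to `False`: same click input, but the AND with position gives a different result, which is the whole trick.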

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…