The Value of Transparency

Once you start looking for it, it can be quite staggering to realise just how much of our modern world is, quite literally, built on glass. The stuff is manufactured in vast quantities, forming our windows, lights, screens, skyscrapers and countless other things. Some argue that it is even responsible for the entire development of the world, particularly in the west, as we know it; it’s almost a wonder we take it so for granted.

Technically, our commonplace use of the word ‘glass’ rather oversimplifies the term; glasses are in fact a family of materials that all exhibit the same amorphous structure and behaviour under heating whilst not all being made from the same stuff. The member of this family that we are most familiar with, and will commonly refer to as simply ‘glass’, is soda-lime glass, made predominantly from silicon dioxide (silica) with a few other additives to make it easier to produce. But I’m getting ahead of myself; let me tell the story from the beginning.

Like all the best human inventions, glass was probably discovered by accident. Archaeological evidence suggests glassworking was probably an Egyptian invention in around the third millennium BC, Egypt (or somewhere nearby) being just about the only place on earth at the time where the three key ingredients needed for glass production occurred naturally and in the same place: silicon dioxide (aka sand), sodium carbonate (aka soda, frequently found as a mineral or derived from plant ashes) and a relatively civilised group of people capable of building a massive great fire. When Egyptian metalworkers accidentally got sand and soda into their furnaces, they discovered upon removing the mixture that the two had fused to form a hard, semi-transparent, almost alien substance; the first time glass had been produced anywhere on earth.

This type of glass was far from perfect; for one thing, adding soda has the unfortunate side-effect of making silica glass water-soluble, and for another they couldn’t yet work out how to make the glass clear. Then there were the problems that came with trying to actually make anything from the stuff. The only glass-forming technique at the time was core forming, a moderately effective but rather labour-intensive process. Whilst good for small, decorative pieces, it became exponentially more difficult to produce an item by this method the larger it needed to be, not to mention the fact that it couldn’t produce flat sheets of glass for use as windows or the like.

Still, onwards and upwards and all that, and developments were soon being made in the field of glass technology. Experimentation with various additives soon yielded the discovery that adding lime (calcium oxide), along with a little aluminium oxide and magnesium oxide, made soda glass insoluble, and thus modern soda-lime glass was discovered. In the first century BC, an even more significant development came along with the discovery of glass blowing as a production method. Glass blowing was infinitely more flexible than core forming, opening up an entirely new avenue for glass as a material; crucially, it allowed glass products to be produced faster, and thus more cheaply, than pottery equivalents. By this time, the Eastern Mediterranean coast where these discoveries took place was part of the Roman Empire, and the Romans took to glass like a dieter to chocolate; glass containers and drinking vessels spread across the Empire from the glassworks of Alexandria, and that was before they discovered that adding manganese dioxide could produce clear glass, which was suddenly suitable for architectural work.

Exactly why glass took off on quite such a massive scale in Europe yet remained little more than a crude afterthought in the east, particularly in China (the other great superpower of the age), is somewhat unclear. Pottery remained the material of choice throughout the far east, and they got very skilled at making it too; there’s a reason we in the west today call exceptionally fine, high-quality pottery ‘china’. I’ve only heard one explanation for why this should be so, and it centres around alcohol.

Both the Chinese and Roman empires loved wine, but did so in different ways. To the Chinese, alcohol was a deeply spiritual thing, and played an important role in their religious procedures. This attitude was not unheard of in the west (the Egyptians, for example, believed the god Osiris invented beer, and both Greeks and Romans worshipped a god of wine), but the Roman Empire thought of wine in a secular as well as religious sense; in an age where water was often unsafe to drink, wine became the drink of choice for high society in all situations. One of the key features of wine to the Romans was its appearance, which is why the introduction of clear vessels, allowing them to admire the wine’s colour, was so attractive to them. By contrast, the Chinese day-to-day drink of choice was tea, whose appearance was of far less importance than the ability of its container to dissipate heat (something fine china is very good at). The introduction of clear drinking vessels would, therefore, have met with only a limited market in the east, and hence it never really took off. I’m not entirely sure that this argument holds up under scrutiny, but it’s quite a nice idea.

Whatever the reason, the result was unequivocal; only in Europe was glassmaking technology used and advanced over the years. Stained glass was one major discovery, and crown glass (a method for producing large, flat sheets) another. However, the crucial developments would be made in the early 14th century, not long after the Republic of Venice (already a centre for glassmaking) ordered all its glassmakers to move out to the island of Murano to reduce the risk of fire (which does seem ever so slightly strange for a city founded, quite literally, on water). On Murano, the local quartz pebbles offered glassmakers silica of hitherto unprecedented purity which, combined with exclusive access to a source of soda ash, allowed for the production of exceptionally high-quality glassware. The Murano glassmakers became masters of the art, producing glass products of astounding quality, and from here onwards the technological revolution of glass could begin. The Venetians worked out how to make lenses, in turn allowing for the invention of spectacles (extending the working lifespan of scribes and monks across the western world) and the telescope (the instrument on which Galileo’s astronomical work was founded). The widespread introduction of windows (as opposed to fabric-covered holes in the wall) to many houses, particularly in the big cities, dramatically improved the health of their occupants by both keeping the house warmer and helping keep out disease. Perhaps most crucially, the production of high-quality glass vessels was not only to revolutionise biology, and in turn medicine, as a discipline, but to almost single-handedly create the modern science of chemistry, itself the foundation stone upon which most of modern physics is based.
These discoveries would all, given enough time and quite a lot of social upheaval, pave the way for the massive technological advancements that would characterise the western world in the centuries to come, and which would finally allow the west to take over from the Chinese and Arabs as the world’s leading technological superpower.* Nowadays, of course, glass has been taken even further, being widely used as a building material (its strength-to-weight ratio far exceeds that of concrete, particularly when it is made to ‘building grade’ standard), in televisions, and in fibre optic cables (which may yet revolutionise our communications infrastructure).

Glass is, of course, not the only thing to have catalysed the technological breakthroughs that were to come; similar arguments have been made regarding gunpowder and the great social and political changes that were to grip Europe between roughly 1500 and 1750. History is never something one can pin on a single cause (the Big Bang excepted), but glass was undoubtedly significant in the western world’s rise to prominence during the second half of the last millennium, and the Venetians probably deserve a lot more credit than they get for creating our modern world.

*It is probably worth mentioning that China is nowadays the world’s largest producer of glass.


Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much faith people are able to place in a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, this is sometimes impractical); we must compare them to what we think of the brand and the label, what reputation this product has and what it is particularly good at, which we think best suits our social standing and how others will judge our use of it. And a good thing too, from many companies’ perspective; otherwise the amount of business they do would be slashed. There are many companies whose success can be almost entirely put down to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, to typecast Apple as a brand-built company is harsh; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to be a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that; I am not looking forward to it). However, it is almost universally agreed (among non-Apple devotees anyway) that once the rest of the market gets hold of an idea, Apple’s version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely due to iTunes redefining the music industry before everyone else and remaining competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products with a few things they do really, really well that are basically capable of doing everything else. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them, a nice company with a good history but one that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the most valuable non-state-owned company in the world. Bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company; worth more than every single one (excluding state-owned companies such as Saudi Aramco which, dealing in Saudi oil exports, is estimated to be worth around 3 trillion dollars). How has a technology company come to be worth $400 billion? How?

One undoubted feature is Apple’s uncanny knack of getting there first: the Apple II was the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first modern smartphone (after just four years, somewhere in the region of 30% of the world’s phones are now smartphones) and the iPad the first tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these rushed products are usually rubbish and make the Apple version shine by comparison. This also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else’s first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off the idea and have moved on to something else. Innovation matters in this business.

But the real reason for Apple’s success can be put down to the aura the company have built around themselves and their products. From their earliest infancy, Apple fans have dubbed themselves the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It’s a compelling world view, and one that thousands of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple’s phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and longevity of its delivery. This image has been present in Apple’s advertising from the very beginning, and is now so ingrained that it doesn’t have to be articulated any more; it’s just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that’s all the conclusion you’re going to get for today.

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who is interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.
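(As a quick aside, since the recap leans on the idea of the ALU as a dumb counting machine built from logic gates, here’s a minimal sketch in Python, purely my choice of illustration and not anything the machine itself runs: a ‘half-adder’, which adds two single bits using nothing but an XOR gate and an AND gate.)

```python
# A half-adder: the smallest building block of the ALU's counting.
# XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

print(half_adder(0, 1))  # (1, 0): 0 + 1 = 1, no carry
print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chain enough of these together and you can add numbers of any size, which is about all the ‘grunt work’ ever amounts to.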

Programming languages are a way of translating between the medium of computer information and instruction (binary data) and our medium of the same: words and language. Obviously, computers do not understand that the buttons we press have symbols on them, that these symbols mean something to us, or that the machine is built to reproduce those symbols on the monitor when we press them; but we humans do, and that makes computers actually usable for 99.99% of the world’s population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the machine. The key thing is what happens when their text is committed to memory, for here the program concerned kicks in.
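To make the ‘pattern of off and on’ idea concrete, here’s a quick snippet (in Python, my own illustrative choice) showing the binary pattern that sits behind each character a programmer types:

```python
# Each character is stored as a pattern of on/off bits.
# ord() gives the character's number; "08b" formats it as 8 binary digits.
for ch in "Hi":
    print(ch, format(ord(ch), "08b"))
# H 01001000
# i 01101001
```

Those ones and zeroes are the only thing the machine ever actually sees.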

The key feature that defines a programming language is not the language itself, but the translator that converts its words into machine instructions. Built into each is, in effect, a dictionary: every ‘word’ of the language has a corresponding, but entirely different, string of binary data associated with it, representing the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This translation works in one of two ways: an ‘interpreter’ stores the program just as words and converts them to ‘machine code’ piece by piece as the program runs, but the more common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This makes programs run faster, allows you to set the source file aside afterwards, and gives programmers an excuse to bum around all the time while the thing compiles.
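For illustration, here’s a toy interpreter in Python (every instruction name here is invented; real machine code is vastly richer): it reads each ‘word’ of a made-up language, looks up the action that word maps to, and executes it immediately, which is essentially what an interpreter does.

```python
# A toy interpreter for a made-up two-word language.
# Each instruction word maps to an action on a single "register".
def run(source):
    acc = 0  # our one and only register
    actions = {
        "ADD": lambda x: x,   # add the operand to the register
        "SUB": lambda x: -x,  # subtract the operand
    }
    for line in source.splitlines():
        word, operand = line.split()
        acc += actions[word](int(operand))
    return acc

print(run("ADD 5\nADD 3\nSUB 2"))  # 6
```

A compiler, by contrast, would do all that dictionary lookup once, up front, and save the resulting machine actions so the words never need reading again.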

That is, basically, how computer programs work; but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that manages the computer’s processes and applications on the CPU’s behalf. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets everything up, determining exactly which gate the inputs go through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU’s performance and translates programs from memory to screen. The precise method by which this latter function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise, or are better at dealing with, different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital things a modern OS does on our home computers are the things that, at first glance, seem secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not, theoretically, be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quickly accessible) to the hard disk (where it is far slower to reach) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly), and must cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
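The scheduler’s trick can be sketched as a toy round-robin scheduler in Python (the process names and step counts here are invented for illustration): each ‘process’ runs for one time slice, then goes to the back of the queue, so the work interleaves just as described above.

```python
from collections import deque

def process(name, steps):
    # A pretend process: yields control back after each unit of work.
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(processes):
    queue = deque(processes)
    log = []
    while queue:
        proc = queue.popleft()
        try:
            log.append(next(proc))  # run one time slice
            queue.append(proc)      # unfinished: back of the queue
        except StopIteration:
            pass                    # finished: drop it
    return log

print(round_robin([process("A", 2), process("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

On a real machine this switch happens thousands of times a second, which is why A and B appear to run at the same time.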

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.