Pineapples (TM)

If the last few decades of consumerism have taught us anything, it is just how much store people are able to set by a brand. In everything from motorbikes to washing powder, we do not simply test and judge the effectiveness of competing products objectively (although, especially when considering expensive items such as cars, doing so is sometimes impractical); we compare them according to what we think of the brand and the label, what reputation this product has and what it is particularly good at, which one we think most suits our social standing and how others will judge our use of it. And a good thing too, from many companies' perspective; otherwise the amount of business they do would be slashed. There are many companies whose success can be almost entirely put down to the effect of their branding and the impact their marketing has had on the psyche of western culture, but perhaps the most spectacular example concerns Apple.

In some ways, typecasting Apple as a brand-built company is a harsh judgement; their products are doubtless good ones, and they have shown a staggering gift for bringing existing ideas together into forms that, if not quite new, are always the first to establish a practical, genuine market presence. It is also true that Apple products are often better than their competitors in very specific fields; in computing, for example, OS X is better at dealing with media than other operating systems, whilst Windows has traditionally been far stronger when it comes to word processing, gaming and absolutely everything else (although Windows 8 looks very likely to change all of that- I am not looking forward to it). However, it is almost universally agreed (among non-Apple whores anyway) that once the rest of the market has caught up, Apple's version of a product is almost never the definitive best from a purely analytical perspective (the iPod is a possible exception, solely because iTunes redefined the music industry before everyone else and remains competitive to this day), and that every Apple product is ridiculously overpriced for what it is. Seriously, who genuinely thinks that top-end Macs are a good investment?

Still, Apple make high-end, high-quality products that do a few things really, really well and are basically capable of doing everything else. They should have a small market share, perhaps among the creative or the indie, and a somewhat larger one in the MP3 player sector. They should be a status symbol for those who can afford them, a nice company with a good history but one that nowadays has to face up to a lot of competitors. As it is, the Apple way of doing business has proven successful enough to make them the biggest non-state-owned company in the world. Bigger than every other technology company, bigger than every hedge fund or finance company, bigger than any oil company, worth more than every single one (excluding state-owned companies such as Saudi Aramco, which is estimated to be worth around 3 trillion dollars thanks to its control of Saudi oil exports). How has a technology company come to be worth $400 billion? How?

One undoubted factor is Apple's uncanny knack of getting there first- the Apple II was the first real personal computer and provided the genes for Windows-powered PCs to take over the world, whilst the iPod was the first MP3 player that was genuinely enjoyable to use, the iPhone the first smartphone as we would now recognise one (after just four years, somewhere in the region of 30% of the world's phones are now smartphones) and the iPad the first tablet computer. Being in the technology business has made this kind of innovation especially rewarding for them; every company is constantly terrified of being left behind, so whenever a new innovation comes along they will knock something together as soon as possible just to jump on the bandwagon. However, technology is a difficult business to get right, meaning that these rushed products are usually rubbish and make the Apple version shine by comparison. This also means that if Apple comes up with the idea first, they have had a couple of years of working time to make sure they get it right, whilst everyone else's first efforts have had only a few scant months; it takes a while for any serious competitors to develop, by which time Apple have already made a few hundred million off it and have moved on to something else. Innovation matters in this business.

But the real reason for Apple's success can be put down to the aura the company have built around themselves and their products. From the company's earliest infancy Apple fans have dubbed themselves the independent, the free thinkers, the creative, those who love to be different and stand out from the crowd of grey, calculating Windows-users (which sounds disturbingly like a conspiracy theory or a dystopian vision of the future when it is articulated like that). Whilst Windows has its problems, Apple has decided on what is important and has made something perfect in this regard (their view, not mine), and being willing to pay for it is just part of the induction into the wonderful world of being an Apple customer (still their view). It's a compelling world view, and one that millions of people have subscribed to, simply because it is so comforting; it sells us the idea that we are special, individual, and not just one of the millions of customers responsible for Apple's phenomenal size and success as a company. But the secret to the success of this vision is not just the view itself; it is the method and the longevity of its delivery. This is an image that has been present in Apple's advertising from the very beginning, and is now so ingrained that it doesn't have to be articulated any more; it's just present in the subtle hints, the colour scheme, the way the Apple store is structured and the very existence of Apple-dedicated shops generally. Apple have delivered the masterclass in successful branding; and that's all the conclusion you're going to get for today.


The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on the fact that I had done a lot of related research while looking into installing Linux. I feel that I should now come clean and point out that between that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times already, which has served to teach me even more about exactly how Linux works, and how it differs from its more mainstream competitors. So, since I don't have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and upon which Torvalds based his design (note- whilst Torvalds' first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp 'ih' sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use and distribute the software associated with it so long as they stayed in keeping with the license's terms (broadly, that any derivative work must remain just as free and open). The software written as part of the GNU project was extensive (including GCC, a still widely-used compiler) and did eventually come to fruition as an operating system, but that OS never caught on and the project was, in regard to achieving its final aims, a failure (although the GNU General Public License remains the most-used software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, using another Unix clone, MINIX, to write his code in and basing his design on MINIX's structure. Initially, he hadn't been intending to write a complete operating system at all, but rather a type of display interface called a terminal emulator- a system that tries to emulate a graphical terminal, like a monitor, through a more text-based medium (I don't really get it either- it's hard to find information a newbie like me can make good sense of). Strictly speaking, his terminal emulator was a program, but one existing independently of any operating system and acting almost like one in its own right, running directly on the computer's hardware. As such, the two are somewhat related and it wasn't long before Torvalds 'realised' he had written a kernel for an operating system and, since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on a kind of early internet forum called Usenet, saying that he was developing an operating system that was "starting to get ready", and asking for feedback concerning where MINIX was good and where it was lacking, "as my OS resembles it somewhat". He also, interestingly, said that his OS "probably never will support anything other than AT-harddisks". How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 was released last year) and, despite some teething troubles with those who have considered it old-fashioned, those who thought MINIX code was stolen (rather than merely borrowed from), and Microsoft (who have since turned tail and are now one of the largest contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman's open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the system's efficiency and minimise its vulnerability to viruses- the latter being one of the key features that marks out Linux (and OS X) over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command a fairly high percentage of your CPU capacity just to keep itself running, not counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep things running faster. The kernel's open source roots mean it is easy to modify if you have the technical know-how, and the community of followers surrounding it means that the solution to any problem you have with a standard distribution is usually only a few button clicks away. Disadvantages include a certain lack of user-friendliness for the uninitiated or less computer-literate user, since a lot of programs require an instruction typed into the command line, far fewer programs, especially commercial, professional ones, than Windows, an inability to process media as well as OS X (which is the main reason Apple computers appear to exist), and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many 'computer people' consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or 'distros' as they're known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager and desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components) or the package manager (which handles program installation, removal and updating), tend to be fairly standardised (GNOME and KDE are common GUIs, and Synaptic the most typical package manager), but different people like their operating system to run in slightly different ways. Therefore, variations on these components are bundled together with the kernel to form a distro, a complete package that will run as an operating system in exactly the same fashion as you would encounter with Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu's older brother), Red Hat, Mandriva and CrunchBang- some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.

If you're not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for the inner geek in all of us, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who's interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I've said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.
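To make that 'pattern of off and on representing information' idea a little more concrete, here is a minimal sketch, in Python and purely for illustration, of how one eight-switch pattern can be read either as a plain number or, via an agreed character-encoding table (ASCII, in this case), as a letter:

```python
# Purely illustrative: the same pattern of eight 'switches' (bits) can be read
# as a number or, via a character encoding table, as a letter.
pattern = "01000001"         # eight switches: off, on, off, off, off, off, off, on

as_number = int(pattern, 2)  # read the ons and offs as a binary number
as_letter = chr(as_number)   # read that same number as a character code

print(as_number)             # 65
print(as_letter)             # 'A' - the pattern only 'means' A because we all agree it does
```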

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the keys we press have symbols on them, that these symbols mean something to us and that the machine is built to produce the same symbols on the monitor when we press them, but we humans do, and that makes computers actually usable for 99.99% of the world's population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the computer. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.

The key feature that defines a programming language is not the language itself, but the interface that converts its words into instructions. Built into the workings of each is a list of recognised 'words', each with an entirely different-looking string of binary data associated with it that represents the appropriate set of 'ons and offs' needed to get a computer to perform the corresponding task. This conversion works in one of two ways: an 'interpreter' is an inbuilt system whereby the program is stored just as words and is converted to 'machine code' by the interpreter every time it is run, but the most common approach is to use a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to 'compile' your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
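As a rough illustration of the difference, here is a sketch using Python, simply because it conveniently exposes both steps; it is only an analogy, since Python translates into its own bytecode rather than real machine code:

```python
# A rough sketch of the two approaches, using Python's own machinery.
source = "print(2 + 2)"     # the program as the human wrote it: just words

# Interpreter-style: keep the words as words and translate them each time
# they are run.
exec(source)

# Compiler-style: translate the words once, up front, into a lower-level form
# (here Python bytecode, standing in for machine code), then run that form.
compiled = compile(source, "<example>", "exec")
exec(compiled)              # the original text is no longer needed at this point
```

The second approach pays the translation cost once, up front, which is the main reason compiled programs generally run faster than interpreted ones.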

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets things up, determining exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged 'bits' on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU's performance and translates programs from memory to screen. The precise method by which this latter function happens differs from OS to OS, which is why a program written for Windows won't work on a Mac, and why Android (Linux-powered) smartphones couldn't run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise, and are better at dealing with, different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital things that modern OSs do on our home computers are the ones that, at first glance, seem secondary- moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should, in theory, not be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words. However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember, computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of everything happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run in, but may also shift some rarely-accessed memory from RAM (where it is quick to access) to hard disk (where it isn't) to free up more space (this is how computers with very little free memory manage to run programs at all, and the time taken to do this for large amounts of data is why they run so slowly), and must cope when a program needs to access data from a part of the computer that has not been specifically allocated to it.
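To show the scheduling idea in miniature, here is a toy sketch in Python- nothing like how a real kernel's scheduler is actually implemented, and with made-up task names- of round-robin switching, where one 'CPU' only ever runs a single task at a time but cycles between tasks so quickly that they all appear to run at once:

```python
# A toy illustration (not how a real kernel scheduler works) of round-robin
# scheduling: the 'CPU' only ever runs one task at a time, but by switching
# between tasks very quickly it gives the illusion that they all run at once.

def task(name, steps):
    """A pretend process that needs several chunks of CPU time to finish."""
    for i in range(1, steps + 1):
        # yield = 'hand the CPU back to the scheduler after this chunk of work'
        yield f"{name}: finished step {i} of {steps}"

def round_robin(tasks):
    """Keep cycling through the queue, giving each task one short turn at a time."""
    while tasks:
        current = tasks.pop(0)       # take the next task in the queue
        try:
            print(next(current))     # let it run for one time slice
            tasks.append(current)    # not finished yet: back of the queue
        except StopIteration:
            pass                     # finished: drop it from the queue

round_robin([task("boot-up", 3), task("minesweeper", 2), task("music", 4)])
```

Each task gets a short turn, goes to the back of the queue, and the cycle repeats until everything has finished- which, sped up a few billion times, is roughly the illusion your computer pulls off every second.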

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Getting bored with history lessons

Last post's investigation into the post-Babbage history of computers took us up to around the end of the Second World War, before the computer age could really be said to have kicked off. However, with the coming of Alan Turing the biggest stumbling block to the intellectual development of computing as a science had been overcome, since the field now clearly understood what it was and where it was going. From then on, therefore, the history of computing is basically one long series of hardware improvements and business successes, and the only thing of real scholarly interest is Moore's law. This law is an unofficial, yet surprisingly accurate, model of the exponential growth in the capabilities of computer hardware, stating that every 18 months computing hardware gets either twice as powerful, half the size, or half the price for otherwise identical specifications. The law is based on a 1965 paper by Gordon E Moore, who noted that the number of transistors on integrated circuits had been doubling roughly every year since their invention seven years earlier (a rate he later revised to every two years). The modern-day figure of an 18-monthly doubling in performance comes from an Intel executive's estimate based on both the increasing number of transistors and their getting faster & more efficient… but I'm getting sidetracked. The point I meant to make was that there is no point in my continuing with a potted history of the last 70 years of computing, so in this post I wish to get on with the business of exactly how (roughly, fundamentally speaking) computers work.
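To get a feel for what an 18-month doubling actually means, here is a quick back-of-the-envelope calculation (in Python, and assuming the doubling held perfectly over the whole period, which it only roughly did):

```python
# Rough arithmetic for Moore's law: a doubling in capability every 18 months.
# Assumes the doubling holds perfectly, which is only approximately true.
years = 45                    # very roughly 1965 to the present day
doublings = years * 12 / 18   # one doubling per 18 months
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly a {growth:,.0f}-fold improvement")
```

Thirty doublings works out at roughly a billion-fold improvement, which is why the phone in your pocket embarrasses the room-sized machines of the 1960s.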

A modern computer is, basically, a huge bundle of switches- literally billions of the things. Normal switches are obviously not up to the job, being both too large and requiring an electromechanical rather than purely electrical interface to function, so computer designers have had to come up with electrically-activated switches instead. In Colossus’ day they used vacuum tubes, but these were large and prone to breaking so, in the late 1940s, the transistor was invented. This is a marvellous semiconductor-based device, but to explain how it works I’m going to have to go on a bit of a tangent.

Semiconductors are materials that do not conduct electricity freely and every which way like a metal, but do not insulate like a wood or plastic either- sometimes they conduct, sometimes they don't. In modern computing and electronics, silicon is the substance most readily used for this purpose. For use in a transistor, silicon (an element with four electrons in its outer atomic 'shell') must be 'doped' with other elements, meaning that they are mixed into the chemical, crystalline structure of the silicon. Doping with a substance such as boron, with three electrons in its outer shell, creates an area with a 'missing' electron, known as a hole. Holes have, effectively, a positive charge compared to a 'normal' area of silicon (since electrons are negatively charged), so this kind of doping produces what is known as p-type silicon. Similarly, doping with something like phosphorus, with five outer-shell electrons, produces an excess of negatively-charged electrons and n-type silicon. Thus electrons, and therefore electricity (made up entirely of the net movement of electrons from one area to another), find it easy to flow from n- to p-type silicon, but not very well the other way- the junction conducts in one direction and insulates in the other, hence a semiconductor. However, it is vital to remember that p-type silicon is not an insulator and does allow for the free passage of electrons, unlike pure, undoped silicon. A transistor generally consists of three layers of silicon sandwiched together, in order NPN or PNP depending on the practicality of the situation, with each layer of the sandwich having a metal contact or 'leg' attached to it- the leg in the middle is called the base, and the ones at either side are called the emitter and collector.

Now, when the three layers of silicon are stuck next to one another, some of the free electrons in the n-type layer(s) jump to fill the holes in the adjacent p-type, creating areas of neutral, or zero, charge. These are called 'depletion zones' and are good insulators, meaning that there is a high electrical resistance across the transistor and that a current cannot flow between the emitter and collector, despite there usually being a voltage 'drop' between them that is trying to get a current flowing. However, when a voltage is applied across the base and emitter, a current can flow between these two different types of silicon without a problem, and as such it does. This pulls electrons across the border between layers and shrinks the depletion zones, decreasing the electrical resistance across the transistor and allowing a current to flow between the collector and emitter. In short, one current can be used to 'turn on' another.
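Because a transistor is, in effect, a switch controlled by another signal, a handful of them wired together gives you logic gates, and logic gates give you everything else. Here is a toy software model- purely illustrative, and nothing like the real physics- of a transistor as a base-controlled switch, used to build a NAND gate:

```python
# A toy model (purely illustrative, not the real physics) of a transistor as a
# switch: current passes from collector to emitter only when the base is 'on'.
# Wiring a couple of these up gives you logic gates, which is ultimately how
# piles of switches become a computer.

def transistor(base, collector):
    """Return the emitter output: pass the collector signal only if the base is on."""
    return collector if base else 0

def nand_gate(a, b):
    # Two transistors in series between the supply and ground: the output is
    # pulled low only when both inputs switch their transistors on.
    pulled_low = transistor(a, transistor(b, 1))
    return 0 if pulled_low else 1

for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a}, {b}) = {nand_gate(a, b)}")
```

NAND is a handy example because, given enough of them, every other logic gate (and eventually a whole processor) can be built from NAND gates alone.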

Transistor radios use this principle to amplify the signal they receive into a loud, clear sound, and if you crack one open you should be able to see some transistors (well, if you know what you're looking for). However, computer and manufacturing technology has got so advanced over the last 50 years that it is now possible to fit hundreds of millions, even billions, of these transistor switches onto a silicon chip the size of your thumbnail- and bear in mind that the entire Colossus machine, the machine that cracked the Lorenz cipher, contained only a couple of thousand vacuum tube switches all told. Modern technology is a wonderful thing, and the sheer achievement behind it is worth bearing in mind next time you get shocked at the price of a new computer (unless you're buying an Apple- that's just business elitism).

…and dammit, I’ve filled up a whole post again without getting onto what I really wanted to talk about. Ah well, there’s always next time…

(In which I promise to actually get on with talking about computers)

So… why did I publish those posts?

So, here it (finally) comes- the conclusion of my current theme of sport and fitness. Today I will, once again, return to the world of the gym, but the idea is actually almost as applicable to sport and fitness exercises generally.

Every year, towards the end of December, after the Christmas rush has subsided a little and the chocolates are running low, the western world embarks on the year's final bizarre annual ritual- New Year's Resolutions. These vary depending on geography (in Mexico, for example, people list not their new goals for the year ahead, but rather the things they hope will happen, generating a similar spirit of soon-to-be-crushed optimism), but there are a few clichéd responses. Cut down on food x or y, get to know so-and-so better, finally sort out whatever you promise to deal with every year, perhaps even write a novel (for the more cocky and adventurous). However, perhaps the biggest clichéd New Year's Resolution is the vague "to exercise more", or its often-accompanying counterpart "to start going to the gym".

Clearly, the world would be a very different place if we all stuck to our resolutions- there'd be a lot more mediocre books out there for starters. But perhaps the gym is the most amusing, and obvious, example of our collective failure to stick to our own commitments. Every January, without fail, every gym in the land will be offering discounted taster sessions and membership deals, eager to entice the fresh crop of budding gymgoers. All are quickly swamped with a fresh wave of enthusiasm and flab ready to burn, but by February many will lie practically empty, perhaps 90% of those new recruits having decided to bow out gracefully against the prospect of a lifetime's slavery to the dumbbell.

So, back to my favourite question- why? What is it about the gym that can so quickly put people off- in essence, why don’t more people use the gym?

One important point to consider is practicality- using the gym requires a quite significant commitment, and while 2-3 hours (ish) a week of actual exercise might not sound like much, once you factor in travelling time, getting changed, sorting out kit and trying to fit it all around a schedule, such a commitment can quickly begin to take over one's life. The gym atmosphere can also be very off-putting, as I know from personal experience. I am not a superlatively good rugby player, but I have my club membership and am entitled to use their gym for free. The reason I don't is that trying to concentrate on my own (rather modest) personal aims and achievements can become both difficult and embarrassing when faced with first-teamers who use the gym religiously to bench press 150-odd kilos. All of them are resolutely nice guys, but it's still an issue of personal embarrassment. It's even worse if you have the dreaded 'one-upmanship' gym atmosphere, with everyone's condescending smirks keeping the newbies firmly away. Then, of course, there's the long-term commitment to gym work. Some (admittedly naively) will first attend a gym expecting to see recognisable improvement immediately- but improvement takes a long time to notice, especially for the uninitiated and the young, who are likely not to have quite the same level of commitment and technique as the more experienced. The length of time it takes to see any improvement can be frustrating for many who feel like they're wasting their time, and that can be as good an incentive as any to quit, disillusioned by the experience.

However, by far the biggest (and ultimately overriding) cause comes down simply to laziness- in fact most of the reasons or excuses given by someone dropping their gym routine (including perhaps that last one mentioned) can be traced back to a root cause of simply not wanting to put in the effort. It's kinda easy to see why- gym work is (and should be) incredibly hard work, and busting a gut to lift a mediocre weight is perhaps not the most satisfying feeling for many, especially if they're already in a poor mood and/or training alone (that's a training tip- always train with a friend and encourage one another, but stick to rigid time constraints to ensure you don't spend all the time nattering). But this comes despite the fact that everyone (rationally) knows that going to the gym is good for you, and that if we weren't lazy then we could probably achieve more and do more with ourselves. So this, in and of itself, raises another question- why are humans lazy?

Actually, this question is a little misleading, simply because of the 'humans' part- almost anyone who has a pet knows of their frequent struggles for the 'most time spent lazing around in bed doing nothing all day' award (for which I will nominate my own terrier). A similar competition is also often seen, to the disappointment of many a small child, in zoos across the land. It's a trend seen throughout nature: give an animal what it needs in a convenient space, and it will quite happily enjoy such a bounty without any desire to get up & do more than is necessary to get it- which is why zookeepers often have problems with keeping their charges fit. This is, again, odd, since it seems like an evolutionary disadvantage not to want to do stuff.

However, the fact that we are naturally lazy does not mean that people (and animals) don't want to do stuff. In fact, laziness actually acts as a vital incentive in the progression of the human race. For an answer, ask yourself- why did we invent the wheel? Answer: because it was a lot easier than having to carry stuff around everywhere, and meant everything took less work, allowing the inventor (and subsequently the human race) to become more and more lazy. The same pattern is replicated in just about every single thing the human race has ever invented (especially anything made by Apple)- laziness acts as a catalyst for innovation and discovery.

Basically, if more people went to the gym, then Thomas Edison wouldn’t have invented the lightbulb. Maybe.