The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on a lot of related research I had done while studying how Linux gets installed. I feel I should now come clean and point out that, between that first post being written and now, I have tried and failed to install Ubuntu on an old laptop six times, which has taught me even more about exactly how it works, and how it differs from its more mainstream competitors. So, since I don't have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and on which Torvalds based his design (note: whilst Torvalds' first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp 'ih' sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free-to-use, open source movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use and distribute the project's software so long as they kept to the license's terms (broadly, that anyone redistributing the software, or anything built from it, must pass on the same freedoms they received). The GNU project produced a great deal of software (including GCC, a compiler still in widespread use today) and did eventually yield an operating system, but it never caught on and the project was, as far as its final aim goes, a failure (although the GNU General Public License remains the most widely used free software license of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, writing his code under another Unix-like system, MINIX, and basing his design on MINIX's structure. Initially he hadn't intended to write a complete operating system at all, but rather a terminal emulator: a program that mimics one of the old-fashioned hardware terminals (a screen and keyboard wired to a bigger machine) so that you can type text commands at a computer. His emulator, unusually, ran directly on the computer's hardware without an operating system underneath it, acting almost like one in its own right. As such, the two are closely related, and it wasn't long before Torvalds realised he had effectively written the kernel of an operating system and, since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on Usenet, a kind of early internet forum, saying that he was developing an operating system that was "starting to get ready", and asking for feedback concerning where MINIX was good and where it was lacking, "as my OS resembles it somewhat". He also, interestingly, said that his OS "probably never will support anything other than AT-harddisks". How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license; however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 was released last year) and, despite some teething troubles with those who considered its design old-fashioned, those who thought MINIX code had been stolen (rather than merely borrowed from), and Microsoft (who have since turned tail and are now one of the largest contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman's open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves; instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise the system's efficiency and minimise its vulnerability to viruses, the latter being one of the key features that marks Linux (and OS X) out over Windows. Other key advantages Linux holds include its size and the efficiency with which it allocates CPU usage; whilst Windows may command quite a high percentage of your CPU capacity just to keep itself running, before counting any programs running on it, Linux is designed to use your CPU as efficiently as possible, in an effort to keep everything running faster. The kernel's open source roots mean it is easy to modify if you have the technical know-how, and thanks to the community surrounding it, a solution to any problem you have with a standard distribution is usually only a few clicks away. Disadvantages include a certain lack of user-friendliness for the uninitiated or less computer-literate user, since a lot of programs require an instruction typed at the command line; far fewer programs, especially commercial, professional ones, than Windows; an inability to process media as well as OS X (which is the main reason Apple computers appear to exist); and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many 'computer people' consider this a small price to pay and flock to the kernel in their thousands.
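For anyone wondering what 'an instruction typed at the command line' actually looks like, here is a minimal sketch using a few commands found on virtually every Linux system (the exact output will, of course, differ from machine to machine):

```shell
# Print the version of the kernel this machine is running
uname -r

# List the files in the current directory, with sizes and dates
ls -l

# Show how much memory is in use, in megabytes
free -m
```

Nothing here is especially arcane; the off-putting part for newcomers is simply that none of it is discoverable by clicking around with a mouse.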

However, the Linux kernel alone is not enough to make an operating system; hence the existence of distributions. Different distributions (or 'distros', as they're known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, window system, window manager, and desktop interface, to name but a few. A few of these components, such as the graphical user interface (or GUI, which covers the job of several of the above components) or the package manager (which handles installing, removing and updating programs), tend to be fairly ubiquitous (GNOME and KDE are common GUIs, and Synaptic a typical package manager), but different people like their operating system to run in slightly different ways. Therefore, variations on these other components are bundled together with the kernel to form a distro: a complete package that will run as an operating system in exactly the same fashion as you would encounter with Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu's older brother), Red Hat, Mandriva and Crunchbang; some of these, such as Ubuntu, are commercially backed enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.
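On Ubuntu or Debian, Synaptic is in fact just a friendly graphical front end to a command-line tool called apt-get, which does the real work of fetching and installing software. A hedged sketch of how it is typically used ('vlc', the media player, is only an illustrative package name):

```shell
# Refresh the list of packages available from the distro's servers
sudo apt-get update

# Download and install a program, along with everything it depends on
sudo apt-get install vlc

# Uninstall it again
sudo apt-get remove vlc
```

The 'sudo' prefix asks for administrator rights, which installing software requires; the package manager then works out all the dependencies for you, which is a large part of why installing software on Linux is usually painless once you know the incantation.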

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather use something that will get them sneered at by a minority of elitist nerds but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

Finding its feet

My last post on the recent history of western music took us up to the Jazz Age which, although it peaked in the 1920s, continued to occupy a position as the defining music genre of its age right up until the early 1950s. Today's post takes up the tale for another decade and a half, beginning in 1951.

By this time, a few artists (Goree Carter and Jimmy Preston, for example) had experimented with mixing 'black' music genres such as R&B and gospel with country and western to create a new, free-rocking sound. However, by the 50s radio, which had been another major force in the spread of jazz, had risen to prominence enough to become a true feature of US life, so when Cleveland DJ Alan Freed started deliberately playing R&B to a multiracial audience, even his small listenership was able to make the event a significant one. Not only that, but the adolescents of the 50s were the first generation to have the free time and disposable income to control their own lives, making them a key consumer market and allowing them to latch onto and fund whatever was new and 'cool' to them. They were the first teenagers. The 'black' musical experiments these humble beginnings spread to the masses would later grow into the genre that Freed himself would coin a name for: rock and roll.

Rock and roll might have originally been named by Freed, and might have found its first star in Bill Haley (whose 'Rock Around The Clock' topped the charts in 1955), but it became the riotous, unstoppable musical express train that it was thanks to a young man from Memphis, Tennessee, who walked into Sun Records in 1953 to record a song for personal use. His name was Elvis Presley.

’53 might have been Presley’s first recording experience, but his was not a smooth road. In eighth-grade music he is reported to have got only a C and been told that he couldn’t sing, a claim that was repeated when he failed an audition for a local vocal quartet in January 1954. However, in June of that year he recorded a 1946 blues hit, ‘That’s All Right’, totally transforming what had been a lovelorn lament into a riotous celebration. He, Winfield ‘Scotty’ Moore and Bill Black (the guitarist and bassist he was recording with) had created a new, exciting, free-flowing sound based around Presley’s unique singing style. Three days later, the song aired on local radio for the first time and calls flooded in demanding to know who the new singer was. Many were even more surprised when they found out that it was a strait-laced white boy playing what had previously been thought of as ‘black music’.

Completely unintentionally, Elvis had rewritten the rulebook of modern music: now you didn’t have to be black, you didn’t have to play the seedy venues, you didn’t have to play slow, old, or boring music, you didn’t have to be ‘good’ by classical standards, and, most importantly, your real skill was your showmanship. Whilst his two co-performers were natural showmen from the start, Presley was a nervous performer and his legs would shake during instrumental sections; the sight of a handsome young man wiggling his legs in wide-cut trousers proved somewhat hysterical for female sections of the audience, and worked the crowd into a frenzy that no previous performer had managed.

Elvis’ later career speaks for itself, but he lost his focus on making music in around 1960 as, with the death of Buddy Holly, the golden years of rock ‘n’ roll ended. However, the 50s had thrown another innovation into the mix: the electric guitar. Presley and his competitors had used them in their later performances, since they were lighter and easier to manoeuvre on stage and produced a better, louder sound on recorded tracks, but they wouldn’t come into their own until ‘the golden age of rock’ hit in the mid 60s.

By then, rock ‘n’ roll had softened and mellowed, descending into lighter tunes that were the ancestors of modern pop music (something I’m not sure we should be too thankful to Elvis for), and British acts had begun to be the trailblazers. British acts tended towards a harder sound, and Cliff Richard enjoyed a period of tremendous success in the UK, but even then the passage of rock had eased off slightly. It wasn’t new any more, and people were basically content to carry on listening; there wasn’t much consumer demand for a new sound. But then the baby boomers hit. The post-war goodwill of the late 40s and early 1950s had resulted in a spike in the birth rate of the developed world, and by around 1963 that generation had begun to grow up. A second wave of teenagers hit the world, all desperate to escape the dreary boredom of their parents’ existence and form their own culture, with their own clothing, film interests and, most importantly, music. The stage was set for something new to revolutionise the world of music, and the product that did was made in Britain.

Numerous bands from all over the country made up the British rock scene of the early 1960s, but the most prolific area was Liverpool. There, rock and roll once again underwent a fusion, this time with subgenres such as doo-wop and (again) R&B, forming another new sound centred on a driving, rhythmic beat built upon the electric guitar and drum kit. These beats formed a key part of the catchy, bouncy, memorable melodies that would become the staple of ‘beat’ music. Beat had taken over the British music scene by 1963, and by 1964 a British song had made number 1 in the US charts. It was called ‘I Want To Hold Your Hand’, and was written by four Liverpudlians who called themselves The Beatles.

To this day, the Beatles are the most successful musicians ever (sorry, fellow Queen fans: it’s true). Their first appearance on the Ed Sullivan Show in 1964 set a new record for an American TV audience (over 70 million), a booking that came about only because Sullivan’s plane had been forced to circle Heathrow Airport in the middle of the night so that this band he’d never heard of could land first and wade their way through their screaming fans; Sullivan decided then and there that he wanted them on his show. Along with other British acts such as The Rolling Stones and The Kinks, beat took the US by storm, but they were only the first. The Beatles’ first and greatest legacy was the structure of the rock band: a self-contained group writing their own songs, built around drums and electric guitar. All that was left was for acts like the Stones to cement singer/lead guitarist/bassist/drummer as the classic combination, and the formula was written. The music world was about to explode again.

And this story looks like taking quite a few more posts to tell…

The Hidden Benefits

Corporations are having a rather rough time of it at the minute in the PR department. This is only to be expected given the current economic climate, and given the fact that almost exactly the same feelings of annoyance and distrust were expressed during the other two major economic downturns of the last 100 years. Big business has always been the all-pervasive face of ‘the man’, and when said man has let us down (either during a downturn or at any point in history when somebody is holding a guitar), it tends to be (often justifiably) the main target of hatred. In essence, corporations are ‘the bad guys’.

However, no matter how cynical you are, there are a couple of glaring inconsistencies in this concept- things that can either (depending on your perspective) make the bad guys seem nice, make nice things seem secretly evil, or just make you go “WTF?”. Here we can find the proverbial shades of grey.

Let us consider, for instance, tourism. Nobody who lives anywhere even remotely pretty or interesting likes tourists, and some of the local nicknames for them, especially in coastal areas for some reason, are simultaneously interesting, hilarious and bizarre. They are an annoying bunch of people, seeming always to be asking dumb questions and trailing around places like flocks of lost sheep, and with roughly the same mental agility; although since the rest of us all act exactly the same when we are on holiday, it’s probably better to tolerate them a little. Then there is the damage they can do to a local area, ranging from footpath erosion and littering to the case of the planet Bethselamin, “which is now so worried about the cumulative erosion of 10 billion visiting tourists a year that any net imbalance between the amount you eat and the amount you excrete whilst on the planet is surgically removed from your body weight when you leave- so every time you go to the lavatory there it is vitally important to get a receipt” (Douglas Adams again). The tourism industry is often accused of stifling local economies in places like Yorkshire or the Lake District, where entire towns can consist of nothing but second homes (sending the local housing market haywire), tea shops and B&Bs, with seemingly no way out of a spiral of dependence upon it.

However, what if I was to tell you that tourism is possibly the single most powerful force acting towards the preservation of biodiversity and the combating of climate change? You might think me mad, but consider this: why is there still Amazonian rainforest left? Why are there vast tracts of national park all over southern Africa? We might (and in fact should) be able to think of dozens of very good reasons for preserving these habitats, not least ensuring that all of our great planet’s inhabitants are allowed to survive without being crushed under the proverbial bulldozer that is civilisation, and the value of the rainforests as a carbon sink. But, unfortunately, when viewed from a purely clinical standpoint these arguments do not stand up. Consider the rainforest: depending on your perspective this is either a natural resource that is useful for all sorts of namby-pamby reasons like ensuring the planet doesn’t suffocate, or a source of a potentially huge amount of money. Timber is valuable stuff, especially given the types (such as mahogany) and sizes of trees one gets in the Amazon basin. Combine that gain with the fact that many of the countries who own such rainforest are desperately poor and badly need the cash, and suddenly the plight of the Lesser Purple-Crested Cockroach seems less important.

And here tourists come to the rescue, for they are the sole financial justification for the preservation of the rainforests. The idea of keeping all this natural biodiversity around for its own sake is all well and good, but the same idea backed up by the prospect of people paying large sums of money to come and see it becomes doubly attractive, interesting governments in potential long-term financial gain rather than the quick buck to be made from simply using up their natural resources industrially.

Tourism is not the only industry that props up an entire section of life that we all know and love. Let me throw some names at you: Yahoo, Facebook, Google, Twitter. What do all of those (and many others besides) have in common? Firstly, all are based on the internet; secondly, the services all four offer are entirely free. Contrast that with similarity three: all are multi-billion dollar companies. How does this work? Answer: similarity four: all gain their income from the advertising industry.

Advertising and marketing is another sector of modern business that we all hate: adverts are annoying by their very presence, and can be downright offensive in some cases. Aggressive marketing is basically the reason we can’t have nice things, and there is something particularly soulless about an industry whose sole purpose is to sell you things based on what it says, rather than on what’s actually good about whatever it’s selling. Advertisers are perhaps the personification of the evils of big business, and yet without them, huge tracts of the internet, the home of the rebellion against modern consumer culture, simply could not exist. Without advertising, the information Facebook holds on its hundreds of millions of users would be financially useless, as would the users themselves, and thus it would not be able to exist as a company or, probably, an entity at all, let alone one that has just completed one of the highest-value stock market flotations in commercial history. Google would perhaps exist merely as a neat idea, something a geek might have thought of in college and never been able to turn into a huge business that handles a gigantic share of web traffic as well as running its own social network, email service and even the web browser I am typing this on.

This doesn’t suddenly make advertisers and tourism companies the angels of the business world, and they are probably just as deserving of all the cynicism they get (equally deserving, probably, are Facebook and Google, but that would ruin my argument). But it’s worth remembering that, no matter how pushy or annoying they get, it may be a small price to pay for the benefits their very existence brings us.

The Encyclopaedia Webbanica

Once again, today’s post will begin with a story- this time, one about a place that was envisaged over a hundred years ago. It was called the Mundaneum.

The Mundaneum today is a tiny museum in the city of Mons, Belgium, which opened in its current form in 1998. It is a far cry from the original, first conceived by Henri La Fontaine, who would go on to win the Nobel Peace Prize, and his fellow lawyer and pioneer Paul Otlet in 1895. The two men, Otlet in particular, had a vision: to create a place where every single piece of knowledge in the world was housed. Absolutely all of it.

Even in the 19th century, when the breadth of scientific knowledge was a tiny fraction of what it is today (a 19th-century version of New Scientist would have been publishable about once a year), this was a truly gigantic undertaking from a practical perspective. Not only did Otlet and La Fontaine attempt to collect a copy of just about every book ever written in search of information, but they went further than any conventional library of the time by also combing pamphlets, photographs, magazines and posters for data. The entire thing was stored on small 3×5 index cards kept in a carefully organised and detailed filing system, and this paper database eventually grew to contain over 12 million entries. People would send letters or telegraphs to the government-funded Mundaneum (the name references the French monde, meaning world, rather than ‘mundane’ as in boring), whose staff would search through the files in order to answer just about any question that could be asked.

However, the most interesting thing of all about Otlet’s operation, quite apart from the sheer conceptual genius of a man who was light-years ahead of his time, was his response to the problems posed when the enterprise got too big for its boots. After a while, the sheer volume of information and, more importantly, of paper meant that the filing system was getting too big to be practical. Otlet realised that this was not a problem that could ever be solved by more space or manpower; the problem lay in the use of paper itself. And this was where Otlet pulled his masterstroke of foresight.

Otlet envisaged a version of the Mundaneum where the whole paper and telegraph business would be unnecessary; instead, he foresaw a “mechanical, collective brain”, through which the people of the world could access all the information stored within it via a system of “electric microscopes”. Not only that, but he envisaged the potential for these ‘microscopes’ to connect to one another, letting people “participate, applaud, give ovations, [or] sing in the chorus”. Basically, a pre-war Belgian lawyer predicted the internet (and, in the latter statement, social networking too).

Otlet has never been included in the pantheon of web pioneers; he died in 1944, after his beloved Mundaneum had been occupied and used to house a Nazi art collection, and his vision of the web as primarily an information storage tool for nerdy types is hardly what we have today. But, to me, his vision of the web as a hub for sharing information and a man-made font of all knowledge is realised, at least in part, by one huge and desperately appealing corner of the web today: Wikipedia.

If you take a step back and look at Wikipedia as a whole, its enormous success and popularity can be quite hard to understand. Beginning from a practical perspective, it is a notoriously difficult site to work with: whilst accessing the information is very user-friendly, the editing process can be hideously confusing and difficult, especially for the not very computer-literate (seriously, try it). My own personal attempts at article-editing have almost always resulted in failure, bar some very small changes and additions to existing text (where I don’t have to deal with the formatting). This difficulty in formatting is a large contributor to another issue: Wikipedia articles are incredibly text-heavy, usually with only a few pictures and captions, which would be a major turn-off in a magazine or book. The very concept of an encyclopaedia edited and made by the masses, rather than by a select team of experts, also (initially) seems incredibly foolhardy. Literally anyone can type in just about anything they want, leaving the site incredibly prone to either vandalism or accidental misdirection (see xkcd.com/978/ for Randall Munroe’s take on how it can get things wrong). The site has come under heavy criticism over the years for this fact, particularly on its pages about people (Dan Carter, the New Zealand fly-half, has apparently considered taking up stamp collecting after hundreds of fans sent him stamps on the strength of a Wikipedia entry stating that he was a philatelist), and letting anyone edit it also leaves it prone to bias creeping in, despite the best efforts of Wikipedia’s team of writers and editors (personally, I suspect the site keeps its editing software deliberately difficult to use to minimise the number of people who can edit easily, and so to minimise this problem).
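To give a flavour of why newcomers struggle, here is a rough, simplified sketch of the ‘wiki markup’ that sits behind every article (real articles pile up far more templates and reference clutter than this, and the URL below is just a placeholder):

```text
A '''terminal emulator''' is a program that emulates a
[[computer terminal|hardware terminal]].<ref>{{cite web
|url=http://example.org |title=A placeholder reference}}</ref>

== History ==
* Triple apostrophes make bold text; double square brackets make links
* Templates in double curly brackets handle everything from citations to infoboxes
```

None of it is rocket science individually, but a full article interleaves hundreds of these fragments, which is exactly where casual editors (myself included) come unstuck.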

But, all that aside… Wikipedia is truly wonderful; it epitomises all that is good about the web. It is a free-to-use service, run by a not-for-profit organisation, devoid of advertising and funded solely by the people of the web whom it serves. It is the font of all knowledge to an entire generation of students and schoolchildren, and is the number one place to go for anyone looking for an answer about anything, or who’s just interested in something and would like to learn more. It is built on the principle of everyone sharing and contributing; even flaws or uncited claims are flagged by casual users if they slip past the editors the first time around. Its success is built upon its size, both big and small: the sheer quantity of articles (there are now almost four million, most of them rather bigger than would have fitted on one of Otlet’s index cards) means that it can be relied upon for just about any query (and will be at the top of 80% of my Google searches), while its small server footprint and tiny paid staff (the Wikimedia Foundation employs fewer than 150 people; the tens of thousands of regular contributors are volunteers) keep running costs low and allow it to keep on functioning despite its user-sourced funding model. Wikipedia is currently the 6th (ish) most visited website in the world, with 12 billion page views a month. And all this from an entirely not-for-profit organisation designed to let people know facts.

Nowadays, the Mundaneum is a small museum, a monument to a noble but ultimately flawed experiment. Its original offices in Brussels were left empty, gathering dust after the war, until a graduate student discovered them and eventually provoked enough interest to move the old collection to Mons, where it currently resides as a shadow of its former glory. But its spirit lives on in the collective brain its founder envisaged. God bless you, Wikipedia; long may you continue.