“Oh man, you gotta see this video…”

Everyone loves YouTube, or at least the numbers suggest so; the total number of views the site has racked up must run into the low trillions, its most popular video (of course it’s still Gangnam Style) has over a billion views on its own, and YouTube has become so ubiquitous that if a video of something can’t be found there, then it probably doesn’t exist.

Indeed, YouTube’s ubiquity is perhaps the most surprising, or at least the most interesting, thing about it; YouTube is certainly not the only large-scale video hosting site, and wasn’t even the first, being launched in 2005, a year after Vimeo (the only other such site I am familiar with) and well after several others had made efforts at video-sharing. It was the brainchild of three early employees of PayPal: Chad Hurley, Jawed Karim and Steve Chen. According to a commonly reported story (one frequently claimed to be untrue), the three had recorded video at a dinner party but were having difficulty sharing it online, so, being reasonably gifted programmers, they decided to build the service themselves. What actually happened has never really been confirmed, but the first video (showing Karim at San Diego Zoo; yes, perhaps not the most auspicious start) went up in April 2005, and the rest, of course, is history.

To some, YouTube’s meteoric rise might seem surprising, or simply the result of good fortune favouring it over some other site. Indeed, given that Apple’s devices (most famously the iPhone and iPad) never supported the Adobe Flash format the site used for video, it’s remarkable (and a testament to Microsoft’s dominance of the PC market for so many years) that the site was able to take off as it did. However, if one looks closely, it isn’t hard to identify the hallmarks of a business model that was born to succeed online, one that bears a striking resemblance to the story of Facebook: something that started purely as a cool idea for a website, and treated monetisation as a secondary priority to be dealt with when it came along. The audience was the first priority, and everything was geared towards maximising users’ ability to both share and view content freely. Videos didn’t (and still don’t) have to be approved or inspected before being uploaded to the site (although anything flagged by users as inappropriate will be watched and taken down if the moderators see fit), there is no limit on how much a user can watch or upload, and there is never any need to pay for anything. YouTube understands the most important thing about the internet: it is a place with an almost infinite supply of stuff and a finite number of users willing to surf around and look for it. This makes the value of any one piece of content to a user very low, so everything must be done to attract ‘customers’ before one can worry about such pesky things as money. YouTube is a place of non-regulation, of freedom; no wonder the internet loves it.

The proof of the pudding is, of course, in the money; even as early as November 2005, Sequoia Capital had enough faith in the company (along with superhuman levels of optimism and sheer balls) to invest over $11 million in it. Less than a year later, YouTube was bought by Google, the past masters at knowing how the internet works, for $1.65 billion. Given that Sequoia’s comparatively meagre investment is estimated to have netted them a 30% share of the company by April 2006, this suggests its value increased more than 40-fold in around six months. That ballsy investment has proved a very, very profitable one, but some would argue that even this massive (and very quickly made) whack of cash hasn’t proved worth it in the long run. After all, less than two years after he was offered $500,000 for Facebook, Mark Zuckerberg’s company was worth several billion and still rising (it’s currently valued in the tens of billions, even after that messy stock market flotation), and YouTube is now, if anything, even bigger.

It’s actually quite hard to visualise just how big a thing YouTube has now managed to become, but I’ll try: every second, roughly one hour of footage is uploaded to the site, or to put it another way, you would have to watch continually for the next three and a half millennia just to get through the stuff published this year. Even watching just the ones involving cats would be a full-time job. I occasionally visit one channel with more than one and a half thousand videos published by just one guy, each around 20 minutes long, and there are several thousand people across the world who are able to make a living through nothing more than sitting in front of a camera and showing their antics to the world.

Precisely because of this, the very concept of YouTube has not infrequently come under fire. In much the same way as on social networking sites, the free and open nature of YouTube means everything is on show for the whole world to see, so that video you posted of your mate doing this hilarious thing while drunk one time could, at best, make him the butt of a few jokes among your mates or, at worst, subject him to large-scale public ridicule. For every TomSka, beloved by his followers and able to live off YouTube-related income, there is a Star Wars Kid, who (after having the titular video put online without his permission) was forced to seek psychiatric help for the bullying and ridicule he became the victim of, and who launched a high-profile lawsuit against his antagonists. Like so many things, YouTube is neither beneficial nor detrimental to humanity on its own; it is merely a tool of our modern world, and to what degree of awesomeness or depravity we exploit it is down purely to us.

Sorry about that, wasn’t really a conclusion, was it?

Up one level

In my last post (well, last excepting Wednesday’s little topical deviation), I talked about the real nuts and bolts of a computer, detailing the function of the transistors that are so vital to its workings. Today, I’m going to take one step up and look at a slightly broader picture, this time concerned with the integrated circuits that use those components to do the real grunt work of computing.

An integrated circuit (IC) is simply a circuit that is not assembled from multiple separate electronic components- in effect, whilst a standard circuit might consist of a few bits of metal and plastic connected to one another by wires, in an IC everything is built in the same place and fabricated as a single unit. The main advantage of this is that since the components don’t have to be manually stuck to one another, but are built in circuit form from the start, there is no worrying about the fiddliness of assembly, and ICs can be mass-produced quickly and cheaply with components on a truly microscopic scale. They generally consist of several layers on top of the silicon itself, simply to allow space for all of the metal connecting tracks and insulating materials to run over one another (this pattern is usually, perhaps ironically, worked out on a computer), and the sheer precision required of their manufacture surely makes them one of the marvels of the engineering world.

But… how do they make a computer work? Well, let’s start by looking at a computer’s memory, which in all modern computers takes the form of semiconductor memory. This consists of millions upon millions of microscopically small circuits known as memory circuits, each of which contains one or more transistors. Computers are electronic, meaning the only thing they understand is electricity- for the sake of simplicity and reliability, this takes the form of whether the current in a given memory circuit is ‘on’ or ‘off’. If the switch is on, the circuit represents a 1; if it is off, a 0. These memory circuits are generally grouped together, so each group holds an ordered pattern of ones and zeroes, of which there are many different permutations. This method of counting in ones and zeroes is known as binary arithmetic, and is sometimes thought of as the simplest form of counting. (On a hard disk, patches of magnetised material represent the binary information rather than memory circuits.)
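If you fancy seeing how a row of on/off switches turns into an ordinary number, here’s a rough sketch in Python (the switch pattern is just a made-up example, and real hardware obviously doesn’t run Python- this is only the arithmetic):

```python
# A minimal sketch: a group of on/off "memory circuits" read as a binary number.
switches = [1, 0, 1, 1]  # on, off, on, on (an arbitrary example pattern)

value = 0
for bit in switches:
    value = value * 2 + bit  # each new switch doubles what came before, then adds itself

print(switches, "->", value)  # [1, 0, 1, 1] -> 11
```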

Each little memory circuit, with its simple on/off value, represents one bit of information. Eight bits grouped together form a byte, and there may be billions of bytes in a computer’s memory. The key task of a computer programmer is, therefore, to ensure that all the data the computer needs to process is written in binary form- but this pattern of 1s and 0s might represent anything from the content of an email to the colour of one pixel of a video. Clearly, memory on its own is not enough, and the computer needs some way of translating the information stored into the appropriate form.
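To give a flavour of what those patterns can stand for, here’s a quick illustration in Python (the particular letter and colour below are arbitrary choices of mine; the bit patterns it prints are their genuine encodings):

```python
# The same kind of 8-bit pattern can stand for a letter...
letter = "A"
print(format(ord(letter), "08b"))  # 01000001 -- the character code for 'A'

# ...or for one colour channel of a pixel: here, three bytes for red, green, blue.
pixel = (255, 165, 0)  # an orange-ish pixel
print([format(channel, "08b") for channel in pixel])
# ['11111111', '10100101', '00000000']
```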

A computer’s tool for doing this is known as a logic gate, a simple electronic device consisting of (you guessed it) yet more transistor switches. A gate takes one or two binary inputs, each either ‘on’ or ‘off’, and translates them into another value. There are three basic types: AND gates (if both inputs equal 1, output equals 1- otherwise, output equals 0), OR gates (if either input equals 1, output equals 1- if both inputs equal 0, output equals 0), and NOT gates (if input equals 1, output equals 0; if input equals 0, output equals 1). The NOT gate is the only one of these with a single input, and combinations of these gates can perform other functions too, such as NAND (not-AND) or XOR gates (exclusive OR: output equals 1 if exactly one input equals 1, and 0 if both inputs are the same), as sketched below. A computer’s CPU (central processing unit) will contain many millions of these, connected up in such a way as to link the various parts of the computer together appropriately, translate the instructions held in memory into whatever function a given program should be performing, and thus cause the relevant bit (if you’ll pardon the pun) of information to trigger the correct process for the computer to perform.
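Here’s a rough sketch of those gates as Python functions, plus a ‘half adder’ built out of them- one small example of how combining gates gets you from pure logic to actual arithmetic (real gates are made of transistors, of course; this only mimics the logic):

```python
# The three basic gates...
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# ...and two more built by combining them.
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))  # 1 only if exactly one input is 1

def half_adder(a, b):
    """Add two single bits: XOR gives the sum bit, AND gives the carry bit."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # (sum, carry)
# The last line prints 1 + 1 = (0, 1), i.e. binary 10, which is two.
```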

For example, if you click on an icon on your desktop, your computer will put the position of your mouse and the input of the clicking action through an AND gate to determine that it should first highlight that icon. To do this, it orders the three colour components of each of the many pixels of that symbol to change their shade by a certain degree, and the part of the computer responsible for the monitor’s colour sends a message to the Arithmetic Logic Unit (ALU), the computer’s counting department, to ask what the numerical values of the old shades plus the highlighting are, to give it the new shades of colour. Oh, and the CPU should also open the program. To do this, its connections send a signal off to the memory to say that program X should open now. Another bit of the computer then searches through the memory to find program X, giving it the master ‘1’ signal that causes it to open. Now that it is open, this program routes a huge amount of data back through the CPU to tell it to change the pattern of pretty colours on the screen again, requiring another slew of data to go through the ALU, and to say that areas A, B and C of the screen are now all buttons, so if you click there then we’re going to have to go through this business all over again. Basically, the CPU’s logical function consists of ‘IF this AND/OR this happens, which signal do I send off to ask the right part of the memory what to do next?’. And it will do all this in a minuscule fraction of a second. Computers are amazing.
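Just to make the ‘old shades plus the highlighting’ step concrete, here’s a toy version in Python- the two pixels, the brightness offset and the cap at 255 are made-up illustrative numbers, and a real ALU does this with raw binary addition rather than a scripting language:

```python
# A toy version of highlighting an icon: add a brightness offset to each
# colour value of each pixel, which is exactly the sort of sum the ALU handles.
icon = [(30, 60, 90), (200, 220, 240)]  # two example pixels, as (red, green, blue)
HIGHLIGHT = 25                           # how much brighter to make them

highlighted = [
    tuple(min(channel + HIGHLIGHT, 255) for channel in pixel)  # cap at the byte maximum, 255
    for pixel in icon
]
print(highlighted)  # [(55, 85, 115), (225, 245, 255)]
```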

Obviously, nobody in their right mind is going to go through the whole business of telling the computer exactly what to do with each individual piece of binary data manually, because if they did nothing would ever get done. For this purpose, therefore, programmers have invented programming languages to translate their wishes into binary, and for a little more detail about them, tune in to my final post on the subject…
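In the meantime, if you want a small taste of that translation, Python can give you one itself- its dis module shows the simple step-by-step instructions a line of source code gets turned into (bytecode for Python’s own virtual machine rather than raw binary for the CPU, but the flavour is the same):

```python
# Show the low-level instructions behind one line of high-level code.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Prints something along the lines of:
#   LOAD_FAST  a
#   LOAD_FAST  b
#   BINARY_ADD   (shown as BINARY_OP on newer versions of Python)
#   RETURN_VALUE
```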