The Epitome of Nerd-dom

A short while ago, I did a series of posts on computing, based on research I’d done while looking into installing Linux. I should now come clean and admit that, between the writing of that first post and now, I have tried and failed six times to install Ubuntu on an old laptop, which has served to teach me even more about exactly how it works, and how it differs from its more mainstream competitors. So, since I don’t have any better ideas, I thought I might dedicate this post to Linux itself.

Linux is named after both its founder, Linus Torvalds, a Finnish programmer who first released the Linux kernel in 1991, and Unix, the operating system that could be considered the grandfather of all modern OSs and upon which Torvalds based his design (note- whilst Torvalds’ first name has a soft, extended first syllable, the first syllable of the word Linux should be a hard, short, sharp ‘ih’ sound). The system has its roots in the work of Richard Stallman, a lifelong pioneer and champion of the free, open-source software movement, who started the GNU project in 1983. His ultimate goal was to produce a free, Unix-like operating system, and in keeping with this he wrote a software license allowing anyone to use, modify and distribute the software so long as they stayed within the license’s terms (ie any modified versions they pass on must remain free under the same license- it doesn’t actually forbid selling the software, only locking it away). The software produced as part of the GNU project was numerous (including the still widely-used GCC compiler), but its own kernel never caught on, and so with regard to its final aim of a complete operating system the project was a failure (although the GNU General Public License remains one of the most widely-used software licenses of all time).

Torvalds began work on Linux as a hobby whilst a student in April 1991, writing his code under MINIX, another Unix-like system, and basing his design on its structure. Initially, he hadn’t been intending to write a complete operating system at all, but rather a type of program called a terminal emulator- something that mimics an old-fashioned computer terminal, letting you type text commands to communicate with another machine. Unusually, his emulator ran directly on the computer’s hardware, independent of any operating system, and so acted almost like a tiny OS in its own right. As such, the two are somewhat related, and it wasn’t long before Torvalds ‘realised’ he had effectively written the kernel of an operating system; since the GNU operating system had fallen through and there was no widespread, free-to-use kernel out there, he pushed forward with his project. In August of that same year he published a now-famous post on Usenet, a kind of early internet forum, saying that he was developing an operating system that was “starting to get ready”, and asking for feedback concerning where MINIX was good and where it was lacking, “as my OS resembles it somewhat”. He also, interestingly, said that his OS “probably never will support anything other than AT-harddisks”. How wrong that statement has proved to be.

When he finally published Linux, he originally did so under his own license- however, he borrowed heavily from GNU software in order to make it run properly (so as to have a proper interface and such), and released later versions under the GNU GPL. Torvalds and his associates continue to maintain and update the Linux kernel (version 3.0 being released last year) and, despite some teething troubles with those who considered its design old-fashioned, those who thought MINIX code had been stolen (rather than merely learned from), and Microsoft (who have since turned tail and are now one of the larger corporate contributors to the Linux kernel), the system is now regarded as the pinnacle of Stallman’s open-source dream.

One of the keys to its success lies in its constant evolution, and the interactivity of this process. Whilst Linus Torvalds and co. are the main developers, they write very little code themselves- instead, other programmers and members of the Linux community offer up suggestions, patches and additions, either to the Linux distributors (more on them later) or as source code for the kernel itself. All the main team have to do is pick and choose the features they want to see included, and continually prune what they get to maximise efficiency and minimise vulnerability to viruses- the latter being one of the key features that marks Linux (and OS X) out from Windows. Other key advantages Linux holds include its small size and the efficiency with which it allocates CPU usage; whilst Windows may command a sizeable chunk of your CPU capacity just to keep itself running, before counting any programs running on it, Linux is designed to use the CPU as sparingly as possible, in an effort to keep things running faster. The kernel’s open-source roots mean it is easy to modify if you have the technical know-how, and the community surrounding it means that a fix for any problem you have with a standard distribution is usually only a quick search away. Disadvantages include a certain lack of user-friendliness for the uninitiated or less computer-literate user (since a lot of tasks require an instruction typed into the command line), far fewer programs- especially commercial, professional ones- than Windows, an inability to handle media as smoothly as OS X (which is the main reason Apple computers appear to exist), and a tendency to go wrong more frequently than commercial operating systems. Nonetheless, many ‘computer people’ consider this a small price to pay and flock to the kernel in their thousands.

However, the Linux kernel alone is not enough to make an operating system- hence the existence of distributions. Different distributions (or ‘distros’ as they’re known) consist of the Linux kernel bundled together with all the other features that make up an OS: software, documentation, a window system, a window manager, and a desktop interface, to name but some. A few of these components, such as the graphical user interface (or GUI, a term covering the job of several of the above components), or the package manager (which handles program installation, removal and updating), tend to be fairly ubiquitous (GNOME and KDE are common GUIs, and Synaptic a typical package manager front-end), but different people like their operating system to run in slightly different ways. Therefore, variations on these other components are bundled together with the kernel to form a distro, a complete package that will run as an operating system in much the same fashion as Windows or OS X. Such distros include Ubuntu (the most popular among beginners), Debian (Ubuntu’s older brother), Red Hat, Mandriva and CrunchBang- some of these, such as Ubuntu (backed by a company called Canonical), are commercially supported enterprises (although how they make their money is a little beyond me), whilst others are entirely community-run, maintained solely thanks to the dedication, obsession and boundless free time of users across the globe.
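To give a flavour of what a package manager actually does, here is roughly what a session looks like on a Debian- or Ubuntu-based distro using APT, the command-line tool that Synaptic sits on top of (the package name ‘vlc’ is just an example- substitute whatever program you’re after):

```shell
$ sudo apt-get update          # refresh the list of available packages
$ sudo apt-get install vlc     # download and install a program
$ sudo apt-get remove vlc      # uninstall it again
$ sudo apt-get upgrade         # bring everything already installed up to date
```

Other distro families have their own, analogous tools (Red Hat and Mandriva use RPM-based ones, for instance), but the idea- one central command that fetches, installs and updates software for you- is much the same everywhere.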

If you’re not into all this computer-y geekdom, then there is a lot to dislike about Linux, and many an average computer user would rather stick with something that may get them sneered at by a minority of elitist nerds, but that they know and can rely upon. But, for all of our inner geeks, the spirit, community, inventiveness and joyous freedom of the Linux system can be a wonderful breath of fresh air. Thank you, Mr. Torvalds- you have made a lot of people very happy.

The Science of Iron

I have mentioned before that I am something of a casual gymgoer- it’s only a relatively recent hobby, and only in the last couple of months have I given any serious thought and research to my regime (in which time I have also come to realise that some of my advice in previous posts was either lacking in detail or partially wrong- sorry; it’s still basically useful). However, whilst the internet is, as you might expect, inundated with advice about training programs, tips on technique and exercises to work different muscle groups (often wildly disagreeing with one another), there is very little information available concerning the basic science behind building muscle- it’s just not something the average gymgoer knows. Since I am fond of a little research now and then, I thought I might attempt an explanation of some of the basic biology involved.

DISCLAIMER: I am not a biologist, and am getting this information via the internet and a bit of ad libbing, so don’t take this as anything more than a basic guideline.

Everything in your body is made up of tiny, individual cells, each a small sac consisting of a complex (and surprisingly ‘intelligent’) membrane, a nucleus to act as its ‘brain’, and a lot of watery, chemical-y stuff called cytoplasm squelching about and reacting with things. It follows from this that to increase the size of an organ or tissue requires these cells to do one of two things: increase in number (hyperplasia) or increase in size (hypertrophy). The former is mainly associated with growths such as neoplasia (tumours), and has only been shown to have an impact on muscles in response to the injection of growth hormones, so when we’re talking about strength, fitness and muscle building we’re really interested in hypertrophy.

Hypertrophy itself is still a fairly broad term biologically, and only two aspects of it concern us from an exercise point of view: muscular and ventricular hypertrophy. As the respective names suggest, the former relates to the cells in skeletal muscle increasing in size, whilst the latter concerns the increase in size and strength of the muscles making up the walls of the heart (the largest chambers of which are called the ventricles). Both are part of the body’s long-term response to exercise, and for both the basic principle is the same- but before I get onto that, a quick overview of exactly how muscles work may be in order.

A muscle cell (or muscle fibre) is one of the largest cells in the body, vaguely tubular in shape and consisting in part of many smaller structures known as myofibrils (or muscle fibrils). Muscle cells are also unusual in that they contain multiple nuclei, a response to their size and complex function, and instead of cytoplasm contain a liquid called sarcoplasm (more densely packed with glycogen fuel and oxygen-binding proteins, enabling the muscles to respire more quickly and efficiently in response to sudden, severe demand). Each myofibril is built from repeating units known as sarcomeres, joined end-to-end; within each sarcomere are overlapping protein strands called myofilaments, the thicker of which are made of a family of proteins called myosins. This structure is only present in skeletal, rather than smooth, muscle cells (the lack of it gives the latter a more regular, smoothly connected appearance under the microscope, hence the name) and is responsible for the increased strength available to skeletal muscles. When a muscle fibre receives an electrical impulse from the brain or spinal cord, the filaments slide over one another, shrinking certain areas or ‘bands’ within the sarcomeres and causing the muscle as a whole to contract. When the impulse is removed, the muscle relaxes; but it cannot extend itself, so another muscle working with it in what is known as an antagonistic pair must pull back on it to return it to its original position.

Now, when that process is repeated a lot in a short time frame, or when a large load is placed on the muscle fibre, the fibrils can become damaged. If they are actually torn then a pulled muscle results, but if the damage is (relatively) minor then the body can repair it by shipping in more amino acids (the building blocks of the proteins that make up our bodies) and fuel (glycogen and, most importantly, oxygen). However, to safeguard against any future event causing similar damage, the body overcompensates on its repairs, rebuilding the protein structures a little more strongly and replacing the lost fuel in the sarcoplasm with some to spare. This is the basic principle of muscular hypertrophy: the body’s repair systems overcompensating for minor damage.

There are yet more subdivisions to consider, for there are two main types of muscular hypertrophy. The first is myofibrillar hypertrophy, concerning the rebuilding of the myofibrils with more protein so they are stronger and able to pull against larger loads. This enables the muscle to lift heavier weights and makes one stronger, and is the main result of doing few repetitions at a high load, since this causes the most damage to the myofibrils themselves. The other type is sarcoplasmic hypertrophy, concerning the packing of more sarcoplasm into the muscle cell to better supply the muscle with fuel and oxygen. This helps the muscle cope better with prolonged exercise and builds a greater degree of muscular endurance, and also increases the size of the muscle, as the extra liquid causes it to swell in volume. It is best achieved by doing more repetitions at a lower load, since this longer-term exercise puts more strain on the sarcoplasm’s ability to supply oxygen. It is also advisable to do fewer sets (but do them properly) of this type of training, since it is more tiring: muscles ache and fatigue due to the buildup of lactic acid within them, produced when an insufficient supply of oxygen forces them to respire anaerobically. This is why more repetitions at a lower weight feel like harder work, but are actually going to be less beneficial if you are aiming to build muscular strength.

Ventricular (or cardiac) hypertrophy combines both of these effects, in response to the increased load placed on the heart muscle by regular exercise. It causes the walls of the ventricles to thicken and strengthen, so that the heart can beat less often (but more powerfully) to supply blood to the body. In elite athletes this has another effect: during exercise their heart responds not so much by beating more frequently as by beating more strongly, swelling more as it fills so as to send more blood around the body with each beat. Athletic heart syndrome, where the slowing of the pulse and the swelling of the heart are especially pronounced, can even be mistaken for severe heart disease by an ill-informed doctor.

So… yeah, that’s how muscle builds (I apologise, by the way, for my heinous overuse of the word ‘since’ in the above explanation). I should point out quickly that this is not a fast process: each successive rebuilding of a muscle increases its strength only by a small amount, even with serious weight training, and the body’s natural tendency to let an under-used muscle degrade over time means that hard work must constantly be put in to maintain any increase in muscular size, strength and endurance. But then again, I suppose that’s partly what we like about the gym: the knowledge that we have earned our strength, and that our willingness to put in the hard work is what sets us apart from those sitting on the sofa watching TV. If that doesn’t sound too massively arrogant.