…but some are more equal than others

Seemingly the key default belief of any modern, respectable government- and, indeed, of any well brought-up child of the modern age- is egalitarianism: that all men are born equal. Numerous documents, from the US Declaration of Independence to the UN’s Universal Declaration of Human Rights, have proclaimed this as a ‘self-evident truth’, and anyone who still blatantly clings to the idea that some people are born ‘better’ than others by virtue of their family having more money is dubbed out of touch at best, and (bizarrely) a Nazi at worst. This might be considered surprising given the extent to which we approve of, and set store by, a person’s rank or status.

I mean, think about it. A child from a well-respected, middle class family with two professional parents will invariably get more opportunities in life, and will frequently be considered more ‘trustworthy’, than a kid born into a broken home with a mother on benefits and a father in jail, particularly if his accent (especially) or skin colour (possibly to a slightly lesser extent in Europe than the US) suggests this fact. Someone in an expensive, tailored suit stands a better chance at a job interview than a candidate in an old, fading jacket with worn knees on trousers he has never been rich enough to replace, and I haven’t even started on the wage and job availability gap between men and women, despite the fact that there are nowadays more female university graduates than male ones. You get the general idea. We might think that all are born equal, but that doesn’t mean we treat them like that.

Some have said that this, particularly in the world of work, is to do with the background and age of the people concerned. Particularly in large, old and incredibly valuable corporate enterprises such as banks, the average age of senior staff and shareholders tends to be on the grey end of things, the majority of them are male, and many of them will have had the top-quality private education that allowed them to get there; so the argument put forward is that these men were brought up surrounded by this sort of ‘public schoolers are fantastic and everyone else is a pleb’ mentality. And it is without doubt true that very few companies have an average board-member age below 50, and many are above 65; in fact the average age of a CEO in the UK has recently risen from a decade-long value of 51 to nearly 53. However, the evidence suggests that the inclusion of younger board members and CEOs generally benefits a company by providing a fresher understanding of the modern world- data that could only be gathered because there is a large number of young, high-ranking businesspeople to evaluate. And anyway, in most job interviews it’s less likely to be the board asking the questions than a recruiting officer of medium business experience- this may be an issue, but I don’t think it’s the key thing here.

It could well be that the true answer is that there is no cause at all, and the whole business is nothing more than a statistical blip. In Freakonomics, an analysis was done to find the twenty ‘blackest’ and ‘whitest’ boys’ names in the US (I seem to remember DeShawn was the ‘blackest’ and Jake the ‘whitest’), and the job prospects of people with names on the two lists were then compared. The results suggested that people with one of the ‘white’ names did better in the job market than those with ‘black’ names, perhaps suggesting that interviewers are being, subconsciously or not, racist. But a statistical analysis revealed this not, in fact, to be the case; we must remember that black Americans are, on average, less well off than their white countrymen, meaning they are more likely to go to a dodgy school, have problems at home or hang around with the wrong friends. Therefore, black people do worse, on average, in the job market because they are more likely to be less well-qualified than white equivalents, making them, from a purely analytical standpoint, often worse candidates. This meant that Jake was more likely to get a job than DeShawn because Jake was simply more likely to be a better-educated guy, so any racism on the part of job interviewers is not prevalent enough to be statistically significant. To some extent, we may be looking at the same thing here- people who turn up to an interview in cheap or hand-me-down clothes are likely to have come from a poorer background than someone in a tailored Armani suit, and are therefore likely to have had a lower standard of education, making them less attractive candidates to an interviewing panel. Similarly, women tend to drop their careers earlier in life if they want to start a family, since the traditional family model puts the man as chief breadwinner, meaning they are less likely to advance up the ladder and earn the high wages that could even out the difference in male/female pay.

But statistics cannot quite cover everything- to use another slightly tangential bit of research, a study done some years ago found that teachers gave higher marks to essays written in neat handwriting than they did to identical essays written more messily. The neat handwriting suggested a diligent approach to learning and a good education in the child’s formative years, making the teacher think the child was cleverer, and thus deserving of more marks, than one with a scruffier, less orderly hand. Once again, we can draw parallels to our two guys in their different suits. Mr Faded may have good qualifications and present himself well, but his attire suggests to his interviewers that he is from a poorer background. We have a subconscious understanding of the link between poorer backgrounds and the increased risk of poor education and other compromising factors, and so the interviewers unconsciously conclude that our man has been less well educated than Mr Armani, even if the evidence presented before them suggests otherwise. They are not trying to be prejudiced; they just think the other guy looks more likely to be as good as his paperwork suggests. Some of it isn’t even linked to such logical connections: research suggests that interviewers, just like people in everyday life, are drawn to those they feel are similar to them, and they might also make the subconscious link that ‘my wife stays at home and looks after the kids, there aren’t that many women in the office, so what’s this one doing here?’- again, not deliberate discrimination, but it happens.

In many ways this is an unfortunate state of affairs, and one that we should attempt to remedy in everyday life whenever and wherever we can. But a lot of the stuff that to a casual observer might look prejudiced, might seem to violate our egalitarian creed, we do without thinking, letting our brain make connections that logic says it should not. The trick is not to ‘not judge a book by its cover’, but not to let your brain register that there’s a cover at all.


Practical computing

This looks set to be my final post of this series about the history and functional mechanics of computers. Today I want to get onto the nuts & bolts of computer programming and interaction, the sort of thing you might learn as a budding amateur wanting to figure out how to mess around with these things, and who’s interested in exactly how they work (bear in mind that I am not one of these people and am, therefore, likely to get quite a bit of this wrong). So, to summarise what I’ve said in the last two posts (and to fill in a couple of gaps): silicon chips are massive piles of tiny electronic switches; memory is stored in tiny circuits that are either off or on; this pattern of off and on can be used to represent information in memory; memory stores data and instructions for the CPU; the CPU has no actual ability to do anything itself but automatically delegates, through the structure of its transistors, to the areas that do; and the arithmetic logic unit is a dumb counting machine used to do all the grunt work, which is also responsible, through the CPU, for telling the screen how to make the appropriate pretty pictures.

OK? Good, we can get on then.

Programming languages are a way of translating the medium of computer information and instruction (binary data) into our medium of the same: words and language. Obviously, computers do not understand that the buttons we press have symbols on them, that these symbols mean something to us, or that the machine is built to reproduce those same symbols on the monitor when we press them- but we humans do, and that makes computers actually usable for 99.99% of the world population. When a programmer brings up an appropriate program and starts typing instructions into it, at the time of typing their words mean absolutely nothing to the machine. The key thing is what happens when their data is committed to memory, for here the program concerned kicks in.
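To make that concrete, here’s a minimal sketch in Python (a language chosen arbitrarily- this post doesn’t commit to one) of what typed symbols look like from the machine’s side: nothing but patterns of ons and offs.

```python
# Take a short instruction a programmer might type and show the binary
# pattern the computer actually stores for it. The instruction word 'ADD'
# is just an example, not anything from the post.
message = "ADD"

# ord() gives each character's numeric code; format(..., '08b') renders
# that code as eight binary digits - one byte of ons and offs.
bits = [format(ord(ch), "08b") for ch in message]
print(bits)  # ['01000001', '01000100', '01000100']
```

The symbols only mean something to us; to the hardware, ‘A’ is simply the pattern 01000001.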

The key feature that defines a programming language is not the language itself, but the interface that converts words to instructions. Built into the workings of each is a list of ‘words’ in binary, each word having a corresponding, but entirely different, string of data associated with it that represents the appropriate set of ‘ons and offs’ that will get a computer to perform the correct task. This works in one of two ways: with an ‘interpreter’, an inbuilt system whereby the program is stored just as words and is converted to ‘machine code’ by the interpreter each time it is accessed from memory; or, more commonly, with a compiler. This basically means that once you have finished writing your program, you hit a button to tell the computer to ‘compile’ your written code into an executable program in data form. This allows you to delete the written file afterwards, makes programs run faster, and gives programmers an excuse to bum around all the time (I refer you here).
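The ‘list of words, each mapped to an action’ idea can be illustrated with a toy interpreter, sketched here in Python- the instruction names and the whole design are invented for illustration, and real machine code is vastly lower-level than this:

```python
# A toy interpreter: a built-in table maps each instruction 'word' to the
# action the machine should perform, and the stored program is looked up
# word by word as it runs. LOAD and ADD are made-up instructions.
def run(program):
    stack = []
    table = {
        "LOAD": lambda arg: stack.append(int(arg)),                # push a number
        "ADD": lambda _: stack.append(stack.pop() + stack.pop()),  # combine the top two
    }
    for line in program:
        word, _, arg = line.partition(" ")  # split 'LOAD 2' into word + argument
        table[word](arg)                    # look the word up, do the action
    return stack[-1]

print(run(["LOAD 2", "LOAD 3", "ADD"]))  # 5
```

A compiler does the lookup once, ahead of time, and saves the resulting machine actions; an interpreter, like this sketch, does it afresh on every run- which is exactly why compiled programs tend to run faster.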

That is, basically, how computer programs work- but there is one last key feature in the workings of a modern computer, one that has divided nerds and laymen alike across the years and decades and to this day provokes furious debate: the operating system.

An OS, something like Windows (Microsoft), OS X (Apple) or Linux (nerds), is basically the software that enables the CPU to do its job of managing processes and applications. Think of it this way: whilst the CPU might put two inputs through a logic gate and send an output to a program, it is the operating system that sets it up, determining exactly which gate to put them through and exactly how that program will execute. Operating systems are written onto the hard drive, and could, theoretically, be written using nothing more than a magnetised needle, a lot of time and a plethora of expertise to flip the magnetically charged ‘bits’ on the hard disk. They consist of many different parts, but the key feature of all of them is the kernel, the part that manages the memory, optimises the CPU’s performance and translates programs from memory to screen. The precise method by which this last function happens differs from OS to OS, which is why a program written for Windows won’t work on a Mac, and why Android (Linux-powered) smartphones couldn’t run iPhone (iOS) apps even if they could access the store. It is also the cause of all the debate between advocates of different operating systems, since different translation methods prioritise and are better at dealing with different things, work with varying degrees of efficiency and are more or less vulnerable to virus attack. However, perhaps the most vital thing that modern OSs do on our home computers is the stuff that, at first glance, seems secondary: moving stuff around and scheduling. A CPU cannot process more than one task at once, meaning that it should not theoretically be possible for a computer to multi-task; the sheer concept of playing minesweeper whilst waiting for the rest of the computer to boot up and sort itself out would be just too outlandish for words.
However, each OS contains a clever piece of software called a scheduler, which switches from process to process very rapidly (remember computers run so fast that they can count to a billion, one by one, in under a second) to give the impression of it all happening simultaneously. Similarly, a kernel will allocate areas of empty memory for a given program to store its temporary information and run on, but may also shift some rarely-accessed memory from RAM (where it is accessible) to hard disk (where it isn’t) to free up more space- this is how computers with very little free memory space run programs, and the time taken to do this for large amounts of data is why they run so slowly. It must also cope when a program needs to access data from another part of the computer that has not been specifically allocated to that program.
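The scheduler’s trick can be sketched in a few lines of Python. The process names and time-slice size here are invented for illustration, and a real scheduler juggles priorities, interrupts and much else besides:

```python
# Round-robin scheduling: give each process a tiny slice of CPU time in
# turn, fast enough that everything appears to run simultaneously.
from collections import deque

def schedule(processes, slice_size=1):
    """processes: {name: units of work left}. Returns the order of execution."""
    queue = deque(processes.items())
    history = []
    while queue:
        name, remaining = queue.popleft()
        history.append(name)                 # this process gets the CPU...
        remaining -= slice_size              # ...for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return history

# Interleaved slices, not one job then the other:
print(schedule({"minesweeper": 2, "boot": 3}))
# ['minesweeper', 'boot', 'minesweeper', 'boot', 'boot']
```

Run fast enough- thousands of switches per second rather than one at a time as here- and minesweeper and the boot process genuinely appear to run at once.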

If I knew what I was talking about, I could witter on all day about the functioning of operating systems and the vast array of headache-causing practicalities and features that any OS programmer must consider, but I don’t and as such won’t. Instead, I will simply sit back, pat myself on the back for having actually got around to researching and (after a fashion) understanding all this, and marvel at what strange, confusing, brilliant inventions computers are.

Icky stuff

OK guys, time for another multi-part series (always a good fallback when I’m short of ideas). Actually, this one started out as just an idea for a single post about homosexuality, but when thinking about how much background stuff I’d have to stick in for the argument to make sense, I thought I might as well dedicate an entire post to background and see what I could do with it from there. So, here comes said background: an entire post on the subject of sex.

The biological history of sex must really start by considering the history of biological reproduction. Reproduction is a vital part of the experience of life for all species, a necessary feature for something to be classified ‘life’, and some thinkers hold it to be the only reason a species exists in the first place. In order to be successful by any measure, a species must exist; in order to exist, those of the species who die must be replaced; and in order for this to occur, the species must reproduce. The earliest form of reproduction, occurring amongst the earliest single-celled life forms, was binary fission, a basic form of asexual reproduction whereby the internal structure of the organism is replicated, and it then splits in two to create two organisms with identical genetic makeup. This is an efficient way of expanding a population size very quickly, but it has its flaws. For one thing, it does not create any variation in the genetics of a population, meaning what kills one stands a very good chance of destroying the entire population; all genetic diversity is dependent on random mutations. For another, it is only really suitable for single-celled organisms such as bacteria, as trying to split up a multi-celled organism once all the data has been replicated is a complicated geometric task. Other organisms have tried other methods of reproducing asexually, such as budding in yeast, but about 1 billion years ago an incredibly strange piece of genetic mutation must have taken place, possibly among several different organisms at once. Nobody knows exactly what happened, but one type of organism began requiring the genetic data from two, rather than one, different creatures, and thus was sexual reproduction, both metaphorically and literally, born.

Just about every complex organism alive on Earth today now uses this system in one form or another (although some can reproduce asexually as well, or self-fertilise), and it’s easy to see why. It may be a more complicated system, far harder to execute, but by naturally varying the genetic makeup of a species it makes the species as a whole far more resistant to external factors such as disease- natural selection being demonstrated at its finest. Perhaps its most basic form is that adopted by aquatic animals such as most fish and lobsters- both will simply spray their eggs and sperm into the water (usually as a group at roughly the same time and place to increase the chance of conception) and leave them to mix and fertilise one another. The zygotes are then left to grow into adults of their own accord- a lot are of course lost to predators, representing a huge loss in terms of inputted energy, but the sheer number of fertilised eggs still produces a healthy population. It is interesting to note that this most basic of reproductive methods, performed in a similar manner by plants, is used by such complex animals as fish (although their place on the evolutionary ladder is both confusing and uncertain), whilst supposedly more ‘basic’ animals such as molluscs have some of the weirdest and most elaborate courtship and mating rituals on earth (seriously, YouTube ‘snail mating’. That shit’s weird).

Over time, the process of mating and breeding in the animal kingdom has grown more and more complicated. Exactly why the male testes & penis and the female vagina developed in the way they did is unclear from an evolutionary perspective, but since most animals appear to use a broadly similar system (males have an appendage, females have a depository) we can presume this was just how it started off and things haven’t changed much since. Most vertebrates and insects have distinct sexes and mate via internal fertilisation of a female’s eggs, in many cases by several different males to enhance genetic diversity. However, many species also take the approach that caring for their offspring for some portion of their development is a worthwhile trade-off in terms of energy, given the advantage of giving them the best possible chance in life. This care is generally (but not always- seahorses being perhaps the most notable exception) the role of the mother, males having usually buggered off after mating to leave mother & baby well alone, and such an approach gives a species, especially its females, a vested interest in ensuring their babies are as well-prepared as possible. This manifests itself in the process of a female choosing her partner prior to mating. Natural selection dictates that females who pick males with characteristics that result in successful, survival-capable offspring are more likely to pass on their genes, along with that same attraction to those characteristics, so over time these traits become ‘attractive’ to all females of a species. These traits tend to be strength-related, since strong creatures are generally better at competing for food and such- hence the fact that most pre-mating procedures involve a fight or physical contest of some sort between males to allow the winners to take their pick of available females.
This is also why strong, muscular men are considered attractive to women among the human race, even though these people may not always be the most suitable to father their children for various reasons (although one could counter this by saying that they are more likely to produce children capable of surviving the coming zombie apocalypse). Natural selection is also to blame for the fact that sex is so enjoyable: members of a species who enjoy sex are more likely to perform it more often, making them more likely to conceive and thus pass on their genes- hence the massive hit of endorphins our bodies experience both during and after sexual activity.

Broadly speaking then, we come to the ‘sex situation’ we have now- we mate by sticking penises in vaginas to allow sperm and egg to meet, and women generally tend to pick men they find ‘attractive’ because doing so has traditionally been an evolutionary advantage, as is the fact that we find sex as a whole fun. Clearly, however, the whole situation is a good deal more complicated than just this… but what else is a multi-parter for?