One Foot In Front Of The Other

According to many, the thing that really sets human beings apart from the rest of the natural world is our mastery of locomotion: the ability to move faster, further and with heavier loads than any other creature typically does (never mind that our historical method of doing this was strapping several other animals to a large heap of wood and nails), across every medium our planet has to throw at us: land, sky, sea, snow, whatever. Nowadays, this concept has become associated with our endeavours in powered transport (cars, aeroplanes and such), but the story of human locomotion begins with a far more humble method of getting about, to which I shall dedicate today’s post: walking.

It is thought that the first walkers were creatures roughly approximating to our modern-day crustaceans: the early arthropods. In the early days of multicellular life on earth, these creatures ruled the seas (where all life had thus far been based), and fossils from the time show a wide variety of weird and wonderful creatures. The trilobites that one can nowadays buy as tourist souvenirs in Morocco are but one example; the top predators of the time were massive things, measuring several metres in length, with fearsome mouthparts and layers of armour plate. Almost all of these animals had hard, chitinous exoskeletons, like the modern insects and crustaceans descended from those early arthropods; the exceptions were a few small fish-like creatures, a few millimetres in length, which had developed the first backbones- in time, the descendants of these creatures would come to dominate life on earth. Since it was faster and allowed a greater range of motion, most early arthropods swam to get about; but others, like the metre-long Brontoscorpio (basically a giant underwater scorpion), preferred the slightly slower, but more efficient, business of walking about on the seabed. Here, food was relatively plentiful in the form of small ‘grazers’, and attempting to push oneself through the water was wasteful of energy compared to trundling along the bottom. Before too long, however, a new advantage presented itself: these creatures were able to cross land over short distances to reach prey- by happy coincidence, their primitive ‘lungs’ (which collected dissolved oxygen from water in much the same fashion as modern fish gills, but with a less fragile structure) worked just as well at harvesting oxygen from air as from water, enabling them to survive on land. As plant life began to venture out onto land to better gain access to the air and light needed to survive, so the vertebrates (in the form of early amphibians) and arthropods began to follow the food, until the land was well and truly colonised by walking life forms.

Underwater, walking was significantly easier than on land; water is a far denser fluid than air (hence why we can swim in the former but not the latter), and the increased buoyancy this offered meant that early walkers’ legs did not have to support as much of their body’s weight as they would do on land. This made it easier for them to develop the basic walking mechanic: one foot (or whatever you call the end of a scorpion’s leg) is pressed against the ground, then held stiff and solid as the rest of the body pivots around its joint, moving the creature as a whole forward slightly. In almost all invertebrates, and early vertebrates, the legs are positioned at the side of the body, meaning that as the creature walks it tends to swing from side to side. Invertebrates partially counter this problem by having a lot of legs and stepping them in an order that keeps them travelling in a constant direction, and by having multi-jointed legs that can flex and translate the lateral components of motion into more forward-directed movement. However, this doesn’t work so well at high speed, when the sole priority is how quickly the feet can move, which is why most reconstructions of the movement of vertebrates circa 300 million years ago (with just four single-jointed legs stuck out to the side of the body) tend to show their bodies swinging dramatically from side to side, spine twisting this way and that. This all changed with the coming of the dinosaurs, whose revolutionary evolutionary advantage was a change in the construction of the hip that allowed their legs to point underneath the body, rather than sticking out at the side. Now the pivoting action of the leg produces motion in the vertical, rather than horizontal, direction, so no more spine-twisting mayhem. This makes travelling quickly easier and allows the upper body to be kept in a more stable position, good for striking at fleeing prey, as well as being more energy efficient. Such an evolutionary advantage would prove so significant that, during the late Triassic period, it allowed dinosaurs to completely take over from the mammal-like reptiles who had previously dominated the world. It would take more than 150 million years, a hell of a lot of evolution and a frickin’ asteroid to let these creatures’ descendants, in the form of mammals, finally prevail over the dinosaurs (by which time they had discovered the whole ‘legs pointing down’ trick).

When humankind was first trying to develop walking robots in the mid-twentieth century, the mechanics of the process were poorly understood, and there are a great many funny videos of prototype sets of legs completely failing. Their designers had been operating under the idea that the role of the legs when walking was not just to keep a body standing up, but also to propel it forward, each leg pulling on the rest of the body when placed in front. However, after careful study of slow-motion footage of bipedal motion, it was realised that this was not the case at all, and that we instead have gravity to thank for pushing us forward. When we walk, we actually lean over our frontmost foot, in effect falling over it before sticking our other leg out to catch ourselves, hence why we tend to go face-first to the floor if that other leg gets caught or stuck. Our legs mainly serve to keep us off the ground, pushing us upwards so we don’t actually fall over, and our leg muscles’ main job here is simply to put each foot in front of the other (OK, so your calves might give you a bit of an extra flick, but it’s not the key thing). When we run or climb, our motion changes: our legs bend before our quadriceps extend them quickly, throwing us forward. Here we lean forward still further, but this is so that the push from our quads is directed forwards rather than upwards. This form of motion is less energy efficient, but covers more ground. It is the method by which we run, but it does not define running itself; running is simply defined as a gait in which every step incorporates a bit of time where both feet are off the ground. Things get a little more complicated when we introduce more legs to the equation; for four-legged animals such as horses, there are four main gaits. When walking there are always two or three feet on the ground at any one time, when trotting generally two, when cantering at least one, and at the gallop there is a moment in every stride when all four feet are off the ground.
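
For anyone who likes their definitions precise, that last distinction is simple enough to write down as a few lines of Python- the ‘samples of feet on the ground’ below are invented numbers, purely to illustrate the rule, not real gait data.

```python
# A tiny illustration of the definition of running given above: a gait counts
# as running if the stride includes a 'flight phase' with no feet on the
# ground at all. The sampled numbers below are made up purely for example.

def classify_gait(feet_on_ground):
    """Classify a biped's gait from samples of how many feet are touching
    the ground at successive instants during one stride."""
    return "running" if min(feet_on_ground) == 0 else "walking"

print(classify_gait([2, 1, 1, 2, 1, 1]))   # walking: a foot is always planted
print(classify_gait([1, 1, 0, 1, 0, 1]))   # running: brief moments of 'flight'
```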

There is one downside to walking as a method of locomotion, however. When blogging about it, there isn’t much of a natural way to end a post.

The Churchill Problem

Everybody knows about Winston Churchill- he was about the only reason that Britain’s will to fight didn’t crumble during the Second World War, his voice and speeches are some of the most iconic of all time, and his name and mannerisms have been immortalised by a cartoon dog selling insurance. However, some of his postwar achievements are often overlooked- after the war he was voted out of the office of Prime Minister in favour of a revolutionary Labour government, but he returned to office in the 1950s when the Tories returned to power. He didn’t do quite as well this time round- Churchill was a shameless warmonger who had nearly annihilated his own reputation during the First World War by ordering a disastrous assault on Gallipoli in Turkey, and didn’t do much to help it by treating everything between the two wars as the prelude to another one- but it was during this time that he made one of his least-known but most interesting speeches. In it he envisaged a world in which the rapidly accelerating technological advancement of his age would see most of the meaningful work done by machines, changing our concept of the working week in the process. He suggested that we would one day be able to “give the working man what he’s never had – four days’ work and then three days’ fun”- in other words, Winston Churchill was among the first to suggest the concept of a three-day weekend.

This was at a time when the very concept of the weekend was itself quite a new one- the original idea of one part of the week being dedicated to not working comes, of course, from the Sabbath days adopted by most religions. The idea of no work being done on a Sunday is, in the Western and therefore historically Christian world, an old one, but the idea of extending it to Saturday as well is far newer. This was partly motivated by the increased proportion and acceptance of Jewish workers, whose day of rest fell on Saturday, and was also part of a general trend of decreasing work hours during the early 1900s. It wasn’t until 1938 that the five-day working week was enshrined in US law, and that appeared to be the start of a downward trend in working hours as trade unions gained power, workers got more free time, and machines did all the important stuff. All of this appeared to lead to Churchill’s promised world- a world of the four-day working week and perhaps, one day, a life in the lap of luxury whilst we let computers and androids do everything.

However, recently things have started to change. The trend of shortening working hours and an increasingly stress-free existence has reversed, with the average working week getting dramatically longer- since 1970, the number of hours worked per capita has risen by 20%. A survey done a couple of winters ago found that we spend an average of just 15 hours and 17 minutes of our weekend out of the work mindset (between 12:38am and 3:55pm on Sunday, at which point we start worrying about Monday again), and that over half of us are too tired to enjoy our weekends properly. Given that this was a survey conducted by a hotel chain it may not be an entirely representative sample, but you get the idea. The weekend itself is in some ways under threat, and Churchill’s vision is disappearing fast.

So what’s changed since the 50s (other than transport, communications, language, technology, religion, science, politics, the world, warfare, international relations, and just about everything else)? Why have we suddenly ceased to favour rest over work? What the hell is wrong with us?

To an extent, some of the figures are anomalous- the employment of women has increased drastically in the last 50 years, and with it the percentage of the population who are in work at all, which inflates the per-capita figures. But this is not enough to explain away all of the stats relating to ‘the death of the weekend’. Part of the issue is judgemental. Office environments can be competitive places, and can quickly develop into mindsets where our emotional investment is in the compiling of our accounts document or whatever. In such an environment, people’s priorities become more focused on work, and somebody taking an extra day off at the weekend would just seem like laziness- especially to the boss, who has deadlines to meet, really doesn’t appreciate slackers, and also happens to control your salary. We also, of course, judge ourselves, unwilling to feel as if we are letting the team down and causing other people inconvenience. There’s also the problem of boredom- as any schoolchild will tell you, the first few days of holiday after a long term are blissful relaxation, but it’s only a matter of time before a parent hears that dreaded phrase: “I’m booooooored”. The same could easily apply to having nearly half of every single week off. But these are features of human nature, which certainly hasn’t changed in the past 50 years, so what could the root of the change in trends be?

The obvious place to start when considering this is the change in the work itself over this time. The last half-century has seen Britain’s manufacturing economy spiral downwards, as more and more of us lay down tools and pick up keyboards- the current ‘average job’ for a Briton involves working in an office somewhere. Probably in Sales, or Marketing. This kind of job chiefly involves working our minds, crunching numbers and thinking through figures, which makes it far harder for us to ‘switch off’ from our work mentality than if the day were centred on how much our muscles hurt. It also makes it far easier to justify staying for overtime to ‘just finish that last bit’, partly because we aren’t physically tired, and partly because the work given to an office worker is more likely to be centred around individual mini-projects than around punching rivets or minding a machine for hours on end. And of course, as some of us start to stay longer, so our competitive instinct causes the rest of us to follow suit.

In the modern age, switching off from the work mindset has been made even harder by the invention of the laptop and, especially, the smartphone. The laptop allowed us to check our emails or work on a project at home, on a train or wherever we happened to be- the smartphone has allowed us to keep in touch with work at every single waking moment of the day, making it very difficult for us to ‘switch work off’. It has also made it far easier to work at home, which for the committed worker can make it even harder to formally end the day when there are no colleagues or bosses telling you it’s time to go home. This constant connectivity is thought to keep the brain’s stress and reward chemistry (adrenaline and dopamine) permanently ticking over, which over time can leave the pre-frontal cortex frazzled and a person feeling drained and unfocused- obvious signs of being overworked.

Then there is the issue of competition. In the past, competition for work would usually have been limited to a few other firms in the local area- in the grand scheme of things, this could perhaps be scaled up to cover an entire country. The existence of trade unions helped prevent this competition from causing problems- if everyone is desperate for work, as occurred with depressing regularity during the Great Depression in the USA, they keep trying to offer their services as cheaply as possible to try and bag the job, but if a trade union can be used to settle and standardise wages then this effect is halted. However, in the current interconnected age, competition in big business can come from all over the world. To guarantee that they keep their jobs, people have to try to work as hard as they can for as long as they can, lengthening the working week still further. And since trade unions are generally limited to a single country, their power in this situation is rather restricted.

So, that’s the trend as it is- but is it feasible that we will ever live the life of luxury, with robots doing all our work, that seemed the summit of Churchill’s thinking? In short: no. Whilst a three-day weekend is perhaps not unfeasible, I just don’t think human nature would allow us to laze about all day, every day for the whole of our lives and do absolutely nothing with it, if only for the reasons explained above. Plus, constant rest would simply desensitise us to it, rest becoming so normal that we could no longer envisage the concept of work at all. Thus, all the stresses that were once taken up with work worries would simply be transferred to ‘rest worries’, leaving us no happier after all and defeating the purpose of having all the rest in the first place. In short, we need work to enjoy play.

Plus, if robots ran everything and nobody worked them, it’d only be a matter of time before they either all broke down or took over.

The Inevitable Dilemma

And so, today I conclude this series of posts on the subject of alternative intelligence (man, I am getting truly sick of writing that word). So far I have dealt with the philosophy, the practicalities and the fundamental nature of the issue, but today I tackle arguably the biggest and most important aspect of AI- the moral side. The question is simple- should we be pursuing AI at all?

The moral arguments surrounding AI are a mixed bunch. One of the biggest is the argument that is being thrown at a steadily wider range of high-level science nowadays (cloning, gene analysis and editing, even the synthesis of new artificial proteins)- that the human race does not have the moral right, experience or ability to ‘play god’ and modify the fundamentals of the world in this way. Our intelligence, and indeed our entire way of being, has been slowly sculpted and built upon by nature over millions of years of evolution to find a near-optimal solution for self-preservation and general well-being- this much scientists will all accept. However, this argument contends that the relentless onward march of science is simply happening too quickly, and that the constant demand to make the next breakthrough, to do the next big thing before everybody else, means that nobody is stopping to think of the morality of creating a new species of intelligent being.

This argument is put around a lot with issues such as cloning or culturing meat, and it’s probably not helped matters that it is typically put around by the Church- never noted as getting on particularly well with scientists (they just won’t let up about bloody Galileo, will they?). However, just think about what could happen if we ever do succeed in creating a fully sentient computer. Will we all be enslaved by some robotic overlord (for further reference, see The Matrix… or any other of the myriad sci-fi flicks based on the same idea)? Will we keep on pushing and pushing to greater endeavours until we build a computer with intelligence on all levels infinitely superior to that of the human race? Or will we turn robot-kind into a slave race- more expendable than humans, possibly with programmed subservience? Will we have to grant them rights and freedoms just like us?

Those last points present perhaps the other big dilemma concerning AI from a purely moral standpoint- at what point will AI blur the line between being merely a machine and being a sentient entity worthy of all the rights and responsibilities that entails? When will a robot be able to be considered responsible for its own actions? When will we be able to charge a robot as the perpetrator of a crime? So far, the few people who have been killed by robots have all died in industrial accidents (the first at a car manufacturing plant), but if such an event were ever to occur with a sentient robot, how would we punish it? Should it be sentenced to life in prison? If in Europe, would the laws against the death penalty prevent a sentient robot from being ‘switched off’? The questions are boundless, but if the current progression of AI is able to continue until sentient AI is produced, then they will have to be answered at some point.

But there are other, perhaps more worrying issues to confront surrounding advanced AI. The most obvious non-moral opposition to AI comes from an argument that has been made in countless films over the years, from Terminator to I, Robot- namely, the potential that if robot-kind are ever able to equal or even better our mental faculties, then they could one day overthrow us as a race. This is a very real concern when confronting the stereotypical image of a war robot- an invincible metal machine capable of wanton destruction on a par with a medium-sized tank, and easily able to repair itself and make more of itself. It’s an idea that is reasonably unlikely to ever become real, but it raises another possibility- one that is more likely to happen, more likely to build unnoticed, and far, far more scary. What if the human race, fragile little blobs of fairly dumb flesh that we are, were ever to be totally superseded as an entity by robots?

This, for me, is the single most terrifying aspect of AI- the idea that I may one day become obsolete, an outdated model, a figment of the past. Compared to a machine’s ability to churn out hundreds of copies of itself simply from a blueprint and a design, the human reproductive system suddenly looks very fragile and inefficient. Compared to tough, hard, flexible modern metals and plastics that can be replaced in minutes, our mere flesh and blood starts to seem delightfully quaint. And if the whirring numbers of a silicon chip are ever able to become truly intelligent, then their sheer processing capacity makes our brains seem like outdated antiques- suddenly, the organic world doesn’t seem quite so amazing, and certainly seems more defenceless.

But could this ever happen? Could this nightmare vision of the future where humanity is nothing more than a minority race among a society ruled by silicon and plastic ever become a reality? There is a temptation from our rational side to say of course not- for one thing, we’re smart enough not to let things get to that stage, and that’s if AI even gets good enough for it to happen. But… what if it does? What if they can be that good? What if intelligent, sentient robots are able to become a part of a society to an extent that they become the next generation of engineers, and start expanding upon the abilities of their kind? From there on, one can predict an exponential spiral of progression as each successive and more intelligent generation turns out the next, even better one. Could it ever happen? Maybe not. Should we be scared? I don’t know- but I certainly am.

Artificial… what, exactly?

OK, time for part 3 of what I’m pretty sure will finish off as 4 posts on the subject of artificial intelligence. This time, I’m going to branch off-topic very slightly- rather than just focusing on AI itself, I am going to look at a fundamental question that the hunt for it raises: the nature of intelligence itself.

We all know that we are intelligent beings, and thus the search for AI has always been focused on attempting to emulate (or possibly better) the human mind and our human understanding of intelligence. Indeed, when Alan Turing first proposed the Turing test (see Monday’s post for what this entails), it was specifically a test of a machine’s ability to imitate human conversational and interaction skills. However, as mentioned in my last post, the modern-day approach to creating intelligence is to try and let robots learn for themselves, in order to minimise the amount of programming we have to give them and thus to come closer to genuinely artificial, rather than merely programmed, intelligence. But this learning process raises an intriguing question- if we let robots learn for themselves entirely from base principles, could they begin to create entirely new forms of intelligence?

It’s an interesting idea, and one that leads us to question what, on a base level, intelligence actually is. When we think about it, we begin to realise the vast scope of ideas that ‘intelligence’ covers, and that is speaking merely from the human perspective. From emotional intelligence to sporting intelligence, from creative genius to pure mathematical ability (where computers themselves excel far beyond the scope of any human), intelligence is an almost pointlessly broad term.

And then, of course, we can question exactly what we mean by a form of intelligence. Take bees, for example- on its own, a bee is a fairly useless creature that is most likely to just buzz around a little. Not only is it useless, but it is also very, very dumb. However, a hive, where bees are not individuals but a collective, is a very different matter- the coordinated movements of many thousands of bees can not only build huge nests and turn nectar into the liquid deliciousness that is honey, but can also defend the nest from attack, ensure the survival of the queen at all costs, and ensure that there is always someone to deal with the newborns despite the constant activity of the environment surrounding them. Many corporate or otherwise collective structures can claim to work similarly, but few are as efficient or versatile as a beehive- and, more astonishingly, bees can exhibit an extraordinary range of intelligent behaviour as a collective, far beyond what an individual could even comprehend. Bees are the archetype of a collective, rather than individual, mind, and nobody is entirely sure how such a structure is able to function as it does.
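
To give a flavour of how dumb individuals can add up to a clever collective, here’s a toy Python simulation- the ‘advertising’ rule and all the numbers in it are completely made up for illustration, and real bee recruitment (waggle dances and so on) is far more sophisticated than this sketch.

```python
import random

# Toy model of collective decision-making: each 'bee' follows one trivial rule
# (prefer the food source that has been advertised more), yet the hive as a
# whole reliably converges on the richer of two sources. The rules and numbers
# here are invented purely for illustration.
QUALITY = {"A": 0.3, "B": 0.7}   # source B is genuinely better
adverts = {"A": 1, "B": 1}       # how strongly each source is currently 'advertised'

random.seed(0)
for _ in range(2000):
    # each bee picks a source in proportion to the current advertising...
    total = adverts["A"] + adverts["B"]
    source = "A" if random.random() < adverts["A"] / total else "B"
    # ...and advertises it further only if its own visit went well
    if random.random() < QUALITY[source]:
        adverts[source] += 1

print(adverts)   # advertising for B ends up dominating: a 'decision' no single bee made
```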

Clearly, then, we cannot hope to pigeonhole or quantify intelligence as a single measurement- people may boast of their IQ scores, but a single number cannot represent their intelligence across the full spectrum. Now, consider all these different aspects of intelligence, all the myriad ways that we can be intelligent (or not). And ask yourself: have we covered all of them?

It’s another compelling idea- that there are some forms of intelligence out there that our human forms and brains simply can’t envisage, let alone experience. What these may be like… well, how the hell should I know, I just said we can’t envisage them. This idea that we simply won’t be able to understand what they could be like, even if we ever encounter them, can be a tricky one to get past (a similar problem is found in quantum physics, whose violation of common logic takes some getting used to), and there is a real risk that if we do ever encounter these ‘alien’ forms of intelligence, we won’t be able to recognise them for that very reason. However, if we are able to do so, it could fundamentally change our understanding of the world around us.

And, to drag this post kicking and screaming back on topic, our current development of AI could be a mine of potential to do this in (albeit a mine in which we don’t know what we’re going to find, or if there is anything to find at all). We all know that computers are fundamentally different from us in a lot of ways, and in fact it is very easy to argue that trying to force a computer to be intelligent beyond its typical, logical parameters is rather a stupid task, akin to trying to use a hatchback to tow a lorry. In fact, quite a good way to think of computers or robots is like animals, only adapted to a different environment to us- one in which their food comes via a plug and information comes to them via raw data and numbers… but I am wandering off-topic once again. The point is that computers have, for as long as the hunt for AI has gone on, been our vehicle for attempting to reach it- and only now are we beginning to fully understand that they have the potential to do so much more than just copy our minds. By pushing them onward and onward to the point they have currently reached, we are starting to turn them not into an artificial version of ourselves, but into an entirely new concept, an entirely new, man-made being.

To me, this is an example of true ingenuity and skill on the part of the human race. Copying ourselves is no more inventive, on a base level, than making iPod clones or the like. Inventing a new, artificial species… like it or loathe it, that’s amazing.

The Problems of the Real World

My last post on the subject of artificial intelligence was something of a philosophical argument about its nature- today I am going to adopt a more practical perspective, and have a go at scratching the surface of the monumental challenges that the real world poses to the development of AI- and, indeed, how they are (broadly speaking) being solved.

To understand the issues surrounding the AI problem, we must first consider what, in the strictest sense of the matter, a computer is. To quote… someone, I can’t quite remember who: “A computer is basically just a dumb adding machine that counts on its fingers- except that it has an awful lot of fingers and counts terribly fast”. This rather simplistic model is in fact rather good at explaining exactly what it is that computers are good and bad at- they are very good at numbers, data crunching, the processing of information. Information is the key thing here- if something can be inputted into a computer purely in terms of information, then the computer is perfectly capable of modelling and processing it with ease- which is why a computer is very good at playing games. Even real-world problems that can be expressed in terms of rules and numbers can be converted into a computer-recognisable format and mastered with ease, which is why computers make short work of things like ballistics modelling (computing gunnery tables was among the US military’s first uses for them) and logical games like chess.
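
To illustrate the sort of rules-and-numbers problem a computer eats for breakfast, here’s a quick Python sketch that churns out a crude ‘gunnery table’ from nothing more than schoolbook projectile motion- real firing tables account for air resistance, wind and plenty else, so treat the numbers purely as an illustration.

```python
import math

# A crude 'gunnery table': the range of a shell fired at a given speed and
# elevation, ignoring air resistance entirely. This is only the schoolbook
# formula, not how real ballistics tables were actually computed.
g = 9.81                 # gravitational acceleration, m/s^2
muzzle_velocity = 300.0  # m/s, an arbitrary example value

print("elevation (deg)  range (m)")
for elevation in range(5, 90, 5):
    theta = math.radians(elevation)
    flight_range = muzzle_velocity ** 2 * math.sin(2 * theta) / g
    print(f"{elevation:>14}  {flight_range:>9.0f}")
```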

However, where a computer develops problems is in the barrier between the real world and the virtual. One must remember that the actual ‘mind’ of a computer itself is confined exclusively to the virtual world- the processing within a robot has no actual concept of the world surrounding it, and as such is notoriously poor at interacting with it. The problem is twofold- firstly, the real world is not a mere simulation, where rules are constant and predictable; rather, it is an incredibly complicated, constantly changing environment where there are a thousand different things that we living humans keep track of without even thinking. As such, there are a LOT of very complicated inputs and outputs for a computer to keep track of in the real world, which makes it very hard to deal with. But this is merely a matter of grumbling over the engineering specifications and trying to meet the design brief of the programmers- it is the second problem which is the real stumbling block for the development of AI.

The second issue is related to the way a computer processes information- bit by bit, without any real grasp of the big picture. Take, for example, the computer monitor in front of you. To you, it is quite clearly a screen- the most notable clue being the pretty pattern of lights in front of you. Now, turn your screen slightly so that you are looking at it from an angle. It’s still got a pattern of lights coming out of it, it’s still the same colours- it’s still a screen. To a computer, however, if you were to line up two pictures of your monitor from two different angles, it would be completely unable to realise that they were the same screen, or even that they were the same kind of object. Because the pixels are in a different order, the data is different, and so the two pictures are completely different- the computer has no concept of the idea that the two patterns of lights are the same basic shape, just seen from different angles.
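
If you want to see just how ‘different’ two near-identical pictures look to a computer, here’s a tiny Python/NumPy sketch- the ‘photograph’ is just random noise standing in for a real image, but the point stands: nudge it sideways by a few pixels and almost nothing matches byte-for-byte.

```python
import numpy as np

# Why 'the same screen from a different angle' looks unrelated to a computer:
# compare an image with a slightly shifted copy of itself, pixel by pixel.
# The 'image' here is random noise standing in for any real photograph.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(100, 100))
shifted = np.roll(image, shift=3, axis=1)   # same content, nudged 3 pixels sideways

matching = np.mean(image == shifted)
print(f"pixels that agree exactly: {matching:.1%}")   # close to pure-chance levels
```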

There are two potential solutions to this problem. Firstly, the computer can look at the monitor and store an image of it from every conceivable angle against every conceivable background, so that it would be able to recognise it anywhere, from any viewpoint- this would, however, take up a library’s worth of memory space and be stupidly wasteful. The alternative requires some cleverer programming: by teaching the computer to spot patterns of pixels that look roughly similar (shifted along by a few bytes, say, or missing a few here and there), it can be ‘trained’ to pick out basic shapes, and by using an algorithm to pick out sharp changes in colour (an old trick that’s been used for years to clean up photos), the edges of objects can be identified and the objects themselves separated out. I am not by any stretch of the imagination an expert in this field, so I won’t go into details, but by this basic method a computer can begin to step back and look at the pattern of a picture as a whole.
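
As a taste of that ‘look for changes in colour’ trick, here’s a minimal Python/NumPy sketch that picks out the outline of a bright square on a dark background- real edge-detection algorithms (Sobel filters and the like) are rather more refined, so this shows only the basic idea, and the threshold is an arbitrary choice of mine.

```python
import numpy as np

# A minimal version of the 'look for sharp changes in colour' idea: build a
# toy greyscale image of a bright square on a dark background, then flag any
# pixel whose brightness differs sharply from its right-hand or lower neighbour.
image = np.zeros((10, 10))
image[3:7, 3:7] = 1.0                 # the 'object': a bright square

dx = np.abs(np.diff(image, axis=1))   # horizontal brightness changes
dy = np.abs(np.diff(image, axis=0))   # vertical brightness changes
edges = np.zeros_like(image, dtype=bool)
edges[:, :-1] |= dx > 0.5             # arbitrary threshold for a 'sharp' change
edges[:-1, :] |= dy > 0.5

for row in edges.astype(int):
    print("".join("#" if v else "." for v in row))   # the square's outline appears
```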

But all that information inputting, all that work… so your computer can identify just a monitor? What about all the myriad other things our brains can recognise with such ease- animals, buildings, cars? And we haven’t even got on to differentiating between different types of things yet… how will we ever match the human brain?

This presented a big setback for the development of modern AI- so far we have been able to develop AI that allows one computer to handle a few real-world tasks or applications very well (and in some cases, depending on the task’s suitability to the computational mind, better than humans), but scientists and engineers face a monumental challenge in trying to come close to the human mind (let alone its body) in anything like the breadth of tasks it is able to perform. So they went back to basics, and began to think about exactly how humans are able to do so much stuff.

Some of it can be put down to instinct, but then came the idea of learning. The human mind is especially remarkable in its ability to take in new information and learn new things about the world around it- and then take this new-found information and try to apply it to our own bodies. Not only can we do this, but we can also do it remarkably quickly- it is one of the main traits which has pushed us forward as a race.

So this is what inspires the current generation of AI programmers and roboticists- the idea of building into a robot’s design a capacity for learning. The latest generation of the Japanese ‘ASIMO’ robots can learn what various objects presented to them are, and can then recognise those objects when shown them again- as well as having the best-functioning humanoid chassis of any existing robot, able to run and climb stairs. Perhaps more exciting is a pair of robots currently under development that start pretty much from first principles, just like babies do- first they are presented with a mirror and learn to manipulate their leg motors in such a way as to stand up straight and walk (although they aren’t quite so good at picking themselves up if they fail in this endeavour). They then face one another and begin to demonstrate and repeat actions to one another, giving each action a name as they do so. In doing this they build up an entirely new, if unsophisticated, language with which to make sense of the world around them- currently this covers only actions, but who knows what lies around the corner…
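
To give a rough sense of how a shared vocabulary can emerge from nothing but demonstration and repetition, here’s a toy Python sketch of the idea- to be clear, this is my own illustration of the concept (a very stripped-down ‘naming game’), not the actual learning algorithm those robots use.

```python
import random

# A toy version of the 'demonstrate an action and give it a name' idea above:
# two agents take turns naming actions, and the listener adopts the speaker's
# word whenever it has none of its own. Only a sketch of the concept.
ACTIONS = ["wave", "step", "turn", "crouch"]

def invent_word():
    return "".join(random.choice("abcdego") for _ in range(4))

random.seed(1)
lexicons = [dict(), dict()]   # each agent's private action -> word mapping
for _ in range(20):
    speaker, listener = random.sample([0, 1], 2)
    action = random.choice(ACTIONS)
    word = lexicons[speaker].setdefault(action, invent_word())  # name it, inventing if needed
    lexicons[listener].setdefault(action, word)                 # the listener adopts that name

print(lexicons[0])
print(lexicons[1])   # after enough rounds the two vocabularies agree on shared names
```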