The Sting

I have twice before used this blog to foray into the strange world of film reviewing; it’s something I enjoy, given that I enjoy cinema, but am usually unable to make a stable source of material, since I don’t generally have the time (or, given a lot of the films that get released in my local cinema, the inclination) to see too many of them. My first foray was a rather rambling (and decidedly rubbish) examination of The Hunger Games, with a couple of nods to the general awesomeness of The Shawshank Redemption, whilst I felt compelled to write my second just to articulate my frustration after seeing The Dark Knight Rises. Today, I wish to return to the magical fairy kingdom of the big screen, this time concerning something that I would ordinarily never have seen at all: 70s crime flick ‘The Sting’.

The Sting is quite clearly a film from another era of filmmaking; I am not old enough to remember the times when a stock ‘thump’ sound bite was inserted into the footage every time an object was put down on a table, but this film contains such cinematic anachronisms in spades. Similarly, this is the first film I have ever seen starring Robert Redford and my first from director George Roy Hill, but age should be no barrier to quality entertainment if it’s there to shine through, and thankfully its basic plot and premise lend it to a graceful ageing process.

The plot can be fairly summarily described as uncomplicated; a young confidence trickster who ends up accidentally making a small fortune from a fairly routine con is pursued by the mob boss whose money he has now lost, so teams up with an experienced ‘old head’ to bring him down. So Ocean’s Eleven with a simpler character base and more realistic motivations. Where the two differ, however, is in their dedication to their subject material; whilst the Ocean’s films are generally content to follow some rather formulaic Hollywood scriptwriting, placing their emphasis heavily on interpersonal relationships and love interests, The Sting goes out of its way to be a true crime story to its very core. Set in the golden age of organised crime (1930s Prohibition-era Illinois, real-life home of Al Capone) with a memorable ragtime soundtrack to match, every stage (illustrated explicitly through the use of old-fashioned title cards) of the film’s overarching ‘big con’ plot takes the form of a classic confidence trick, from an old-fashioned money switch to a large-scale rigged betting house, incorporating along the way possibly the finest-played (and finest-cheated) game of poker ever to appear on screen. Every feature, facet and subplot, from the cheated cop to the seemingly out-of-place love interest, has its place in the big con, and nothing is there without a very good reason. Not only does this create a rollercoaster of a focused, central plot without unnecessary distractions, but the authenticity of the tricks, characters and terminology used builds a believable, compelling world to immerse oneself in and enjoy. Combine that with a truly stellar portrayal of the seen-it-all genius conman Henry Gondorff by Paul Newman, and Robert Redford’s evident gift for building a very real, believable character in the form of naive youngster Johnny Hooker, and we have the makings of an incredibly immersive story that you often have to remind yourself isn’t actually real.

However, by putting such focus on its central con, The Sting puts itself under an awful lot of pressure, for without any extraneous components (hell, there aren’t even any proper action scenes, despite the not infrequent bouts of gunfire) it has nowhere to fall back on if its central plot fails. Thus, the success of the film very much rests on the success of the con it centres around, not just in terms of the execution itself but in making that execution fit its style. The Sting is not about coming up with something on the fly, about something unexpected happening and winning through on the day- it is an homage to planning, to the skill of the con, of hooking in the mark and making them think they’ve won before turning over the ace in the hole. To turn successful planning- what was intended to happen, happening- into compelling drama is a task indeed for a filmmaker.

And yet, despite all the odds, The Sting pulls it off, thanks to the extraordinary depth director Hill packs into his seemingly simplistic plot. Each subplot put into play is like another dot added to the puzzle, and it is left to the viewer to try and join them all to formulate the finished picture- or alternatively to sit back and watch the film do so with staggering aplomb. Every element is laid out on the table, everyone can see the cards, and it’s simply a matter of the film being far smarter than you are in revealing how it pulls its trick, just like a conman and his mark. You, the viewer, have been stung just as much as Robert Shaw’s mob boss of a mark, except that you can walk out of the room with your wallet full and a smile on your face.

This is not to say that the film doesn’t have problems. Whilst the basic premise is simple and well-executed enough to be bulletproof, its ‘setup’ phase (as the title cards call it) spends an awful lot of time on world-, scenario- and character-building, filling the early parts of the film with enough exposition to make me feel decidedly lukewarm about it- it’s all necessary to remove plot holes and to build the wonderful air of depth and authenticity, but something about its execution strikes me as clunky. It also suffers from Inception’s problem of being potentially confusing to anyone not keeping very close track of what’s going on, and one or two of the minor characters have enough of a role to be significant but not enough characterisation to seem especially real. That said, this film won seven Oscars for a reason, and regardless of how slow it may seem to begin with, it’s definitely worth sticking it out to the end. I can promise you it will be worth it.


A Continued History

This post looks set to at least begin by following on directly from my last one- that dealt with the story of computers up to Charles Babbage’s difference and analytical engines, whilst this one will try to follow the history along from there until as close to today as I can manage, hopefully getting in a few of the basics of the workings of these strange and wonderful machines.

After Babbage’s death as a relatively unknown and unloved mathematician in 1871, the science of computing continued to tick over. A Dublin accountant named Percy Ludgate, working independently of Babbage, developed his own programmable, mechanical computer at the turn of the century, but his design fell into a similar degree of obscurity and added little new to the field. Mechanical calculators had become viable commercial enterprises, getting steadily cheaper and cheaper, and technological exercises were becoming ever more sophisticated with the invention of the analogue computer. These were, basically, less programmable versions of the difference engine- mechanical devices whose various cogs and wheels were so connected up that they would perform one specific mathematical function on a set of data. James Thomson built the first in 1876; it could solve differential equations by integration (a fairly simple but undoubtedly tedious mathematical task), and later developments were widely used to process military data and to solve problems concerning numbers too large to handle by human numerical methods. For a long time, analogue computers were considered the future of computing, but since they solved and modelled problems using physical phenomena rather than data, they were restricted in capability to their original setup.

A perhaps more significant development came in the late 1880s, when an American named Herman Hollerith invented a method of machine-readable data storage in the form of cards punched with holes. Such cards had been around for a while, acting rather like programs- the holed paper reels of a pianola, say, or the punched cards used to automate the workings of a loom- but this was the first example of their being used to store data (although Babbage had theorised such an idea for the memory systems of his analytical engine). They were cheap, simple, could be both produced and read easily by a machine, and were even simple to dispose of. The pattern of holes on a card could be ‘read’ by a mechanical device with a set of levers that would drop through any hole present, turning the appropriate cogs to tell the machine to count up one. Hollerith’s team later went on to process the data of the 1890 US census, and the company he founded would eventually become part of IBM. This punched-card system carried on being used on IBM systems right up until the 1980s, and could be argued to be the first programming language.
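The hole-counting idea is simple enough to sketch in a few lines. The sketch below is entirely my own illustration (the function name, the card layout and the example data are invented, not taken from Hollerith’s actual designs): each card is modelled as a row of hole positions, and a counter ticks up for each hole a lever falls through.

```python
# Toy model of punched-card tabulation: one counter per column,
# incremented whenever a card has a hole punched in that column.
from collections import Counter

def tabulate(cards):
    """Count, per column, how many cards have a hole punched there."""
    totals = Counter()
    for card in cards:
        for column, punched in enumerate(card):
            if punched:
                totals[column] += 1
    return totals

# Three cards, four columns (think of each column as a census category).
cards = [
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
]
print(tabulate(cards))  # columns 0, 1 and 3 each counted twice
```

The point of the mechanical version was exactly this: the tallying required no human reading at all, only levers and cogs.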

However, to see the story of the modern computer truly progress we must fast-forward to the 1930s. Three interesting people and achievements came to the fore here. In 1937 George Stibitz, an American working at Bell Labs, built an electromechanical calculator that was the first to process data using on/off binary electrical signals, making it the first digital calculator. In 1936, a bored German engineering student called Konrad Zuse dreamt up a method for processing his tedious design calculations automatically rather than by hand; to this end he devised the Z1, a table-sized calculator that could be programmed to a degree via perforated film and that also operated in binary. His parts could not be engineered precisely enough for it ever to work properly, but he kept at it, eventually building three more models and devising the first programming language. However, perhaps the most significant figure of 1930s computing was a young, homosexual, English maths genius called Alan Turing.

Turing’s first contribution to the computing world came in 1936, when he published a revolutionary paper showing that certain computing problems cannot be solved by any general algorithm. A key feature of this paper was his description of a ‘universal computer’, a machine capable of executing programs by reading and manipulating a set of symbols on a strip of tape. The symbol currently being read would determine whether the machine moved up or down the strip, how far, and what it changed the symbol to, and Turing proved that one of these machines could replicate the behaviour of any computer algorithm- and since computers are just devices running algorithms, such a machine can replicate any modern computer too. Thus, if a Turing machine (as they are now known) could theoretically solve a problem, then so could a general algorithm, and vice versa. These machines not only laid the foundations for computability and computation theory, on which nearly all of modern computing is built, but were also revolutionary as the first machines theorised to use the same medium for both data storage and programs, as nearly all modern computers do. This concept is known as a von Neumann architecture, after the man who first pointed out and explained the idea in response to Turing’s work.
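The read-symbol, write-symbol, move-along-the-tape loop described above is small enough to simulate directly. The following is a minimal sketch of my own (the function name, the ‘_’ blank symbol and the example ‘flipper’ machine are all invented for illustration, not Turing’s notation):

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol read) -> (symbol to write, move direction, next state).
def run_turing_machine(transitions, tape, state="start", steps=1000):
    """Run until the machine enters 'halt' or the step budget runs out."""
    tape = dict(enumerate(tape))  # sparse tape; missing cells are blank
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")  # '_' stands for a blank cell
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, then halt on reaching a blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # -> 0100
```

Despite its tininess, this is the whole mechanism: everything a modern computer does can, in principle, be encoded as a (vastly larger) transition table of this kind.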

Turing machines contributed one further, vital concept to modern computing- that of Turing-completeness. A Turing-complete system is one capable of replicating the behaviour of any theoretically possible Turing machine (Turing himself described a single ‘universal’ machine able to do exactly this), and thus of executing any possible algorithm or computable sequence. Charles Babbage’s analytical engine would have fallen into that class had it ever been built, in part because it was capable of the ‘if X then do Y’ logical reasoning that characterises a computer rather than a calculator. Ensuring the Turing-completeness of a system is a key part of designing a computer system or programming language, since it guarantees the versatility to perform any task that could be required of it.

Turing’s work had laid the foundations for nearly all the theoretical science of modern computing- now all the world needed was machines capable of performing the practical side of things. However, by 1942 there was a war on, and Turing was employed by the government’s code-breaking unit at Bletchley Park, Buckinghamshire. They had already cracked the Germans’ Enigma code, but that had been a comparatively simple task, since they knew the structure and internal layout of the Enigma machine. They were then faced with a new and more daunting prospect: the Lorenz cipher, produced by an even more complex machine for which they had no blueprints. Turing’s genius, however, apparently knew no bounds, and his team eventually worked out its logical functioning. From this a method for deciphering it was formulated, but it required an iterative process taking hours of mind-numbing calculation to get a result. A faster way of processing these messages was needed, and to this end an engineer named Tommy Flowers designed and built Colossus.

Colossus was a landmark of the computing world- the first electronic, digital, and partially programmable computer ever to exist. Its mathematical operation was not highly sophisticated: it used vacuum tubes, together with light-emission and photosensitive detection systems- all state-of-the-art electronics at the time- to read the pattern of holes on a paper tape containing the encoded messages, and then compared these to another pattern of holes generated internally from a simulation of the Lorenz machine in different configurations. If there were enough similarities (the machine could obviously not find a precise match, since it didn’t know the original message content), it flagged that setup as a potential one for the message’s encryption, which could then be tested, saving many hundreds of man-hours. But despite its inherent simplicity, its legacy is one of proving a point to the world: that electronic, programmable computers were both possible and viable bits of hardware, and it paved the way for modern-day computing to develop.
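The compare-and-flag logic is easy to caricature in code. This is a loose sketch of the statistical idea only, under my own invented names and toy data- not Colossus’s actual circuitry or the real Lorenz wheel patterns:

```python
# Compare the tape's hole pattern against candidate patterns generated
# from simulated machine settings; flag settings that agree unusually often.
def agreement_count(tape_bits, candidate_bits):
    """Number of positions where the two hole patterns match."""
    return sum(a == b for a, b in zip(tape_bits, candidate_bits))

def flag_settings(tape_bits, candidates, threshold):
    """Return the candidate settings whose agreement meets the threshold."""
    return [name for name, bits in candidates.items()
            if agreement_count(tape_bits, bits) >= threshold]

tape = [1, 0, 1, 1, 0, 0, 1, 0]
candidates = {
    "setting A": [1, 0, 1, 0, 0, 0, 1, 0],  # agrees in 7 of 8 positions
    "setting B": [0, 1, 0, 0, 1, 1, 0, 1],  # agrees in 0 positions
}
print(flag_settings(tape, candidates, threshold=6))  # -> ['setting A']
```

The flagged settings were never a final answer, just a shortlist worth testing by hand- which is exactly why a fast but crude electronic counter saved so many man-hours.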