The Rugby Challenge

I frequently feel like apologising on this blog for my all-too-frequent excursions into the worlds of my two main leisure activities: rugby and videogames. Today, however, inspired by the recent release of Rugby Challenge 2, I’m going to combine the two by highlighting the problems so frequently encountered when designing a rugby game.

Rugby is not a sport that can be said to have made a grand splash in the gaming world. Unlike football and basketball, which get big-budget EA Sports releases every year, EA’s only attempt at a rugby game in the last five years was a half-hearted and lazily designed attempt to cash in on the 2011 World Cup. It’s even worse for rugby league fans, who I believe have only ever had two games made about their sport. The reason for this is depressingly simple: money. FIFA sells because football is massively popular across pretty much the entire globe, giving it a huge market, and the popularity of American football sells copies of Madden by the bucketload in the wealthy USA. Rugby, however, has no such market ready and waiting for it; worldwide, around 4.5 million people are registered rugby players, and a rather optimistic guess might put the number of fans of the sport at perhaps 15 million. Those fans are spread fairly broadly across ages from 7 to 70, yet the majority of gamers fit into the 14-25ish age bracket, and not all of those are going to end up buying a rugby game anyway.

Put simply, a rugby game has nowhere near the potential market of many other sports: whilst FIFA 13 was able to sell more than 3.3 million units in its first week, Sidhe’s 2011 game Jonah Lomu Rugby Challenge sold just 430,000 units lifetime across PS3 and Xbox 360 combined. Admittedly it was competing against the RWC 2011 game (which sold roughly the same, and probably to the same people), but even in a year when rugby’s profile was at its highest, such comparatively meagre sales represent a big problem when game development and the acquisition of team licences are so massively expensive.

However, there’s very little that can be done about this until more people get interested in rugby and videogames respectively, so I’m now going to focus on the content of the games themselves, which presents a whole host of other issues. The two major titles representing rugby in the virtual world are EA Sports’ Rugby 08 (which, despite being five years old and barely different from the two previous EA incarnations, still has its adherents) and the previously mentioned Rugby Challenge. Having owned and played both (although not yet Rugby Challenge 2, which is why it doesn’t feature here), I feel confident in saying that, whilst both are pretty good in their own way, both have sizeable flaws. Rugby 08 is full of little cheats (such as a way of automatically winning every kickoff) that make the game far too easy once you are sufficiently experienced; the players have no personality or realism about their movements; its knowledge of the laws is dodgy in places; the difficulty settings are blunt as hell; the preset attacking moves are rubbish; lineouts are oversimplified; manually attacking produces highly unrealistic gameplay; the defensive moves it bangs on about in the promo material make no difference whatsoever; players frequently run through one another’s falling bodies; and for some reason the player has to manually select when he or she wants to cheat (which, given how unrealistically good the ref is, is a totally dumb idea). Rugby Challenge is, to my mind, a superior game, but it is no less flawed. In their efforts to make the game more free-flowing, the developers have almost completely done away with any semblance of structure: every move degenerates into one long spree of offloads, with no preset moves to help offset the issue. To frustratingly contrast with this, the rucking system guarantees a constant stream of annoyingly slow ball; lineouts and scrums are dysfunctional and just plain unrealistic respectively; the goal kicking is dumb; the player ratings are unrealistic (particularly for northern hemisphere players); you can’t take quick throw-ins; and the commentary is nowhere near as good as Rugby 08’s. And, just to compound the annoyance, neither has a realistic career mode, which severely damages replay value (an issue thankfully dealt with in this year’s Rugby Challenge 2).

Phew. Right, rant over, now to actually address the causes.

Aside from most studios being unwilling or unable to invest large amounts of money in developing a rugby game due to the limited market size, the main issue facing any rugby game concerns the nature of the sport itself. Rugby comprises a myriad of different battlegrounds and ways of playing, with players having to function both as team players and as individuals, on and off the ball. This makes controlling it from the perspective of the person with the ball, as all previous games have done, inherently difficult and unrealistic; in rugby, the players who run the dummy lines and provide a threat to the defence matter just as much as, if not more than, the one who ends up with the try. This process of each player’s individual work adding up to a concerted team effort is incredibly difficult to program, and to simulate it properly would require an AI system more sophisticated than anything seen in any other sports game. And that’s just the work done by the backs; accurately simulating forward play would be a nigh-on impossible task, so complex are the technique and decision-making that, in a real game, produce the rucking and scrummaging victories that can turn a match. The other issue is the level of control that should be given to the player: a more complex, detailed game would be more realistic, but would seriously risk either swamping the player with decisions and information as they tried to control fifteen players at once, killing the immersion, or automating everything and taking the player out of the equation so far that their individual skill ceased to matter. Finding a suitable middle ground between the Scylla and Charybdis of these two extremes would be a difficult, dangerous task for any game developer.

Still, despite all these problems and more, I personally think that it is far from impossible to make a great game that (relatively) accurately portrays modern rugby. And to find out exactly how I’d go about designing such a game, you can read my next post, in which I will tell you all about it…*

*That sounded way creepier than intended. Sorry.

One final note; due to developments in my personal life, posts are now only going to come twice a week, on Wednesday and Saturday. This may change again in the future.


What we know and what we understand are two very different things…

If the whole Y2K debacle over a decade ago taught us anything, it was that the vast majority of the population did not understand the little plastic boxes known as computers that were rapidly filling up their homes. Nothing especially wrong or unusual about this- there are a lot of things that only a few nerds understand properly, an awful lot of other stuff in our lives to understand, and in any case the personal computer had only just started to become commonplace. However, over twelve and a half years later, our general understanding does not appear to have increased to any significant degree, and we still remain largely ignorant of these little feats of electronic witchcraft. Oh sure, we can work and operate them (most of us, anyway), and we know roughly what they do, but as to exactly how they operate, precisely how they carry out their tasks? Sorry, not a clue.

This is largely understandable, particularly given what ‘understand’ means in a computing context. Computers are a rare example of a complex system of which an expert can genuinely understand, in minute detail, every single aspect: what each part does, why it is there, and why it is (or, in some cases, shouldn’t be) constructed to that particular specification. To understand a computer in its entirety is therefore an equally complex job, and this is one very good reason why computer nerds tend to be a quite solitary bunch, with few links to the rest of us and, indeed, the outside world at large.

One person who does not understand computers very well is me, despite the fact that I have been using them, in one form or another, for as long as I can comfortably remember. Over this summer, however, I had quite a lot of free time on my hands, and part of that time was spent finally relenting to the badgering of a friend and having a go with Linux (Ubuntu, if you really want to know) for the first time. Since I like to do my background research before getting stuck into any project, this necessitated quite some reading into the hows and whys of installing it, along with which came quite a lot of info about the workings and practicalities of my computer generally. I thought, then, that I might spend the next couple of posts detailing some of what I learned, building up a picture of a computer’s functioning from the ground up, and starting with a bit of a history lesson…

‘Computer’ was originally a job title, the job itself being akin to accountancy without the imagination. A computer was a number-cruncher, a supposedly infallible data-processing machine employed to perform jobs ranging from astronomical prediction to calculating interest. The job was a fairly good one, anyone clever enough to land it probably doing well by the standards of his age, but the output wasn’t always reliable; the human brain is not built for infallibility and, not infrequently, would make mistakes. Most of these undoubtedly went unnoticed or at least rarely caused significant harm, but the system was nonetheless inefficient. Abacuses, log tables and slide rules all aided arithmetic manipulation to a great degree in their respective fields, but true infallibility was unachievable whilst the process still relied on the human mind.

Enter Blaise Pascal, 17th-century mathematician and pioneer of probability theory (among other things), who invented the mechanical calculator in 1642, aged just 19. His original design wasn’t much more than a counting machine, a sequence of cogs and wheels so constructed as to be able to count and carry between units, tens, hundreds and so on (ie a turn of 4 spaces on the ‘units’ cog whilst a seven was already counted would bring up eleven), and it could also work with currency denominations and distances. It could even subtract, multiply and divide (with some difficulty), and moreover it proved an important point: that a mechanical machine could cut out the human error factor and reduce any inaccuracy to that of simply entering the wrong number.
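The carry principle described above- one cog ticking its neighbour over each time it passes from nine back to zero- is simple enough to sketch in a few lines of code. This is purely a toy illustration of the idea, not a model of Pascal’s actual mechanism, and the function name is mine:

```python
def turn_cog(digits, units_steps):
    """Advance the 'units' cog by units_steps positions.

    digits is a little-endian list of decimal cogs (units first, then
    tens, hundreds...). Each time a cog passes from 9 back to 0, the
    carry is propagated to the next cog along -- the essence of what
    Pascal's machine did mechanically.
    """
    carry, i = units_steps, 0
    while carry and i < len(digits):
        total = digits[i] + carry
        digits[i] = total % 10   # this cog's new position
        carry = total // 10      # whatever spills over moves up a cog
        i += 1
    return digits

# The example from the text: a seven already counted, then a turn of 4
# spaces on the units cog, bringing up eleven.
print(turn_cog([7, 0], 4))  # -> [1, 1], i.e. eleven
```

Multiplication and division on such a machine reduce to repeated turns of this kind, which is why the text notes Pascal’s device could manage them only “with some difficulty”.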

Pascal’s machine was both expensive and complicated, meaning only twenty were ever made, but his was the only working mechanical calculator of the 17th century. Several, of a range of designs, were built during the 18th century as showpieces, but by the 19th the release of Thomas de Colmar’s Arithmometer, after 30 years of development, signified the birth of an industry. It wasn’t a large one, since the machines were still expensive and of limited use, but de Colmar’s was the simplest and most reliable model yet. Around 3,000 mechanical calculators, of various designs and manufacturers, had been sold by 1890, but by then the field had been given an unexpected shake-up.

Just two years after de Colmar had first patented his pre-development Arithmometer, an Englishman by the name of Charles Babbage showed an interesting-looking pile of brass to a few friends and associates- a small assembly of cogs and wheels that he said was merely a precursor to the design of a far larger machine: his difference engine. The mathematical workings of his design were based on Newton polynomials, a fiddly bit of maths that I won’t even pretend to understand fully, but which could be used to closely approximate logarithmic and trigonometric functions. What made the difference engine special, however, was that the original setup of the device- the positions of the various columns and so forth- determined what function the machine performed. This was more than just a simple device for adding up; this was beginning to look like a programmable computer.
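For the curious, the trick the difference engine mechanised- the method of finite differences- can be sketched briefly. Seed the columns with a polynomial’s initial values and differences, and every subsequent value of the function falls out of pure addition, no multiplication required (which is exactly why cogs could do it; real engines approximated log and trig tables by fitting polynomials to them piecewise). The function names below are my own, for illustration only:

```python
def difference_table(f, x0, step, order):
    """The initial column settings for the engine: f(x0) followed by its
    forward differences, computed from order+1 sample points."""
    values = [f(x0 + i * step) for i in range(order + 1)]
    cols = []
    while values:
        cols.append(values[0])
        # Each pass replaces the list with its successive differences.
        values = [b - a for a, b in zip(values, values[1:])]
    return cols

def crank(cols, n):
    """Turn the handle n times: each column is bumped by the one below it,
    so successive values of f appear in the top column by addition alone."""
    cols = cols[:]
    out = []
    for _ in range(n):
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]
        out.append(cols[0])
    return out

# Tabulating f(x) = x^2 from its differences at x = 0:
print(crank(difference_table(lambda x: x * x, 0, 1, 2), 5))  # -> [1, 4, 9, 16, 25]
```

Change the initial column settings and the same cranking produces a different polynomial’s table- which is the sense in which the setup of the device “determined what function the machine performed”.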

Babbage’s machine was not the all-conquering revolutionary design the hype about it might have you believe. The British government commissioned him to build one for military purposes, but since Babbage was often brash- once claiming that he could not fathom the idiocy of the mind that would think up a question an MP had just asked him- and prized academia above fiscal matters and practicality, the project fell through. After investing £17,000 in his machine, the government realised that he had switched to working on a new and improved design, known as the analytical engine, and pulled the plug, so the machine never got made. Neither did the analytical engine, which is a crying shame; this was the first true computer design, with separate inputs for data and for the required program (which could be a lot more complicated than just adding or subtracting), plus an integrated memory system. It could even print results on one of three printers, in what could be considered the first human interface system (akin to a modern-day monitor), and had ‘control flow systems’ incorporated to ensure that the steps of a program were performed in the correct order. Since the analytical engine has never been built, we may never know whether it would have worked, but a later model of Babbage’s difference engine was built for the London Science Museum in 1991, yielding results accurate to 31 decimal places.

…and I appear to have run on a bit further than intended. No matter- my next post will continue this journey down the history of the computer, and we’ll see if I can get onto any actual explanation of how the things work.

The Dark Knight Rises

OK, I’m going to take a bit of a risk on this one- I’m going to dip back into the world of film reviewing. I’ve tried this once before over the course of this blog (with The Hunger Games) and it went about as well as a booze-up in a monastery (although it did get me my first ever comment!). However, never one to shirk a challenge, I thought I might try again, this time with something I’m a little more familiar with overall: Christopher Nolan’s conclusion to his Batman trilogy, The Dark Knight Rises.

Ahem

Christopher Nolan has never been one to make his plots simple and straightforward (he did do Inception, after all), but most of his previous efforts have at least tried to focus on only one or two things at a time. In The Dark Knight Rises, however, he has gone ambitious, trying to weave no fewer than six different storylines into one film. Not only that, but four of those try to explore entirely new characters, and a fifth pretty much retreads the whole ‘road to Batman’ origins story already done in Batman Begins. That places the onus of the film firmly on its characters and their development, and trying to do that properly for so many new faces was always going to leave everyone pushed for space, even in a film that’s nearly three hours long.

So, did it work? Well… kind of. Some characters seem real and compelling pretty much from the off, in the same way that the Joker did in The Dark Knight. Anne Hathaway’s Selina Kyle (not once referred to as Catwoman in the entire film) is a little bland here and there, and we don’t get to see much of the emotion that supposedly drives her, but she is (like everyone else) superbly acted and does the ‘femme fakickass’ thing brilliantly, whilst Joseph Gordon-Levitt’s young cop John Blake (who gets a wonderful twist to his character right at the end) is probably the most- and best-developed character of the film, adding some genuine emotional depth. Michael Caine is typically brilliant as Alfred, this time adding his own kick to the ‘origins’ plot line, and Christian Bale finally gets to do what no other Batman film has done before: make Batman/Bruce Wayne the most interesting part of the film.

However, whilst the main good guys’ story arcs are unique among Batman films in being the best parts of the film, some of the other elements don’t work as well. For someone who is meant to be a really key part of the story, Marion Cotillard’s Miranda Tate gets nothing that gives her character real depth- lots of narration and exposition, but we see next to none of her for huge chunks of the film, and she never feels like she matters very much. Tom Hardy’s Bane suffers from a similar problem: he was clearly designed in the mould of Ducard (Liam Neeson) in Begins, acting as an overbearing figure of control and power that Batman simply doesn’t have (rather than the pure terror of the Joker’s madness), but his actual actions never present him as anything other than a device to give the rest of the film a reason to happen, and he never appears to have any genuine emotional investment or motivation in anything he’s doing. Part of the problem is his mask- whilst clearly a key feature of his character, it hides his mouth and bunches up his cheeks into an immovable pair of blobs beneath his eyes, leaving him nothing visible to express feeling with and effectively turning him into a blunt machine rather than a believable bad guy. There’s also an entire arc concerning Commissioner Gordon (Gary Oldman) and his guilt over letting Batman take the blame for Harvey Dent’s death that is barely explored at all, though thankfully it’s so irrelevant to the overall plot that it might as well not be there.

It is, in many ways, a crying shame, because there are so many things the film does so, so right. The actual plot is a rollercoaster of an experience, pushing the stakes high and the action (in typical Nolan fashion) through the roof. The cinematography is great, every actor does a brilliant job in their respective roles, and a lot of the little details- the pit and its leap to freedom, the ‘death by exile’ sequence and the undiluted awesome that is The Bat- are truly superb. In fact, had Nolan just decided on a core storyline and focus and then stuck with it as a solid structure, I would probably still not have managed to wipe the inane grin off my face. But by being as ambitious as he has, he has squeezed screen time away from where it really needed to be, turning the whole thing into a structural mess that doesn’t always know where it’s going. It’s a tribute to how good the good parts are that the whole experience is still such good fun, but it’s a shame to see a near-perfect film let down so badly.

The final thing I have to say about the film is simply: go and see it. Seriously, however bad this review makes it sound, if you haven’t seen the film yet and you at all liked the other two (or any other major action blockbuster with half a brain), then get down to your nearest cinema and give it a watch. I can’t guarantee that it’ll be your greatest ever filmgoing experience, but I can guarantee that it’ll be a really entertaining way to spend a few hours, and you certainly won’t regret having seen it.