The FPS is Dead, Long Live the FPS

 

What do Condemned, Bioshock, The Chronicles of Riddick and Metroid Prime have in common? They’re all first person shooters… that aren’t. Yes, they all play out from a first person perspective and yes, they all involve shooting, but gunplay is not the primary focus of any of these titles. The Chronicles of Riddick, for example, has more in common with Splinter Cell than Halo, despite the science fiction setting. Since the release of Doom, the first person shooter has been a pillar of the games industry. This generation in particular has seen it become one of the best-selling genres, with many of the biggest franchises, including Halo, Call of Duty and Battlefield, consistently topping best seller lists on both sides of the Atlantic. This popularity, however, has come at a cost. In an attempt to appeal to a broader audience, games that in reality bear only a passing resemblance to the traditional FPS are branded and marketed in such a way as to play up their similarities. Some games do escape such gross generalisations. Bioshock and Metroid Prime were both able to sell themselves on the strength of their environments and atmospheric design. However, these are the exceptions to the rule.

Part of the problem stems from the time in which the term was coined. The last fifteen years have seen games become increasingly complex, both in terms of technology and design. Early games in the genre focused solely on shooting enemies from a first person perspective, with the odd keycard to find to break up the pacing. However, it wasn’t long before developers were finding new ways to exploit this perspective. The lead character could be an empty vessel, a cipher for the player to inhabit in the game world, and in just a few short years, games like the original Half Life were starting to push the possibilities of such a unique viewpoint, using it to draw in players as characters engaged them directly. These games became the prism that caused the genre to split and diversify, to the point that even ten years ago we saw releases such as Deus Ex, which were almost impossible to pigeonhole into any one genre. Since then, there has been a conscious effort among many developers to create games that can stand toe to toe with films and literature with regard to narrative and character design. This certainly doesn’t mean that any first person shooter with strong art design or engaging characters strays outside the definition of the genre. Recent iterations of the Call of Duty franchise have included several scenes that challenge more than precision with an assault rifle. However, it is still very much a first person shooter, a direct evolution in a line that can trace its ancestry back to Doom and Wolfenstein, as every interaction with the world around you is done through the medium of bullets, with the occasional grenade.

So if the genre label of the first person shooter is no longer sufficient to cover the array of games that fall under its umbrella, how should the genre be approached? One solution would be to create specific subsets within the genre, defining each game as a member of a species. The problem then becomes drawing a line between accuracy and pedantry. Is Modern Warfare a first person shooter? Or is it a military action game played out from a first person perspective with RPG elements in the multiplayer? Does Halo become a first person science fiction action game with third person vehicular elements? This type of subsetting is far too unwieldy and would ultimately overcomplicate the situation. Another option would be to follow in the direction of cinema and books, defining the genre through the theme of the narrative and setting. Condemned becomes a horror, Call of Duty an action adventure and so on. However, while such labels may be appropriate for passive entertainment, the manner of interaction, the game mechanics, cannot be ignored. Both Metroid and Halo belong to science fiction, yet they offer very different experiences within this setting due to the manner in which the player interacts with the world.

So what is to come of the indomitable first person shooter? As the industry expands and as developers find new ways to exploit the first person perspective, how will we continue to classify and categorise their results? A medium is evaluated by how it expresses itself, how it connects with the audience. This is true in all cases and can be judged through the plot, the art direction, the quality of its characters and, yes, the manner in which the audience interacts with it, be it through reading, watching or with a controller. It is not necessary to completely quantify every variable and deviation from a genre label; indeed, strict adherence to a template can only stifle creative expression and leave the industry to stagnate. At the same time, these descriptors are necessary for comparison and discussion, and a good indicator of how games are developing as time moves forward. While the claim that current genre labels are becoming defunct across the board can indeed stand, it is within the first person genre that the greatest diversity of experiences is being corralled into the narrowest of definitions. The language of videogames must keep pace with the development and growth of the medium, and if that means rethinking how we categorise our games, well then so be it.

This post originally appeared here

A Difficult Question

The most common choice offered to players upon starting a new game is the difficulty: easy, normal, hard or some variant are the most frequent options, though “casual” and “insane” often get dropped in depending on the title. The issue of difficulty in games tends to be raised periodically among gaming communities, but with this being the week that Dark Souls, a game whose marketing has centred on the numerous ways it will destroy your face, is released, I thought it might be timely to consider difficulty, gaming and their relationship as it is today.

For some people, the choice of difficulty is simple; hit normal and away you go. After all, it stands to reason that normal would be the mode in which the developers intended their game to be played. For others, myself included, the difficulty screen is the first major hurdle in starting a new game. What if, halfway through, the game turns out to be too easy or, worse, you hit a brick wall? This is an issue easily solved with the ability to change the difficulty at any time, an ability that many games still lack. However, it could be argued (and damn it, I’ll play Devil’s Advocate and argue it!) that having the choice of difficulty in the first place is an oddity, from both a historical and a practical approach to game design.

Let’s look at history first. It’s no secret that video games were originally designed to take your pocket change and leave you poor. The trick was to find a level of difficulty that would pummel the player into submission, while giving them the impression that with just one more try they might prevail. Not surprisingly, a choice of difficulty would have made little sense: should the player pick easy, there was the risk they could beat the game for a few pieces of shrapnel. This imperative in game design has mostly passed, with people paying up front for the “full experience” (excepting DLC and subscription based models, which is a topic for another post). To this end, developers now want players to see all that their game has to offer, which is no surprise, considering the cost and effort involved in modern games development.

Added to this, developers want players to see their games in the best light possible. Games are unique in that, unlike television or books, they actively prevent you from experiencing what they have to offer. You don’t have to beat a book or a TV show; you simply progress along a linear narrative path until the conclusion. That’s not to say they are passive experiences; far from it. Books, for example, require the reader to draw out their worlds in the mind, using imagination to convert text on a page into a living, breathing reality. Games, however, require active input from the player, who is forced to make decisions in order to progress. In some games, this element is elevated to include deciding how the narrative plays out, but every game makes demands on the player, even in the most linear narrative. Take cover, fall back, which gun to use, which direction to walk… the list is endless. When you boil it down, difficulty in games is about consequence. Say, for example, you’re playing Gears of War or Uncharted on the lowest difficulty setting and decide to rush the enemy. In this instance, it is likely you will succeed. However, bump it up to the highest setting and try the manoeuvre again. The odds swing back against the player. In the former instance, there is no real consequence for the player, while the harder difficulties punish poor decision making with death. This fundamentally alters the player’s relationship with the on-screen character, either removing the vulnerability the narrative dictates they must have, or reducing a legendary soldier to a rookie unable to take on a small group of militia. As a result, after clearing an entire temple of soldiers and helicopters, Drake can trigger a cutscene and be overwhelmed by one or two lackeys.

My intention isn’t to criticise game design by arguing that the difficulty feature should be removed; as I’ve said already, I think the choice should be offered to players at any point during the game, not just at the beginning, before they have a good idea of where the balance lies. However, it should highlight the oddity of having the choice in the first place. Even if you are just along for the ride, enjoying the spectacle, the reaction elicited by that spectacle is altered by the player’s struggle to reach that point. Which brings me back to Dark Souls, a game where the designers have deliberately set the bar high and removed the option to change the difficulty should the going get tough. Overcoming adversity is core to the game’s message, a message that would be diminished if the game could be set to “walk in the medieval fantasy park”. It is a reactionary move from a team that feels games have increasingly pandered to the player’s whims, patting them on the head with every minor victory. They are part of a school of thought that argues this trend robs players of a true sense of achievement. In life, overcoming true challenge is where the greatest satisfaction lies, and so it is with games as well. They have taken this notion to its logical extreme and the result is a game with the tagline “You will die”.

At times, I wonder if this is an elitist attitude to games. Certainly it would appear so in some cases; die too often in Ninja Gaiden and the game offers to bump down the difficulty, complete with a little pink ribbon for Ryu, so the player is constantly aware they are doing it ‘wrong’, that they are subpar. Wanted renamed its easy difficulty “Pussy” mode which, although in keeping with the tone of the movie and game, remains offensive for a variety of reasons.

Although the vast majority of decisions lie in the hands of the game designers, the degree to which players wish to struggle is often dictated by the players themselves. However, hopefully this feature has highlighted not just the impact the choice can have on the player’s perception of a game, but also how it seems to run counter to the designers’ wish to guide that perception. In some cases, for example Halo or Operation Flashpoint, playing on the hardest setting is almost like playing a different game; old tactics become redundant and once-tiny challenges become mountains. So next time you start up a new game, spare a second on that difficulty screen and ask how a simple choice could alter your entire opinion of the upcoming adventure.

This post originally appeared here

The Issue of Control

In the beginning, there was the D-pad. The player could move their character across a 2D plane with ease… and it was good. With Microsoft and Sony both set to follow Nintendo into the world of motion control, it seems that the industry is standing on the brink of great change.

The limits of what a controller can do have been stretched. The analogue stick, the staple of gaming since the PlayStation fully embraced the third dimension halfway through its life cycle (so much so that we soon demanded one for each thumb), has now been declared only half of the equation and, in the case of Kinect, it doesn’t even figure. The stick, surely the greatest addition to games since the ubiquitous D-pad (which still graces our controllers, despite the 360’s efforts to sully its name with a spongy mass of plastic), is so perfectly suited to navigating a 3D space that its addition to the way we control games was never really questioned. However, with motion control there are a number of avenues available to exploit, and the definitive method is far from being locked down, as is ably demonstrated by the fact that each of the big three companies employs very different technology to track motion. It is easy to suggest that this is the first time the industry has come to such a crossroads. However, this suggestion, as sensible as it sounds, would be wrong.

In the beginning was the D-pad. Well, not really. In the beginning there were a number of different ways to control games. The Magnavox Odyssey, the system considered to be the first legitimate home games console, had two knobs on the side of a box; the Atari had various joysticks, but also a little wheel controller. The Sinclair Spectrum had a keyboard. The ’80s were a time of discovery for videogames. Back then there were few industry standards, and many systems had multiple controllers, like the Commodore 64, which featured keyboard and joystick options.

These control choices had two main avenues of influence: the arcade and the home computer. The joystick had its home in the arcades. As the most adaptable of the arcade control systems (many of which were game specific), it made sense to adopt it as the method of control for any company attempting to create the “home arcade” experience. The keyboard? The result of trying to market machines such as the Commodore 64 as home computers that could help children with their homework. Clever, but parents never reckoned on Paperboy…

The Magnavox Odyssey

It was the NES that got it just right. The classic controller, the first that many of today’s gamers became familiar with, paired a D-pad with two buttons: A and B. With the NES, the home console got a control method to call its own. The NES didn’t mark the cross’s first appearance; it had several incarnations before industry legend (and Game Boy creator) Gunpei Yokoi settled on the now familiar design and added it to the Game & Watch series. It’s also worth noting that the NES actually predated a number of the systems mentioned above. It took time for the design to filter down and become standard, though tellingly the Sega Master System, released a year after the NES, featured a very similar pad as standard on its controller. However, by the time the ’80s had wrapped up, almost every system, home and handheld alike, featured the four way directional pad.

It’s easy now to look back and see that the D-pad was the obvious choice to carry forward. More practical than a keyboard and more precise than a joystick, it’s easy to wonder how it wasn’t thought of and adopted sooner. However, it is a situation in many ways similar to this that we are presented with today: the issue of control. Where are we going next? Right now, motion control seems to be the safe bet, though it’s possible the industry could veer off in another direction entirely given four or five more years (though, given the massive financial commitment all parties have made to developing motion control, it’s unlikely). It’s possible that in twenty years’ time, the games industry will look back and say with a smug smile that of course Kinect was never going to work, that its reach exceeded its technological grasp, that the Sony Move was too little too late for a company that needed something to separate it from its competitors, or that the Wii did little beyond highlight the technical and practical limitations of motion control.

While it would certainly be unfair to suggest that what lies before us is a repeat of the infamous Super Scope, hype is a cruel mistress. In the end, it will most certainly fall to software. The Wii had Wii Sports but failed to follow up with anything that suggested the remote could stir the imagination beyond clever tech demos. Should Sony or Microsoft release a title that people feel must be played, they will have legitimised their respective approaches to motion control.

Remember, even the D-pad needed Super Mario Bros.

This post originally appeared here

How the 3DS could learn from the PSP

There’s no question that, when it comes to handheld gaming, Nintendo owns the field. With the DS they did the impossible and made a device more popular than the Game Boy. The DS was like nothing the general consumer (and I use that word deliberately instead of ‘gamer’) had ever seen before, with a touch screen allowing interactions that needed no explaining or lengthy tutorials; almost anybody was immediately comfortable with the stylus. Nintendo built on the foundation of the DS with a strong line up of games to support it, and it is largely down to the success of games like Brain Training and Nintendogs, as well as a fantastic hardware revision in the DS Lite, that the handheld wunderkind became such a megahit.

Prior to its release, all signs pointed to the PSP being the future of handheld gaming. The idea of being able to play console quality games on the move seemed too good to be true. The ironic part of the story of the PSP is that, for the most part, it met its claims of a portable home console experience. Unfortunately for Sony, however, it turned out that most people were happy for home console games to stay at home.

To label the PSP a failure is a massive exaggeration and, while it never approached the success of Nintendo’s handheld, it certainly held its own for a number of years. It also did a number of things that Nintendo should take note of, specifically in the realm of online service. No matter the ferocity of your admiration for the DS, it’s tough to argue that it has capitalised on its wifi capabilities. With the 3DS, Nintendo has the opportunity to learn some lessons from the PSP. One of the most exciting features that Nintendo could ‘borrow’ is the ability to share content between the home system and the handheld. The ability to download titles from the Virtual Console and choose to play them on the Wii or the 3DS would be a really cool addition. E3 proved that the 3DS is ideal for handling N64 ports, so all of the Virtual Console titles could see the light of day on the handheld. Ideally the Virtual Console would appear on the 3DS separately, but if content cannot be shared, either by SD card or wirelessly, between the Wii and the 3DS, then the downloads will remain attached to my Wii.

We know from E3 that movies are coming to the 3DS, and the most obvious way to get them there would be through an online store. Much like Apple, Sony has an online destination where PSP minis, movies and even full PSP games can be downloaded straight to the machine, and really, if Nintendo is hoping for the 3DS to keep up on all fronts, it will need a service that is much more robust than what is currently available. Following on from this, of course, is proper online multiplayer. Yes, it’s available on the DS, but friend codes need to be rethought. Unless it’s possible to connect painlessly with friends, my Nintendo handheld will remain a single player platform.

The online service is only one aspect of the PSP that Nintendo would do well to look at. There are certainly others and I’d love to hear your suggestions, so drop a comment and let me know what you think.