Tuesday, October 25, 2011

Doing it Wrong - Obscure: The Aftermath

I can't say this was the best $10 I ever spent. If nothing else, though, Obscure: The Aftermath (for the Wii) perfectly illustrates a concept we've discussed in class: the difference between giving players a choice and a problem.
I haven't played the original Obscure (or the other sequel I think it had), but The Aftermath is basically a decent survival-horror/adventure concept that turned into a campy horror flop. It's not totally awful, but there's a lot wrong with it; too much to go on about here, so I'll cut to the chase.

One of the key mechanics in this game is its system of teamwork. You have a cast of characters at your disposal, each with unique abilities that help you progress. The central campaign is more or less single-player, but the game strongly encourages you to try 2-player co-op. Otherwise, you'll control one character while a second follows you, occasionally switching between the two.

Here's the trouble: each of the 4 or so characters (I think you meet as many as 8 later in the game) has a unique ability that you need in order to pass certain obstacles or solve certain puzzles. So whatever problem lies in your (completely linear) path determines which two characters you choose to have active. If you choose wrong, or more likely run into a barrier you didn't expect, you have to go back to the group, pick a new 2-person party, and retrace your steps. You know what's virtually never fun? Running through the same empty halls over and over.

Worse, that's the only thing distinguishing how the different characters play. They all run at the same speed. They all swing melee weapons in the same awkward way. They're all awful shots with a gun. The only thing that might make a player choose one over another is personal preference (they all at least have some personality), but as I said, that gets trumped by whatever barrier or puzzle is in your path. Hence, teams in this game are not a choice; they're just a problem.

As much as I lament that mechanic, though, this game may yet be salvageable from the sea of crap games to the land of mediocre, or even decent. I'll probably revisit it in a future post after I've played through further.

Saturday, October 15, 2011

Quicktime Events - Pros and Cons


Quicktime Events attempt a balancing act between creating good cinematics and giving the player control. Sometimes it results in a fun experience, but the approach is not without flaws.


God of War was hailed by many as something of a pioneer in Quicktime Events. The game actually used both QTEs and traditional non-interactive cutscenes. The latter may or may not have been a good decision: the dramatic difference in graphics quality between the cutscenes and gameplay may have been great cinematically, but it certainly broke immersion.


The reason QTEs worked in God of War is that they let the player act out brutal, visceral kill cinematics on in-game enemies while still feeling genuinely involved in them. The player not only watched it happen and heard the sound effects; they felt the vibration of the controller in their hands as they pressed the buttons that made it happen: direct feedback for their actions. This delivered the aesthetic impact of those over-the-top cinematics without breaking the player's sense of engagement in the action. It was a level of complex sensory experience most players had rarely seen in games before, and merely watching something can't compare to that.


Note my main point above was that QTE worked well in gameplay, not in a cutscene. As I mentioned in a previous post, players sometimes really want cutscenes to be a break in immersion, either because they need to stop and take a breather from the action, or else they are just used to not being quite as alert once the cutscene starts rolling. Enter Resident Evil 4:
This game presented scenes that looked exactly like traditional cutscenes but would sometimes throw in a few Quicktime Events. The main problem is that these moments caught most gamers off-guard, resulting in Leon dying and the player having to restart the scene. Starting over always breaks immersion, so if the makers of RE4 were hoping to keep the player fully engaged, they dropped the ball there.
Is there anything I can say that Yahtzee didn't already say better (and more quickly)?


Game developers generally want good cinematic quality while still keeping the player engaged, so in many cases some form of QTE may seem like a good idea. QTEs can sometimes be effective, but they don't carry the engagement of normal gameplay, and many players are simply getting sick of seeing them, so developers would be well advised to start looking into alternatives.

Cutscenes vs. Active Narrative

Storytelling in games has historically taken a page from other forms of media by using "cutscenes," where control is taken away from the player in order to advance the story. This practice can break immersion, though, and it's certainly not the only option developers have for driving the stories in their games.

For contrast, let's reexamine Portal for a minute. Portal doesn't use the cutscene technique for its storytelling (the very ending after the final boss is the one exception). The narrative is carried forward through the items and events the player encounters, as well as through the ever-present voice of GLaDOS. Direct control is never really taken away from the player until the game is over, and this makes the experience much more engaging, allowing the story to move forward without breaking the "flow" of the game.

Just to play devil's advocate for a moment, though, it's worth noting that some players really do like cutscenes in games. If gameplay is fast-paced or frustrating, the chance to sit back and enjoy watching a cutscene can feel like a welcome break from the action, or even a reward for conquering a challenge. (It's also part of the reason some players hate Quicktime Event scenes like those in Resident Evil 4, but I'll revisit that later.) It still breaks immersion, but sometimes gamers are okay with that. This is by no means an excuse for taking control away from the player, but it's something else to think about.

Games like Portal prove that narrative in a game need not be delivered purely through non-interactive cutscenes. For the most part, players want to be involved in a game, not casually observing it. Games are an interactive medium and should usually be designed as such, whether they are narrative-driven or not.

Tuesday, October 4, 2011

On Flow - Dynamic vs. Chosen Difficulty

While I will be touching on the game flOw, I want to focus on the larger concept of Flow in games. Flow, as I understand it, is basically a perfect balance of difficulty that engages the player without being too hard (frustrating) or too easy (boring). Many games struggle with this, since different gamers have different ability levels and learn at different rates.


flOw seems to be widely hailed as an addictive game that creates Flow well. It has dynamic difficulty adjustment, but beyond that it creates an aesthetic quality that relaxes the player and immerses them in the experience. The evolution aspect was rather clever: evolving your creature could make it look cooler and stronger, which was usually pleasing to the player, but it also made it a bigger, easier target for enemies to hit. Players who wanted an easier time may actually have been well advised to avoid eating much of anything and just race through the depths as fast as they could.


I would criticize a few things in flOw, though. Despite striving for a perfect DDA that matches the game to the player, it still occasionally causes frustration, especially in the lower depths. When you fail an encounter with an enemy, your creature gets sent back up to a previous level, but this doesn't feel like the game going easier on you so much as just creating a setback. If you ate all the creatures on that level, they don't come back, so there's really nothing to do there except pluck up the courage to dive back down again.

Beyond that, I felt the minimalist, explain-nothing approach of the game may be something of a double-edged sword. It may help immerse the player early on, but it also means the player has no direct control over the difficulty. The natural inclination to eat things and dive deeper gradually increases the difficulty, and the player won't realize this until enemies start roughing them up.

In talking about DDA, I'm reminded of an old favorite of mine called God Hand. The goal of this action/adventure game was simply to beat up every bad guy that crossed your path. Many gamers loved (or hated) God Hand for its difficulty, but the game not only offered a traditional choice of difficulty level, it also attempted DDA at a time when such a thing was uncommon in combat-centric games. The dynamic difficulty changes and the player-chosen difficulty worked hand in hand, or at least tried to.
God Hand employed a difficulty level system consisting of four levels: Level 1, 2, 3, and Level DIE (which, for most people, was appropriately named). Every blow the player dealt to enemies slowly filled a gauge on the left of the screen; that same gauge decreased whenever the player took a hit. When the gauge filled, the difficulty level increased by one; if it emptied because the player was taking a beating, the level dropped.

So what did this level do (besides awarding more gold per enemy defeated)? Quite a lot. When the level increased, enemies seemingly got smarter. They reacted and counter-attacked more frequently, moved more quickly, and even the animations of their attacks sped up, meaning the player had to act and react more quickly. At Level 1, enemies would square off against the player one at a time. At higher levels, they ganged up on the player all at once, and also attacked the player from behind (which you couldn't see coming because of the camera's fixed over-the-shoulder view).

Now, the system wasn't fully automatic. Aside from the choice of Easy, Normal, or Hard when starting a new game, Easy and Normal modes gave the player a move where they'd grovel at the enemies' feet, forcing the difficulty down to Level 1. The player could also taunt enemies to make them briefly faster and more aggressive.
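For the programmers out there, a gauge system like the one described above is pretty simple to sketch out. This is just a toy illustration of the idea, not God Hand's actual implementation: the class name, gauge increments, and the decision to reset the gauge on a level change are all my own assumptions.

```python
class DifficultyGauge:
    """Toy sketch of a God Hand-style dynamic difficulty gauge.

    The gauge runs from 0.0 (empty) to 1.0 (full). Landing hits fills
    it; taking hits drains it. Filling it bumps the difficulty level
    up one; emptying it drops the level down one. All numeric values
    here are invented for illustration, not the game's real tuning.
    """

    LEVELS = ["Level 1", "Level 2", "Level 3", "Level DIE"]

    def __init__(self):
        self.level = 0    # index into LEVELS
        self.gauge = 0.0  # 0.0 (empty) .. 1.0 (full)

    def player_lands_hit(self, amount=0.25):
        self.gauge += amount
        if self.gauge >= 1.0:
            if self.level < len(self.LEVELS) - 1:
                self.level += 1
                self.gauge = 0.0  # start fresh at the new level
            else:
                self.gauge = 1.0  # already at Level DIE; clamp

    def player_takes_hit(self, amount=0.25):
        self.gauge -= amount
        if self.gauge <= 0.0:
            if self.level > 0:
                self.level -= 1
                self.gauge = 1.0  # drop down, gauge refilled
            else:
                self.gauge = 0.0  # already at Level 1; clamp

    def grovel(self):
        # The player-initiated grovel move forces the difficulty
        # all the way back down to Level 1.
        self.level = 0
        self.gauge = 0.0

    def current_level(self):
        return self.LEVELS[self.level]
```

Whether the real game resets or carries over the gauge when the level changes, I honestly couldn't tell you; the reset here is just one simple choice. The point is how little machinery it takes to get "play well, game gets harder; get beaten up, game eases off."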

My point: the system wasn't perfect, but we should really see more of this in gaming. The DDA is great, but giving the player some direct control over the difficulty allows them to steer gameplay in the direction they want in case the DDA fails to fully engage them. A truly perfect DDA would be nice, but it's usually unrealistic to hope for.

Defying Genre - Portal

So, we've discussed in class how the common genres used to classify games are not very well defined. This may be in large part because games themselves are often difficult to really define in terms of one another. You can identify common elements, but to strictly classify them is tricky.

For example, let's examine Portal, which I had the pleasure of playing for the first time last week. Technically, it would be accurate to describe it as a first-person shooter: the game is played from a first-person perspective, and the player has a gun they shoot with. It just so happens the gun shoots portals. The reason we shy away from the term "first-person shooter" here is that most games under that banner are all about killing things, whereas Portal is about solving puzzles, getting past obstacles, and reaching the exit.

There are turrets that will try to kill you, but the strategy for defeating them is VERY different from what someone who plays a lot of first-person shooters would expect. You can't directly attack the turrets. Instead, the player has to sneakily use portals to knock them over, drop things onto them, or otherwise disable them indirectly.

This discussion brought up the question in my mind: Why even bother with genres then? The ultimate reason seems to be that we as human beings feel more comfortable when we can define something, quantify it, and place it under a category. Things that defy definition bother us. Besides, grouping similar things together tends to be a good business model. That's why Amazon and others will suggest similar products after you look at or buy something.

While the concept of genre may be useful for gaming, games seem too diverse to outright define in such a way. The same could be said of other forms of media, for that matter. The right approach, it seems, is not to define the game as a whole, but rather to identify the elements it contains.

Portal is not a first-person shooter, but it contains first-person shooter elements.

This method of description is more accurate, and it still allows us to group Portal with other games that share similar mechanics. It may not let us completely define the game inside and out, but given how games like Portal can surprise us with what they offer, we really shouldn't be trying to anyway.