The Case for Piracy

Ha-ha, just kidding. I’m not going to be arguing for piracy, but I do want to make one observation about how our industry is dealing with this issue. Some commentators have been talking recently about the massive piracy afflicting the launch of Demigod. According to Stardock’s Brad Wardell, of the 140,000 connections to the main server during the game’s first week, only 18,000 were by legitimate customers. This ratio compares favorably (or is that unfavorably?) with the 90% piracy rate reported by the developers of World of Goo. I don’t want to comment on the viability of various DRM schemes, but – needless to say – unless a game is server-based (WoW, EVE, EverQuest), piracy is a bracing reality for game developers. However, I have an unscientific theory about the root of the problem:

For any given game, only around 10% of players are ever willing to purchase an original retail product.

Obviously, this proposition holds up for PC gaming, but I believe the same is true for console gaming (housemates sharing a copy, renting from Blockbuster over a weekend, friends loaning each other games, grabbing a cheap used copy from GameStop) and even board games (one gamer buying a copy for a group of friends). This reality is immutable – if DRM were perfect, the percentage would go up somewhat, but it would never come anywhere near 100%. Too many games exist for consumers to afford even a small percentage of them, and – more importantly – players’ individual interests in a specific game are always on a continuum. People on one extreme found a website about their favorite game while gamers on the other end might play it only once on a random Tuesday night at a friend’s house.

The interesting thing about this percentage is that it mirrors another important percentage – the number of players willing to spend money on so-called free-to-play games usually hovers near 5-10%. (RuneScape, a F2P example on the higher end, has reported figures around 12%.) The important thing about the free-to-play movement is that the business model turns the theory I posited above into a founding principle. In fact, the smartest F2P games use a dual-currency system so that the 5-10% cash-rich players can subsidize the time-rich ones. Ultimately, this model works because a place exists for everyone on the continuum, from gamers who just want to dip a toe to ones willing to drop thousands on microtransactions. Launching a traditional retail game and hoping to change your “piracy conversion” rate is fighting the current; launching a free-to-play game built from the start with multiple levels of player commitment is sailing with the current.
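The dual-currency mechanic can be sketched in a few lines of code. Everything here is hypothetical – the currency names, exchange rates, and market design are invented for illustration – but the core idea is visible: a sanctioned player-to-player market lets purchased currency flow to the players who pay with time instead of cash.

```python
# Toy model of a dual-currency F2P economy. All names and rates are
# illustrative, not taken from any real game.

class Player:
    def __init__(self, name):
        self.name = name
        self.gems = 0   # premium currency, bought with cash
        self.gold = 0   # soft currency, earned with time

    def buy_gems(self, dollars, rate=100):
        """Cash-rich path: $1 -> 100 gems (hypothetical rate)."""
        self.gems += dollars * rate

    def grind(self, hours, rate=50):
        """Time-rich path: 1 hour -> 50 gold (hypothetical rate)."""
        self.gold += hours * rate

def market_trade(buyer, seller, gems, price_in_gold):
    """A player-to-player market: paying players sell premium currency
    for soft currency, so their cash subsidizes the grinders."""
    assert buyer.gold >= price_in_gold and seller.gems >= gems
    buyer.gold -= price_in_gold
    buyer.gems += gems
    seller.gems -= gems
    seller.gold += price_in_gold

whale = Player("cash-rich")
grinder = Player("time-rich")
whale.buy_gems(10)      # spends $10 -> 1000 gems
grinder.grind(8)        # plays 8 hours -> 400 gold
market_trade(buyer=grinder, seller=whale, gems=200, price_in_gold=300)
print(grinder.gems, whale.gold)  # -> 200 300
```

The key design choice is the trade function: without a sanctioned way to exchange premium currency for soft currency, the two player populations never actually subsidize each other.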

Christopher Tin Interview

Here’s a good interview with my friend Chris where he talks about his experience with Civ4’s Baba Yetu and his upcoming album, Calling All Dawns. Must have been fun to record in Abbey Road! Here’s a good quote:

Liontamer: You’ve actually been to Video Games Live performances at both the Hollywood Bowl in LA and The Kennedy Center in DC. How often have you able to attend shows within the tour? As part of the regular composer meet-and-greets there, do you have any memorable stories of meeting with fans or fellow composers?

Chris: I try to attend the California ones; the only exception is the Kennedy Center show, which I thought was too good of an opportunity to pass up. On the whole, though, I don’t have a lot of time to be going to a lot of the concerts. As for stories from the Meet And Greets, my favorite is when I was sitting between my friend Soren Johnson (designer for Civ IV, currently on Spore) and Will Littlejohn (Guitar Hero). Will turned to us and said, “Hey guys, we just wanted you to know that while we were working on Guitar Hero, during all our lunch breaks we would play Civ IV.” To which Soren replied: “That’s funny, because during all our breaks on Civ, we would all play Guitar Hero!” That was a great little moment, and I think it speaks well to our close-knit community.

GD Column 5: Sid’s Rules

The following was published in the January 2009 issue of Game Developer magazine…

Most game developers are familiar with Sid’s dictum that “a good game is a series of interesting choices.” In fact, my co-columnist Damion Schubert started his recent article on player choice (October 2008) by referencing this famous quote. However, over the course of his career, Sid has developed a few other general rules of game design, which I heard him discuss many times during my seven years (2000-2007) at his studio, Firaxis Games. As these insights are quite practical lessons for designers, they are also worthy of discussion.

Double it or Cut it by Half

Good games can rarely be created in a vacuum, which is why many designers advocate an iterative design process, during which a simple prototype of the game is built very early and then iterated on repeatedly until the game becomes a shippable product. Sid called this process “finding the fun,” and the probability of success is often directly related to the number of times a team can turn the crank on the loop of developing an idea, play-testing the results, and then adjusting based on feedback. As the number of times a team can go through this cycle is finite, developers should not waste time with small changes. Instead, when making gameplay adjustments, developers should aim for significant changes that will provoke a tangible response.

If a unit seems too weak, don’t lower its cost by 5%; instead, double its strength. If players feel overwhelmed by too many upgrades, try removing half of them. In the original Civilization, the gameplay kept slowing down to a painful crawl, which Sid solved by cutting the map size in half. The point is not that the new values are likely to be correct – the goal is to stake out more design territory with each successive iteration.
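The intuition here is essentially binary search: bold changes cover the possible design space exponentially faster than timid nudges. A toy simulation makes that concrete – the numbers below are invented, but the gap in iteration counts is the point:

```python
# Toy comparison: how many playtests does it take to land within 10%
# of a hidden "correct" value, nudging by 5% versus doubling/halving?
# All numbers are illustrative; real tuning involves judgment, not a
# fixed rule.

TARGET = 80.0   # the (unknown) strength value that plays well
START = 10.0

def close_enough(value):
    return abs(value - TARGET) / TARGET <= 0.10

def tune(step_fn, value=START, max_tests=100):
    """Apply step_fn until the value feels right; count the playtests."""
    tests = 0
    while not close_enough(value) and tests < max_tests:
        value = step_fn(value)
        tests += 1
    return tests

nudge = tune(lambda v: v * 1.05)                        # "+5% and re-test"
bold = tune(lambda v: v * 2 if v < TARGET else v / 2)   # double or halve
print(nudge, bold)  # -> 41 3
```

In this made-up scenario, doubling reaches a playable value in 3 "playtests" while the 5% nudge needs 41 – and each real playtest costs a team days, not milliseconds.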

Imagine the design space of a new game as an undiscovered world. The designers may have a vague notion of what exists beyond the horizon, but without experimentation and testing, these assumptions remain purely theoretical. Thus, each radical change opens up a new piece of land for the team to consider before settling down for the final product.

One Good Game is Better than Two Great Ones

Sid liked to call this one the “Covert Action Rule,” a reference to a not-altogether-successful spy game he made in the early ’90s:

The mistake I made was actually having two games competing with each other. There was an action game where you break into a building and do all sorts of picking up clues and things like that, and then there was the story which involved a plot where you had to figure out who the mastermind was and what cities they were in, and it was an involved mystery-type plot. Individually, each part could have been a good game. Together, they fought with each other. You would have this mystery that you were trying to solve, then you would be facing this action sequence, and you’d do this cool action thing, and you’d get out of the building, and you’d say, “What was the mystery I was trying to solve?” Covert Action integrated a story and action poorly because the action was actually too intense – you’d spend ten minutes or so of real time in a mission, and by the time you got out, you had no idea of what was going on in the world.

In other words, even though both sections of the game were fun on their own, their co-existence ruined the experience because the player could not focus her attention on one or the other. This rule points to a larger issue, which is that all design choices only have value in relation to one another, each coming with its own set of cost/benefit trade-offs. Choosing to make a strategic game also means choosing not to make a tactical one. Thus, an idea may be “fun” on its own but still not make the game better if it distracts the player from the target experience. Indeed, this rule is clearly the reason why the Civ franchise has never dabbled in resolving each combat with an in-depth, tactical battle.

However, sometimes multiple games can co-exist in harmony with each other. Sid’s own Pirates! is an example of a successful game built out of a collection of fighting, sailing, and dancing mini-games. However, these experiences were always very short – a few minutes at the most – leaving the primary focus on the meta-game of role-playing a pirate. Each short challenge was a tiny step along a more important larger path, of plundering all Spanish cities or rescuing your long-lost relatives.

Another example of a successful mix of separate sub-games is X-Com, which combined a tactical, turn-based, squad-level combat game with a strategic, real-time, resource-management game. As with Pirates!, what makes X-Com work is that the game chose a focus – in this case, the compelling tactical battles between your marines and the invading aliens. The high-level, strategic meta-game exists only to provide a loose framework in which these battles – which could take as long as a half hour each – actually matter. One doesn’t fight the aliens to get to manage resources later; instead, one manages resources to get to perform better – and have more fun – in future battles.

Do your Research after the Game is Done

Many of the most successful games of all time – SimCity, Grand Theft Auto, Civilization, Rollercoaster Tycoon, The Sims – have real-world themes, which broadens their potential audience by building the gameplay around concepts familiar to everyone. However, creating a game about a real topic can lead to a natural but dangerous tendency to cram the product full of bits of trivia and obscure knowledge to show off the amount of research the designer has done. This tendency spoils the very reason why real-world themes are so valuable – that players come to the game with all the knowledge they already need. Everybody knows that gunpowder is good for a strong military, that police stations reduce crime, and that carjacking is very illegal. As Sid puts it, “the player shouldn’t have to read the same books the designer has read in order to be able to play.”

Games still have great potential to educate, just not in the ways that many educators expect. While designers should still be careful not to include anything factually incorrect, the value of an interactive experience is the interplay of simple concepts, not the inclusion of numerous facts and figures. Many remember that the world’s earliest civilizations sprang up along river valleys – the Nile, the Tigris/Euphrates, the Indus – but nothing gets that concept across as effectively as a few simple rules in Civilization governing which tiles produce the most food during the early stages of agriculture. Furthermore, once the core work is done, research can be a very valuable way to flesh out a game’s depth, perhaps with historical scenarios, flavor text, or graphical details. Just remember that learning a new game is an intimidating experience, so don’t throw away the advantages of an approachable topic by expecting the player to already know all the details when the game starts.
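As a sketch of how few rules that lesson actually requires, consider a toy tile-yield function. The terrain types and numbers below are invented for illustration – they are not Civilization’s actual values:

```python
# Illustrative tile-yield rule: rivers make early agriculture boom.
# Terrain types and numbers are made up, not Civilization's real values.

BASE_FOOD = {"grassland": 2, "plains": 1, "desert": 0, "hills": 1}

def food_yield(terrain, on_river=False, has_irrigation=False):
    food = BASE_FOOD.get(terrain, 0)
    if on_river:
        food += 1   # fertile floodplain bonus
    if has_irrigation and (on_river or terrain == "grassland"):
        food += 1   # irrigation requires fresh water
    return food

# The player discovers the lesson by settling, not by reading:
print(food_yield("desert", on_river=True, has_irrigation=True))  # -> 2
print(food_yield("desert"))                                      # -> 0
```

A river turns worthless desert into farmland; the player internalizes the Nile without reading a word about it.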

The Player Should Have the Fun, not the Designer or the Computer

Creating story-based games can be an intoxicating experience for designers, many of whom go overboard with turgid back stories full of proper nouns, rarely-used consonants, and apostrophes. Furthermore, games based on complex, detailed simulations can be especially opaque if the mysterious inner workings of the algorithmic model remain hidden from view. As Sid liked to say, with these games, either the designer or the computer was the one having the fun, not the player.

For example, during the development of Civilization 4, we experimented with government types that gave significant productivity bonuses but also took away the player’s ability to pick which technologies were researched, what buildings were constructed, and which units were trained, relying instead on a hidden, internal model to simulate what the country’s people would choose on their own. The algorithms were, of course, very fun to construct and interesting to discuss outside of the game. The players, however, felt left behind – the computer was having all the fun – so we cut the feature.

Further, games require not just meaningful choices but also meaningful communication to feel right. Giving players decisions that have consequence but which they cannot understand is no fun. Role-playing games commonly fail at making this connection, such as when players are required to choose classes or skills when “rolling” a character before experiencing even a few seconds of genuine gameplay. How are players supposed to decide between being a Barbarian, a Fighter, or a Paladin before understanding how combat actually works and how each attribute performs in practice? Choice is only interesting when it is both impactful and informed.

Thus, in Sid’s words, the player must “always be the star.” As designers, we need to be the player’s greatest advocate during a game’s development, always considering carefully how design decisions affect both the player’s agency in the world and his understanding of the underlying mechanics.

The Hidden Benefit of OnLive

One of the biggest stories to emerge from GDC 2009 was the unveiling of OnLive, a server-based gaming platform which would allow any PC or Mac, including bare-bones ones, with a fast network connection to play any game by running all the code – including the graphics rendering – on the server instead of on the local machine. In many ways, this service is a return to the “dumb terminal” model of the ’70s, where no calculations were run on the user’s computer itself. So far, reactions have been mixed. Osma Ahvenlampi argues that, due to network lag, this model could never work; Adam Martin claims that it could work if the servers are located intelligently. Keith Boesky points out that the actual business model is simply acquisition.

I don’t claim to know if OnLive’s specific tech will work or not, but I would like to talk about the implications of this potential shift to server-based games. (Even if OnLive doesn’t make it work, clearly this technology will arrive at some point.) Of course, we already have server-based games – World of Warcraft runs on numerous servers spread around the world, with appropriate bits of game info sent to thin clients running on local machines. However, a client is still a tricky piece of software, and as Raph Koster likes to remind us, “The client is in the hands of the enemy.”

With OnLive, the client is so thin, I’m not sure if it’s even appropriate to call it a client. It’s more like a video player. In fact, while the phrase “YouTube for Games” always refers to user-generated content, one should recall that YouTube had a second, perhaps more important, innovation: regardless of how a video was created, as long as viewers had Flash, they could watch it immediately. The same concept holds for OnLive – as long as you have their app, you can play any game capable of running on their servers.

The implications of this change are huge – simply put, it spells the end of client-server architecture. Developers no longer need to optimize what data is sent to the client and what is kept back. Or worry about cheating. Or piracy, for that matter. While these advantages are huge, of course, what really interests me is that making a game multi-player is now, essentially, trivial. Put another way, the set of developers making one-man MMO’s will now be larger than just Eskil Steenberg.

Writing multi-player games is very, very hard. Trying to keep everything in-sync between servers and clients in a safe, responsive, fair, and accurate manner is no small challenge. With a system like OnLive, these issues evaporate because there are no clients anymore. Developers simply write one game, run it on some server, and update it based on user actions fed in from the network. If such a technology existed when we made Civ4, not only could we have saved man-years of development time and testing, but we could have easily implemented advanced features (games-of-the-day, mod sharing, massive player counts, asynchronous play, democracy-game support, etc.) with very little effort. Of course, I don’t know if OnLive will be the one to do it, but – from a developer’s point-of-view – the importance of this change cannot be overstated.
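To see why the client-server problems evaporate, compare this to what a fully server-side game loop would look like. The sketch below is schematic – it is not OnLive’s actual architecture: clients send only raw inputs, the server runs the single authoritative copy of the state, and the “response” is a rendered video frame.

```python
# Schematic of a fully server-side multiplayer loop (not OnLive's real
# design): no client prediction, no state replication, no cheat surface.
# Clients only send inputs and receive rendered frames back.

from collections import deque

class ServerGame:
    def __init__(self):
        self.state = {"tick": 0, "players": {}}
        self.inbox = deque()   # raw inputs from all connected players

    def receive(self, player_id, command):
        """Called by the network layer; inputs are just queued."""
        self.inbox.append((player_id, command))

    def tick(self):
        # One authoritative simulation step. Nothing to reconcile,
        # because no other copy of the state exists anywhere.
        while self.inbox:
            pid, cmd = self.inbox.popleft()
            self.state["players"].setdefault(pid, []).append(cmd)
        self.state["tick"] += 1
        return self.render()   # this frame is streamed as video

    def render(self):
        return f"frame {self.state['tick']}"

game = ServerGame()
game.receive("alice", "move north")
game.receive("bob", "attack")
print(game.tick())  # -> frame 1
```

There is nothing to predict, reconcile, or validate, because no second copy of the game state exists anywhere.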

The Case for Metacritic

Over the last few years, Metacritic has become a popular whipping boy within the games industry. A recent example would be Adam Sessler’s bit at GDC’s journalist rant session. At the risk of beginning to sound like a reactionary contrarian, I feel a case needs to be made for Metacritic. Unlike my argument for used games (or, rather, for thinking critically about what we are trying to sell consumers for $60), I feel much less conflicted in this case, so let me state my thesis very clearly: Metacritic has been an incredible boon for consumers and the games industry in general. The core reason is simple – publishers need a metric for quality.

What should executives do if they want to objectively raise the quality bar at their companies? They certainly don’t have enough time to play and judge their games for themselves. Even if they did, they would invariably overvalue their own tastes and opinions. Should they instead rely on their own internal play-testers? Trust the word of the developers? Simply listen to the market? I’ve been in the industry for ten years now, and when I started, the only objective measuring stick we had for “quality” was sales. Is that really what we want to return to?

Yes, I know translating all ratings onto a 100-point scale distorts them – a C is not a 60 is not three stars – but we need to not let the perfect be the enemy of the good. What are the odds that we can get every outlet onto the same scoring scale? Not likely. Can Metacritic improve the way it converts non-numeric ratings into scores? Absolutely. However, the whole point of an aggregator is that these issues come out in the wash. When 50 opinions are being thrown into the machine, a 74 is actually different from a 73.
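A toy aggregator shows why the conversion noise washes out. The conversion tables below are invented – Metacritic’s actual mappings are not public (and it also weights outlets by stature) – but the principle holds: any single conversion error moves a large average by a fraction of a point.

```python
# Toy score aggregator: map heterogeneous rating scales onto 0-100,
# then average. These conversion tables are invented for illustration;
# Metacritic's real mappings and outlet weights are proprietary.

LETTER = {"A": 95, "A-": 91, "B+": 87, "B": 83, "B-": 79,
          "C+": 75, "C": 70, "C-": 65, "D": 50, "F": 25}

def to_100(score):
    if isinstance(score, str):       # letter grade, e.g. "B+"
        return LETTER[score]
    if isinstance(score, tuple):     # (stars, out_of), e.g. 4 of 5
        stars, out_of = score
        return round(100 * stars / out_of)
    return score                     # already on a 100-point scale

reviews = ["B+", (4, 5), 85, (7, 10), "A-", 78]
metascore = round(sum(to_100(r) for r in reviews) / len(reviews))
print(metascore)  # -> 82
```

With six reviews, a one-point conversion error shifts the average by about 0.17; with fifty reviews, by 0.02.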

I use Metacritic all the time, and I love it. It’s changed my game-buying (and movie-watching and music-listening) habits for the better, which of course funnels money into the pockets of deserving developers and encourages publishers to aim for critically-acclaimed products. Have we gotten so jaded that we have lost sight of what a wondrous thing this is? Metacritic puts an army of critics at our fingertips. Further, consumers are not morons who can’t judge a score within a larger context. We all realize that, due to the tastes of the average professional reviewer, some games are going to be over-rated and some will be under-rated.

Ultimately, the argument against Metacritic seems to revolve around whether publishers should take these numbers seriously. Some contracts are even beginning to include clauses tying bonuses to Metacritic scores. Others are concerned that publishers are too obsessed with raising their Metacritic averages. Actually, let’s think about that last sentence in detail. Note that when I just wrote “others,” I was referring to journalists, not to investors. As John Riccitiello famously said, “I don’t think investors give a shit about our quality.” How bizarre is it that once the game industry starts taking journalists’ work seriously, they complain about it?

I’ll give my own perspective on this issue. Over the years, I have seen many great ideas shut down because someone in charge thinks they won’t impact sales. However, when I am in an EA meeting in which we talk about the need to raise our Metacritic scores – and the concrete steps or extra development time thus required – I’ll tell you what I feel like doing. I feel like jumping for joy. How incredible is it to work for a publisher who cares about improving the quality of our games in the eyes of critics and uses an independent metric to prove it?

As for the remuneration issue, isn’t it a good thing that there is a second avenue for rewarding developers who have made a great game? Certainly, contracts are not going to stop favoring high game sales, so – hopefully – Metacritic clauses can ensure that a few developers with overlooked but highly-rated games will still be compensated. Now, if a game doesn’t have high sales and also doesn’t get a good Metacritic score, well, there’s a name for that type of game, and these developers should not be protesting. Further, developers also need to stop complaining that a few specific reviews are dragging down their Metacritic scores. Besides the fact that both good and bad reviews are earned, in a world without Metacritic, one low score from GameSpot, GameSpy, 1Up, or IGN becomes a disaster. Score aggregation, by definition, protects developers from too much power being in the hands of one critic.

Journalists also need to have the guts to give games a score and stick by it. Putting a score on a review doesn’t take away the ability to add nuance to one’s criticism. My favorite music book is the Third Edition of the Rolling Stone Album Guide. As the reviews were written by just four critics, I have learned to understand the exact difference between five and four-and-a-half stars (or, for that matter, between two-and-a-half and three stars). If you are a great reviewer, the score you give a game helps me place it in context with everything else you have rated. Moreover, your score lets you contribute, via Metacritic and all the other aggregators, to the meta-critique of games on the Net. What exactly is the problem here?

Two Thoughts on GDC 2009

2009 will not go down as my favorite GDC. In many ways, this year may have been the worst of the eight I have attended. However, to paraphrase Woody Allen, even when GDC is bad, it’s still pretty incredible. The problem was not one of organization or speaker selection or much of anything else that could have been controlled by the people in charge. Indeed, GDC 2009 was more notable for what was not said than for what was. More specifically, nothing even semi-official about the next console generation was mentioned anywhere. Even the rumor mill was pretty dry. Compare this year to, say, GDC 2004, and you’ll see a huge difference, as all three manufacturers were already beginning to jockey for position.

Furthermore, the online revolutions which have made GDC so fascinating lately (free-to-play, casual MMO’s, virtual goods, web-based gaming, social networks, etc.) are, at least from the conference’s perspective, old news now. Finally, an actual, profitable indie market is no longer a theoretical concept to be taken on faith – the success of Braid, N+, Desktop Tower Defense, Castle Crashers, and World of Goo proves the viability of micro-studios. Our industry can once again support the idiosyncratic visions of the type of single designer/programmers that served us so well in the ’80s (Bunten, Meier, Wright, Molyneux). Clearly, these transitions are still just beginning, but there are few left who would deny that massive changes are underway. The problem for GDC, perhaps, is that with so many new avenues open, most developers are now simply focused on execution. Hopefully, we should have some fascinating post-mortems in a few years.

One final note should be made about GDC’s current format, one which I haven’t seen mentioned elsewhere. For years and years, the conventional wisdom was that the first two days of the conference were a waste of time, composed entirely of bloated tutorials that stretched single topics thin over a numbing 8 hours. However, over the last three years or so, the organizers have nurtured a collection of summits – for casual games, for virtual worlds, for indies, for mobile, for AI, and so on – that are now a smorgasbord of interesting speakers and topics jammed into flexible time slots (sometimes only 30 minutes). Instead of the paltry four or five talks per day available during the main conference, one can see eight, nine, even ten presentations a day by jumping from summit to summit depending on one’s personal preferences. This mixing and matching is aided by the reduced size of the conference on those days – most of the summits were all located along a single hallway in the North Hall. The growth and development of these summits has led to an interesting inversion of GDC’s traditional balance – today, the first two days of GDC are actually more interesting than the “real” Wednesday-Friday conference.

This Will Surely End Badly

So, I have finally joined the pseudo-masses and am now on Twitter. I’m not sure how this will all play out – perhaps my blog will someday report that I last twittered 283 days ago – but it’s worth a try. I’ll be twittering my GDC thoughts this week (assuming I can make it work from my BlackBerry). Come to think of it, this might actually allow me to record the GDC notes I’ve always wanted to take but never did (or misplaced). Also, by twittering, I’ll get to skip writing my annual, three-months-late GDC summary! So, there’s that…

Mind the Gap

I am on a GDC panel this year on the overlaps, conflicts, and parallels between AI and game design. We’ve got a mix of designers (Alex and Josh) and AI programmers (Adam and Tara), so it should be an interesting conversation. Here’s the info:

(307) AI and Designers: Mind the Gap
Speaker: Soren Johnson (Designer & Programmer, EA Maxis), Alex Hutchinson (Creative Director, Electronic Arts Montreal), Joshua Mosqueira (Creative Director, Ubisoft Montreal), Adam Russell (Lecturer, Derby University), Tara Teich (Programmer, Double Fine)
Date/Time: Monday (March 23, 2009)   3:00pm — 4:00pm
Location (room): Room 2018, West Hall
Track: AI Summit
Format: 60-minute Panel
Experience Level: Intermediate

Session Description
Game design and AI development have always been close relatives. Indeed, defining a line that separates the two is almost impossible, as one cannot exist without the other – a feature that the AI cannot handle, for example, is worthless, and the behavior of the AI itself is core to a game’s pacing, challenge, and feel. Thus, almost every decision an AI programmer makes is essentially a gameplay decision, yet AI developers are neither hired as nor trained to be designers. On the other hand, pure designers are often at the mercy of AI programmers to turn their broad strokes concerning AI behavior into reality and have few options if the outcome is wrong. In this panel, we will explore ways to manage this gap between designers and AI programmers to help establish better practices for this important (and inevitable) collaboration.

The Perfect Strategy Podcast

It’s called Three Moves Ahead, and the cast of characters is Tom Chick, Bruce Geryk, Julian Murdoch, and Troy S. Goodfellow, all familiar names for strategy gamers who like to read about their hobby.

In the latest episode, they tackle an issue that I am going to be addressing in my next GD column (#5), which is whether separate strategic and tactical sub-games can live together happily in one single title. They use the Total War series as the primary data point, and I’m not surprised. I’ve never enjoyed the tactical battles of the series – too slow and ponderous for my RTS tastes weaned on StarCraft and Age of Kings – while also finding the strategic levels too vague and opaque to enjoy, with simply not enough meat on the bone. Having the top-level game be constantly interrupted by the unwelcome tactical battles certainly didn’t help matters either. However, the mix can be done well, as games like Lords of the Realm or X-Com prove. The trick, in my mind, is to make sure that one half of the game is always subservient to the other half. X-Com, for example, is clearly about the tactical combat while Lords of the Realm is clearly about the strategic level. Still, in general, I’m not sure it’s a challenge that is worth tackling. It’s a lot easier to make one great game than two good (let alone great) ones that actually fit together.

Why Multiplayer is So Important

I was on Amazon the other day, and it struck me how well some older titles are holding their price points, especially older titles with a compelling multiplayer component. These games are still making significant profits for their publishers over a year and a half after their release. Perhaps the most important reason is that gamers tend to hold onto games with fun multiplayer – not giving GameStop a glut of used copies to drive the price down across all retailers. Consider these prices:

PS3 – Call of Duty 4: $56.99 (’07) vs. Metal Gear Solid 4: $39.99 (’08)

360 – Halo 3: $36.99 (’07) vs. Mass Effect: $19.99 (’07)