Great Trip Interview

GameSetWatch recently posted an interview with the always interesting Trip Hawkins, founder of EA, 3DO, and Digital Chocolate. Here’s a good quote on the difference between the core gamer (who buys PC/console games) and the social gamer (who plays Web/Facebook games):

I think the hardcore gamer wants to pay for the game, and if they can, they like to pay for it once. And then they want the game to be really deep, really immersive. They want to play it for hours and hours, and they want to really master it.

And if they happen to be playing with other people, they want to beat them. They want to compete, and they need to win. I think for that hardcore gamer — and of course, I am one — for me that part of gaming has always been about wanting to prove that I’m competent. You know, I don’t want somebody to beat me because they spend more money on virtual items, right?

And also, I don’t want to feel like I’m stupid, so I don’t want to pay every month. I think I should be able to buy the game once and just play it, you know? Switch to this omni gamer, somebody that’s really not that competitive about it. They don’t have the time to spend a lot of time on a particular game. They don’t want to be overwhelmed about it.

They kind of like it to be free. They’re much more interested in the potential social connections they’re making with other people. And when they make those social connections, they don’t want to have somebody come in and crush them that’s viciously competitive.

They want to have it be a much more casual experience. And that is the audience that’s more likely to pay for the virtual items when they decide that the items give them style or allow them to be more competitive without having to make the time investment.

Of course, that’s something that really irritates the [World of] Warcraft customer, and that’s why it’s such a battle for Blizzard, trying to figure out, “Well, what do we do about the fact that Warcraft is so successful? We’re attracting this more mainstream audience that doesn’t want to spend all the hours doing gold farming in the game. They want to just go buy some gold and get on with it.”

The Case for Piracy

Ha-ha, just kidding. I’m not going to be arguing for piracy, but I do want to make one observation about how our industry is dealing with this issue. Some commentators have been talking recently about the massive piracy afflicting the launch of Demigod. According to Stardock’s Brad Wardell, of the 140,000 connections to the main server during the game’s first week, only 18,000 were by legitimate customers. This ratio compares favorably (or is that unfavorably?) with the 90% piracy rate reported by the developers of World of Goo. I don’t want to comment on the viability of various DRM schemes, but – needless to say – unless a game is server-based (WoW, EVE, EverQuest), piracy is a bracing reality for game developers. However, I have an unscientific theory about the root of the problem:

For any given game, only around 10% of players are ever willing to purchase an original retail product.

Obviously, this proposition holds up for PC gaming, but I believe the same is true for console gaming (housemates sharing a copy, renting from Blockbuster over a weekend, friends loaning each other games, grabbing a cheap used copy from GameStop) and even board games (one gamer buying a copy for a group of friends). This reality is immutable – if DRM were perfect, the percentage would go up somewhat, but it would never come anywhere near 100%. Too many games exist for consumers to afford even a small percentage of them, and – more importantly – players’ individual interests in a specific game always fall on a continuum. People at one extreme will found a website devoted to their favorite game, while gamers at the other end might play it only once on a random Tuesday night at a friend’s house.
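
As a quick sanity check, the reported figures above actually line up with this theory; here’s the arithmetic (a rough illustration only, since first-week server connections are an imperfect proxy for unique players):

    # Rough check of the "~10% ever buy" theory against the reported figures
    demigod_connections = 140_000  # first-week connections to Demigod's main server
    demigod_legit = 18_000         # connections by legitimate customers
    print(f"Demigod legit share: {demigod_legit / demigod_connections:.1%}")  # ~12.9%

    world_of_goo_piracy = 0.90     # piracy rate reported by the developers
    print(f"World of Goo legit share: {1 - world_of_goo_piracy:.0%}")         # 10%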

The interesting thing about this percentage is that it mirrors another important one – the percentage of players willing to spend money in a so-called free-to-play game usually hovers near 5-10%. (RuneScape, a F2P example on the higher end, has reported figures around 12%.) The important thing about the free-to-play movement is that its business model turns the theory I posited above into a founding principle. In fact, the smartest F2P games use a dual-currency system so that the 5-10% of cash-rich players can subsidize the time-rich ones. Ultimately, this model works because a place exists for everyone on the continuum, from gamers who just want to dip a toe in to ones willing to drop thousands on microtransactions. Launching a traditional retail game and hoping to change your “piracy conversion” rate is fighting the current; launching a free-to-play game built from the start with multiple levels of player commitment is sailing with it.
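
To make the dual-currency idea concrete, here is a minimal sketch; the currency names, grind rate, price point, and exchange rate are all hypothetical rather than drawn from any particular game:

    # Hypothetical dual-currency sketch: "coins" are earned by playing, while
    # "gems" are bought with real money. A player-to-player exchange lets the
    # cash-rich trade gems for the coins that the time-rich grind out, so money
    # flows one way and play time flows the other.

    class Player:
        def __init__(self, name):
            self.name = name
            self.coins = 0  # soft currency: earned through play
            self.gems = 0   # hard currency: purchased with cash

        def play_session(self, hours):
            self.coins += hours * 100  # assumed grind rate: 100 coins/hour

        def buy_gems(self, dollars):
            self.gems += dollars * 10  # assumed price point: 10 gems/dollar

    def exchange(buyer, seller, gems, rate=500):
        """Trade the buyer's gems for the seller's coins at `rate` coins per gem."""
        coins = gems * rate
        if buyer.gems >= gems and seller.coins >= coins:
            buyer.gems -= gems
            buyer.coins += coins
            seller.coins -= coins
            seller.gems += gems

    rich = Player("cash-rich")
    rich.buy_gems(dollars=5)         # 50 gems for $5
    grinder = Player("time-rich")
    grinder.play_session(hours=100)  # 10,000 coins
    exchange(rich, grinder, gems=20) # rich gets 10,000 coins; grinder gets 20 gems

The key property is that every trade is player-to-player, so the developer sells convenience to one group without devaluing the other group’s time.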

The Hidden Benefit of OnLive

One of the biggest stories to come out of GDC 2009 was the unveiling of OnLive, a server-based gaming platform that would allow any PC or Mac with a fast network connection, even a bare-bones machine, to play any game by running all the code, including the graphics rendering, on the server instead of on the local machine. In many ways, the service is a return to the “dumb terminal” model of the ’70s, in which no calculations were run on the user’s own computer. So far, reactions have been mixed. Osma Ahvenlampi argues that, due to network lag, this model could never work; Adam Martin claims that it could work if the servers are located intelligently. Keith Boesky points out that the actual business model is simply acquisition.

I don’t claim to know whether OnLive’s specific tech will work or not, but I would like to talk about the implications of this potential shift to server-based games. (Even if OnLive doesn’t make it work, this technology will clearly arrive at some point.) Of course, we already have server-based games – World of Warcraft runs on numerous servers spread around the world, with the appropriate bits of game info sent to thin clients running on local machines. However, a client is still a tricky piece of software, and as Raph Koster likes to remind us, “The client is in the hands of the enemy.”

With OnLive, the client is so thin that I’m not sure it’s even appropriate to call it a client. It’s more like a video player. In fact, while the phrase “YouTube for Games” always refers to user-generated content, one should recall that YouTube had a second, perhaps more important, innovation: regardless of how a video was created, as long as viewers had Flash, they could watch it immediately. The same concept holds for OnLive – as long as you have their app, you can play any game capable of running on their servers.

The implications of this change are huge – simply put, it spells the end of client-server architecture. Developers no longer need to optimize what data is sent to the client and what is kept back. Or worry about cheating. Or piracy, for that matter. Those advantages are substantial, but what really interests me is that making a game multi-player becomes, essentially, trivial. Put another way, the set of developers making one-man MMO’s will now be larger than just Eskil Steenberg.

Writing multi-player games is very, very hard. Keeping everything in sync between servers and clients in a safe, responsive, fair, and accurate manner is no small challenge. With a system like OnLive, these issues evaporate because there are no clients anymore. Developers simply write one game, run it on a server, and update it based on user actions fed in from the network. If such a technology had existed when we made Civ4, not only could we have saved man-years of development and testing time, but we could have easily implemented advanced features (games-of-the-day, mod sharing, massive player counts, asynchronous play, democracy-game support, etc.) with very little effort. Of course, I don’t know if OnLive will be the one to do it, but – from a developer’s point of view – the importance of this change cannot be overstated.
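
To see why the problem collapses, here is a minimal sketch of a fully server-hosted game loop under this model; the interfaces (ViewerStream, Game, and so on) are hypothetical stand-ins, not OnLive’s actual API:

    import time
    from collections import deque

    class ViewerStream:
        """Stand-in for one viewer's connection: input events in, video frames out."""
        def __init__(self, player_id):
            self.player_id = player_id
            self.pending_events = deque()

        def poll_input(self):
            while self.pending_events:
                yield self.pending_events.popleft()

        def send_frame(self, frame):
            pass  # would hand the rendered frame to the video encoder/streamer

    class Game:
        """Minimal stub of the hosted game itself."""
        running = True
        def apply_input(self, player_id, event): ...
        def update(self, dt): ...
        def render(self, viewpoint): return b""

    def run_hosted_game(game, streams, fps=30):
        """The entire game, rendering included, runs here on the server."""
        frame_budget = 1.0 / fps
        while game.running:
            start = time.monotonic()
            # 1. Feed every viewer's raw input into the one authoritative simulation.
            for stream in streams:
                for event in stream.poll_input():
                    game.apply_input(stream.player_id, event)
            # 2. Advance the simulation. No state is replicated to clients, so
            #    there is no sync code to write and nothing for a cheater to modify.
            game.update(frame_budget)
            # 3. Render and stream one encoded frame per viewer.
            for stream in streams:
                stream.send_frame(game.render(viewpoint=stream.player_id))
            # 4. Sleep off whatever remains of the frame budget.
            time.sleep(max(0.0, frame_budget - (time.monotonic() - start)))

Adding another player is just adding another stream to the list, which is why multi-player becomes close to trivial in this model.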

The Case for Metacritic

Over the last few years, Metacritic has become a popular whipping boy within the games industry. A recent example would be Adam Sessler’s bit at GDC’s journalist rant session. At the risk of beginning to sound like a reactionary contrarian, I feel a case needs to be made for Metacritic. Unlike my argument for used games (or, rather, for thinking critically about what we are trying to sell consumers for $60), I feel much less conflicted in this case, so let me state my thesis very clearly: Metacritic has been an incredible boon for consumers and the games industry in general. The core reason is simple – publishers need a metric for quality.

What should executives do if they want to objectively raise the quality bar at their companies? They certainly don’t have enough time to play and judge their games for themselves. Even if they did, they would invariably overvalue their own tastes and opinions. Should they instead rely on their own internal play-testers? Trust the word of the developers? Simply listen to the market? I’ve been in the industry for ten years now, and when I started, the only objective measuring stick we had for “quality” was sales. Is that really what we want to return to?

Yes, I know translating all ratings onto a 100-point scale distorts them – a C is not a 60 is not three stars – but we should not let the perfect be the enemy of the good. What are the odds that we can get every outlet onto the same scoring scale? Slim. Can Metacritic improve the way it converts non-numeric ratings into scores? Absolutely. However, the whole point of an aggregator is that these issues come out in the wash. When 50 opinions are being thrown into the machine, a 74 is actually different from a 73.
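
To see how aggregation washes those issues out, consider a naive sketch; the letter-grade table and the five-star assumption below are mine, since Metacritic’s actual conversion rules are its own:

    # Naive score aggregation: normalize everything to 100 points, then average.
    LETTER = {"A": 93, "A-": 90, "B+": 87, "B": 83, "B-": 80, "C+": 77, "C": 73}

    def to_100(kind, value):
        """Convert one review's rating onto a 100-point scale."""
        if kind == "percent":
            return value            # already on a 100-point scale
        if kind == "stars":
            return value / 5 * 100  # assumes a five-star scale
        if kind == "letter":
            return LETTER[value]    # note: a C maps well above 60 here

    reviews = [("percent", 85), ("stars", 4.0), ("letter", "B+"),
               ("percent", 70), ("stars", 4.5)]
    metascore = sum(to_100(k, v) for k, v in reviews) / len(reviews)
    print(round(metascore))  # 82: any single conversion quirk is diluted by the rest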

I use Metacritic all the time, and I love it. It’s changed my game-buying (and movie-watching and music-listening) habits for the better, which of course funnels money into the pockets of deserving developers and encourages publishers to aim for critically acclaimed products. Have we gotten so jaded that we have lost sight of what a wondrous thing this is? Metacritic puts an army of critics at our fingertips. Further, consumers are not morons who can’t judge a score within a larger context. We all realize that, due to the tastes of the average professional reviewer, some games are going to be overrated and some will be underrated.

Ultimately, the argument against Metacritic seems to revolve around whether publishers should take these numbers seriously. Some contracts are even beginning to include clauses tying bonuses to Metacritic scores. Others are concerned that publishers are too obsessed with raising their Metacritic averages. Actually, let’s think about that last sentence in detail. Note that when I just wrote “others,” I was referring to journalists, not to investors. As John Riccitiello famously said, “I don’t think investors give a shit about our quality.” How bizarre is it that once the game industry starts taking journalists’ work seriously, the journalists complain about it?

I’ll give my own perspective on this issue. Over the years, I have seen many great ideas shut down because someone in charge thinks they won’t impact sales. However, when I am in an EA meeting in which we talk about the need to raise our Metacritic scores – and the concrete steps or extra development time thus required – I’ll tell you what I feel like doing. I feel like jumping for joy. How incredible is it to work for a publisher who cares about improving the quality of our games in the eyes of critics and uses an independent metric to prove it?

As for the remuneration issue, isn’t it a good thing that there is a second avenue for rewarding developers who have made a great game? Certainly, contracts are not going to stop favoring high game sales, so – hopefully – Metacritic clauses can ensure that a few developers with overlooked but highly rated games will still be compensated. Now, if a game doesn’t have high sales and also doesn’t get a good Metacritic score, well, there’s a name for that type of game, and these developers should not be protesting. Further, developers also need to stop complaining that a few specific reviews are dragging down their Metacritic scores. Besides the fact that both good and bad reviews are earned, in a world without Metacritic, one low score from GameSpot, GameSpy, 1Up, or IGN becomes a disaster. Score aggregation, by definition, protects developers from too much power being in the hands of one critic.

Journalists also need to have the guts to give games a score and stick by it. Putting a score on a review doesn’t take away the ability to add nuance to one’s criticism. My favorite music book is the Third Edition of the Rolling Stone Album Guide. As the reviews were written by just four critics, I have learned to understand the exact difference between five and four-and-a-half stars (or, for that matter, between two-and-a-half and three stars). If you are a great reviewer, the score you give a game helps me place it in context with everything else you have rated. Moreover, your score lets you contribute, via Metacritic and all the other aggregators, to the meta-critique of games on the Net. What exactly is the problem here?

Two Thoughts on GDC 2009

2009 will not go down as my favorite GDC. In many ways, this year may have been the worst of the eight I have attended. However, to paraphrase Woody Allen, even when GDC is bad, it’s still pretty incredible. The problem was not one of organization or speaker selection or much of anything else that could have been controlled by the people in charge.

Indeed, GDC 2009 was more notable for what was not said than for what was. More specifically, nothing even semi-official about the next console generation was mentioned anywhere. Even the rumor mill was pretty dry. Compare this year to, say, GDC 2004, when all three manufacturers were already beginning to jockey for position, and you’ll see a huge difference.

Furthermore, the online revolutions which have made GDC so fascinating lately (free-to-play, casual MMO’s, virtual goods, web-based gaming, social networks, etc.) are, at least from the conference’s perspective, old news now. Finally, an actual, profitable indie market is no longer a theoretical concept to be taken on faith – the success of Braid, N+, Desktop Tower Defense, Castle Crashers, and World of Goo proves the viability of micro-studios. Our industry can once again support the idiosyncratic visions of the type of single designer/programmers that served us so well in the ’80s (Bunten, Meier, Wright, Molyneux).

Clearly, these transitions are still just beginning, but there are few left who would deny that massive changes are underway. The problem for GDC, perhaps, is that with so many new avenues open, most developers are now simply focused on execution. Hopefully, we will have some fascinating post-mortems in a few years.

One final note should be made about GDC’s current format, one which I haven’t seen mentioned elsewhere. For years and years, the conventional wisdom was that the first two days of the conference were a waste of time, composed entirely of bloated tutorials that stretched single topics thin over a numbing eight hours. However, over the last three years or so, the organizers have nurtured a collection of summits – for casual games, for virtual worlds, for indies, for mobile, for AI, and so on – that are now a smorgasbord of interesting speakers and topics jammed into flexible time slots (sometimes only 30 minutes). Instead of the paltry four or five talks per day available during the main conference, one can see eight, nine, even ten presentations a day by jumping from summit to summit depending on one’s personal preferences. This mixing and matching is aided by the reduced size of the conference on those days – most of the summits were located along a single hallway in the North Hall. The growth and development of these summits has led to an interesting inversion of GDC’s traditional balance – today, the first two days of GDC are actually more interesting than the “real” Wednesday-Friday conference.

Mind the Gap

I am on a GDC panel this year on the overlaps, conflicts, and parallels between AI and game design. We’ve got a mix of designers (Alex and Josh) and AI programmers (Adam and Tara), so it should be an interesting conversation. Here’s the info:

(307) AI and Designers: Mind the Gap
Speakers: Soren Johnson (Designer & Programmer, EA Maxis), Alex Hutchinson (Creative Director, Electronic Arts Montreal), Joshua Mosqueira (Creative Director, Ubisoft Montreal), Adam Russell (Lecturer, Derby University), Tara Teich (Programmer, Double Fine)
Date/Time: Monday, March 23, 2009, 3:00pm – 4:00pm
Location (room): Room 2018, West Hall
Track: AI Summit
Format: 60-minute Panel
Experience Level: Intermediate

Session Description
Game design and AI development have always been close relatives. Indeed, defining a line that separates the two is almost impossible, as one cannot exist without the other – a feature that the AI cannot handle, for example, is worthless, and the behavior of the AI itself is core to a game’s pacing, challenge, and feel. Thus, almost every decision an AI programmer makes is essentially a gameplay decision, yet AI developers are neither hired as nor trained to be designers. On the other hand, pure designers are often at the mercy of AI programmers to turn their broad strokes concerning AI behavior into reality and have few options if the outcome is wrong. In this panel, we will explore ways to manage this gap between designers and AI programmers to help establish better practices for this important (and inevitable) collaboration.

Why Multiplayer is So Important

I was on Amazon the other day, and it struck me how well some older titles are holding their price points, especially ones with a compelling multiplayer component. These games are still making significant profits for their publishers over a year and a half after their release. Perhaps the most important reason is that gamers tend to hold onto games with fun multiplayer, denying GameStop the glut of used copies that would drive the price down across all retailers. Consider these prices:

PS3 – Call of Duty 4: $56.99 (’07) vs. Metal Gear Solid 4: $39.99 (’08)

360 – Halo 3: $36.99 (’07) vs. Mass Effect: $19.99 (’07)
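
Assuming a standard $59.99 launch price for all four titles (my assumption; actual launch MSRPs may have varied slightly), the retention gap is stark:

    # Price retention relative to an assumed $59.99 launch MSRP
    launch = 59.99
    prices = [("Call of Duty 4", 56.99), ("Metal Gear Solid 4", 39.99),
              ("Halo 3", 36.99), ("Mass Effect", 19.99)]
    for title, price in prices:
        print(f"{title}: {price / launch:.0%} of launch price")
    # Output: 95%, 67%, 62%, 33% - within each platform pair, the
    # multiplayer-driven title retains noticeably more of its value.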

Back to School

This coming Wednesday, I am going back to Stanford, not as a student but as a judge for the annual CS 248 (Introduction to Computer Graphics) Video Game Competition. This event has been run by Prof. Marc Levoy since 1999, and it is traditionally judged by industry professionals. In fact, in the inaugural year, a game Julie Chin and I submitted called GridRunner was selected as a Finalist by judges from EA. The project was a networked, multi-player light-cycles game, and the fun story about that one was that I had never actually tried a four-player game until I gave the demo for the competition. I had multi-boxed it, of course, just for sanity’s sake, but as we were a team of two, and the game was Unix-based, we didn’t have much time or reach for beta-testing. But it worked! Needless to say, I wouldn’t demo something like that nowadays. Here’s a pic for posterity; as you can see, graphics were never my strong point…

They Say Ideas Are a Dime a Dozen

Typically, when a young developer/student comes up and says s/he has a great idea for a game, the conventional wisdom is to respond by saying it’s all about execution, not the idea itself. Great game ideas are supposedly a dime a dozen, and it’s all about building a great team or learning how to iterate on feedback or having the commitment to finish a project. However, I think this response always sells short the value of pure ideas. Here is a good example of what I mean:


The Unfinished Swan – Tech Demo 9/2008 from Ian Dallas on Vimeo.

Now, the team may or may not build a good game around this concept, but I think it is nonetheless clear that the idea of exploration-via-paintball is a great one. Wish I had thought of it!

Here’s the link to their game page. Apparently, the project is being prototyped in XNA, so it’s nice to see that initiative bearing more fruit. Are they planning on releasing it as an Xbox Live Community Game? I hope so! At any rate, good luck to the team…