GD Column 7: Our Cheatin’ Hearts

The following was published in the May 2009 issue of Game Developer magazine…

The designers of Puzzle Quest have a frustrating burden to bear – everyone thinks they are a bunch of dirty cheaters. The game centers on a competitive version of Bejeweled, in which players duel with an AI to create the most “match-3” colored patterns.

The problem comes from how the pieces on the gameboard are created – when, for example, a column of three green orbs is lined up and removed from play, new pieces fall in to take their place. However, sometimes these three new pieces happen to all be of the same type, which means that a new match is made automatically, and the player scores again. The odds of such a result are low (around 2% for getting three of the same color in a row), but they are still high enough that a player will see it many times with enough games played.
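For the curious, the ~2% figure is easy to check with a quick simulation. The sketch below is a minimal Monte Carlo estimate, assuming seven piece types (an assumption on my part – with n types, three independent random pieces all match with probability (1/n)², and n = 7 gives roughly 2%):

```python
import random

# Hypothetical setup: assume 7 piece types. With n types, three independent
# random replacement pieces all match with probability (1/n)^2, and n = 7
# gives (1/7)^2 ≈ 2.04% – consistent with the ~2% figure above.
NUM_TYPES = 7

def three_piece_drop_matches(rng):
    """Deal three replacement pieces; report whether all three are identical."""
    a, b, c = (rng.randrange(NUM_TYPES) for _ in range(3))
    return a == b == c

rng = random.Random(42)
trials = 200_000
hits = sum(three_piece_drop_matches(rng) for _ in range(trials))
print(f"empirical: {hits / trials:.4f}  exact: {(1 / NUM_TYPES) ** 2:.4f}")
```

Low per-drop odds, in other words, but across thousands of drops per game and many games played, every player will see it – on both sides of the board.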

Of course, the AI is playing the same game, so the player will see this lucky match fall into the enemy’s lap as well. At this point, human psychology takes over. Because the new pieces are hidden from view, how does the player know that the computer is not conducting some funny business and giving itself some free matches?

The human mind is notoriously bad at grasping probability, so many players are convinced that the AI is cheating. The developers have pledged over and over again that everything is fair and even, but whether they like it or not, the player experience has been affected by the mere possibility of cheating.

Trust Me

Games do not start with a player’s trust – this trust needs to be earned over time. Our audience is well aware that we can make a game do whatever we want under the hood, so the transparency and consistency of a game’s rules contribute significantly to player immersion. The worst feeling for a player is when they perceive – or just suspect – that a game is breaking its own rules and treating the human unfairly.

This situation is especially challenging for designers of symmetrical games, in which the AI is trying to solve the same problems as the human. For asymmetrical games, cheating is simply bad game design – imagine the frustration that would result from enemies in Half-Life warping around the map to flank the player or guards in Thief instantly spotting a player hiding in the shadows.

However, under symmetrical conditions, artificial intelligence often needs to cheat just to be able to compete with the player. Accordingly, designers must learn which cheats feel fair to a player and which do not. As the Puzzle Quest team knows, games need to avoid situations in which players even suspect that the game is cheating them.

Cheating is not the same thing as difficulty levels – by which players ask the game to provide an extra challenge. Cheating concerns whether a game treats the player “fairly” – rewarding them for successful play and not arbitrarily punishing them just to maintain the challenge. Unfortunately, in practice, the distinction between difficulty levels and cheating is not so clear.

Show the Mechanics

Fans of racing games are quite familiar with this gray area. A common tactic employed by AI programmers to provide an appropriate level of challenge is to “rubberband” the cars together. In other words, the code ensures that if the AI cars fall too far behind the human, they will speed up. On the other hand, if the human falls behind, the AI slows down to allow the player to recover.
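In its simplest form, rubberbanding is just a speed adjustment keyed off the gap between the AI car and the human. The sketch below is a hypothetical version – the threshold and multipliers are invented for illustration:

```python
def rubberband_speed(base_speed, ai_position, player_position,
                     threshold=200.0, boost=1.15, slowdown=0.85):
    """Hypothetical rubberbanding: nudge an AI car's top speed toward the
    player based on the gap between them (positions in track distance).
    All tuning numbers here are illustrative, not from any shipped game."""
    gap = player_position - ai_position
    if gap > threshold:        # AI far behind the human: speed it up
        return base_speed * boost
    if gap < -threshold:       # AI far ahead: slow down so the player can recover
        return base_speed * slowdown
    return base_speed          # close race: no adjustment
```

The giveaway to players is exactly this shape: a car that was trailing badly suddenly closes the gap at an impossible pace, then politely stops gaining once the race is close again.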

The problem is that this tactic is often obvious to players, which either dulls their sense of accomplishment when they win or raises suspicions when they lose. Ironically, games which turn rubberbanding into an explicit game mechanic often become more palatable to their players.

For example, the Mario Kart series has long disproportionately divvied out rewards from the mystery item boxes sprinkled around the tracks relative to the riders’ current standings. While the first-place racer might receive a shell only useful for attacking other lead cars, players in the rear might get a Bullet Bill which automatically rockets them to the middle of the pack.
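A standing-based item table like this can be sketched as a set of weights per rank bracket. The item names and numbers below are invented for illustration – they are not the actual Mario Kart tables:

```python
import random

# Hypothetical Mario Kart-style item table: the further back the racer,
# the more the odds shift from weak items toward powerful catch-up items.
# All weights here are made up for illustration.
ITEM_WEIGHTS = {
    "front":  {"green_shell": 70, "mushroom": 25, "star": 5,  "bullet": 0},
    "middle": {"green_shell": 30, "mushroom": 40, "star": 20, "bullet": 10},
    "back":   {"green_shell": 5,  "mushroom": 25, "star": 30, "bullet": 40},
}

def roll_item(rank, field_size, rng=random):
    """Pick an item box reward weighted by the racer's current standing."""
    third = field_size / 3
    bracket = "front" if rank <= third else ("middle" if rank <= 2 * third else "back")
    weights = ITEM_WEIGHTS[bracket]
    return rng.choices(list(weights), weights=list(weights.values()))[0]
```

Note the key property: the leader's weight for the catch-up item is zero, and the table itself is public knowledge, so nobody feels cheated when last place gets a rocket.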

These self-balancing mechanics are common to board games – think of the robber blocking the leader’s tiles in Settlers of Catan – and they don’t feel like cheating because the game is so explicit about how the system works. Thus, players understand that the bonuses available to the AI will also be available to themselves if they fall behind. With cheating, perception becomes reality, so transparency is the antidote to suspicion and distrust.

Cheating in Civilization

Sometimes, however, hidden bonuses and cheats are still necessary to provide the right challenge for the player. The Civilization series provides plenty of examples of how this process can go awry and drive players crazy with poorly-handled cheating.

Because the game is turn-based, the developers could not rely on a human’s natural limitations within a real-time environment. Instead, Civilization gives out a progressive series of unit, building, and technology discounts for the AI as the levels increase (as well as penalties at the lowest levels). Because of their incremental nature, these cheats have never earned much ire from the players. Their effect is too small to notice on a turn-by-turn basis, and players who pry into the details usually understand why these bonuses are necessary.
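Incremental cheats of this sort usually boil down to a table of cost multipliers keyed to the difficulty level. The sketch below is illustrative – the level names echo Civilization’s, but the percentages are invented, not the shipped values:

```python
# Hypothetical Civ-style difficulty scaling: the AI pays progressively less
# for units, buildings, and research as the difficulty rises, while the
# lowest levels penalize the AI instead. Percentages are illustrative only.
AI_COST_MULTIPLIER = {
    "settler":   1.20,  # easiest: the AI pays a 20% premium
    "chieftain": 1.10,
    "prince":    1.00,  # the "fair" level: human and AI pay the same
    "king":      0.90,
    "emperor":   0.80,
    "deity":     0.70,  # hardest: the AI gets a 30% discount
}

def ai_cost(base_cost, difficulty):
    """Effective production cost for the AI at a given difficulty level."""
    return round(base_cost * AI_COST_MULTIPLIER[difficulty])
```

A 100-hammer unit costs the AI 100 at the fair level but only 70 at the top level – a large edge in aggregate, yet invisible on any single turn, which is exactly why it rarely provokes accusations of cheating.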

On the other hand, many other cheats have struck players as unfair. In the original version of the game, the AI could create units for free under the fog-of-war, a situation which clearly showed how the computer was playing by different rules from the human. Also, AI civilizations would occasionally receive free “instant” Wonders, often robbing a player of many turns of work. While an AI beating the human to a Wonder using the slow drip of steady bonuses was acceptable, granting it the Wonder instantly felt entirely different.

How a cheat will be perceived has much more to do with the inconsistencies and irrationality of human psychology than any attempt to measure up to some objective standard of fairness. Indeed, while subtle gameplay bonuses might not bother a player, other, legitimate strategies could drive players crazy, even if they know that a fellow human might pursue the exact same path as the AI has.

For example, in the original Civ, the AI was hard-wired to declare war on the human if the player was leading the game by 1900 AD. This strategy struck players as unfair – the AI seemed to be ganging up on the human – even though most of them would have followed the same strategy without a second thought in a multiplayer game.

In response, by the time of Civ3, we guaranteed that the AI did not consider whether an opponent was controlled by a human or a computer when conducting diplomacy. However, these changes still did not inoculate us against charges of unfairness. Civ3 allowed open trading – such as technology for maps or resources for gold. An enterprising human player would learn when to demand full price for their technologies and when to take whatever they could get – from a weak opponent with very little wealth, for example.

We adapted the AI to follow this same tactic, so that it would be able to take whatever gold it could from a backwards neighbor. To the players, however, the AIs appeared to be once again ganging up against the human. Because the AI civs were fairly liberal with trading, they all tended to be around the same technology level, which led players to believe that the AIs were forming their own non-human trading cartel, spreading technologies around like candy (or, in the parlance of our forums, “tech-whoring”).

Perception is Reality

Once again, perception is reality. The question is not whether the AI is playing “fairly” but what the game experience is for the player. If questions of fairness keep creeping into the player’s mind, the game needs to be changed. Thus, for Civ4, we intentionally crippled the AIs’ ability to trade with one another to ensure that a similar situation did not develop.

The computer is still a black box to players, so single events based on hidden mechanics need to be handled with great care. Sports game developers, for example, need to be very sensitive to how often a random event hurts the player, such as a fumble, steal, or ill-timed error. The dangers of perceived unfairness are simply too great.

Returning to our original example, the developers of Puzzle Quest actually should have considered cheating, but only in favor of the player. The game code could ensure that fortunate drops only happen for the human and never for the AI. The ultimate balance of the game could still be maintained by tweaking the power of the AI’s equipment and spells – changes which appear “fair” because they are explained explicitly to the player. The overall experience would thus be improved by the removal of these negative outliers that only serve to stir up suspicion. When the question is one of fairness, the player is always right.
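Mechanically, such player-favoring cheating is simple: deal the replacement pieces, and if they would hand the AI a free match, reroll them. The human keeps every lucky drop. A minimal sketch, assuming seven piece types and three-piece drops (both assumptions on my part):

```python
import random

NUM_TYPES = 7  # hypothetical number of piece types

def deal_replacements(rng, for_human):
    """Hypothetical player-favoring drop logic: deal three replacement
    pieces, but reroll any 'free match' (three identical pieces) when the
    pieces are falling for the AI. The human keeps every lucky drop."""
    while True:
        pieces = [rng.randrange(NUM_TYPES) for _ in range(3)]
        lucky = pieces[0] == pieces[1] == pieces[2]
        if for_human or not lucky:
            return pieces
```

Because the rerolls are hidden behind the fog of falling pieces, the player never sees the filter – they only stop seeing the AI get “suspiciously” lucky.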

15 thoughts on “GD Column 7: Our Cheatin’ Hearts”

  1. It’s always interesting to observe what players consider “fair” and what they consider “cheating.” In particular, computers are able to do things that human players simply cannot, such as micro-manage multiple units simultaneously in an RTS. Is it “fair” to implement an AI behavior that takes advantage of this?

    As a player, one of my pet peeves is the tendency for developers to substitute mathematical advantage for intelligence. My main complaint is that those mathematical advantages break my expectations of how the game rules work. That is, an AI player could defeat me using a strategy that would be unfeasible for a human opponent.

  2. Talking of the appearance of “fair” in Civ4, did the AI players know where advanced resources would be early on? A few times when I was playing the AI seemed to build cities in places that just didn’t make sense at the time (once in a tiny hole in my territory), but turned out to have oil or uranium deposits close by.

    Again, it could have just been luck, but it didn’t seem “fair” to me at the time.

  3. Soren, I’ve long suspected that Civ3 uses “rubberbanding”. As you said, humans are terrible at understanding probability. But it seems to me that when I’m steamrolling over an AI civilization, the battles go in their favor far more often than they should. Sometimes I even think I can detect a specific turn when the AI flips this switch. It’s as if spearman-defeats-tank has become an AI tactic. Is this actually happening, or am I just paranoid?

  4. Interesting article; and FWIW I think Civ IV did a pretty good job on this.

    Also… I think you probably mean ‘Our Cheatin’ Hearts’ in the title there.

  5. Interesting article. Civ usually gets away with cheating because the manual was often explicit about most of the examples, and it did have a lot to deal with. It is frustrating to be confronted with cheating you did not expect when it breaks strategies. In StarCraft skirmish mode, the computer doesn’t actually care much about how many resources it can acquire, as far as I can tell, so tactics that would be useful against humans become useless against the AI.

  6. I’ve lost so many fights in Civ4 of those 40 vs. 2.5, 99.something% probability of winning, that if the AI isn’t cheating, the game is certainly doing a very bad job of explaining the odds.

    It certainly happens to me more frequently than 1 time out of 500. And almost always against the AI.

    The game is great, but that made me crazy. Now I also play PBEM with human opponents, and I almost never see that happening.

    Oh, and yes, I read how the combat works.

  7. BTW, in a PBEM game I had a puny barbarian warrior conquer and RAZE my capital, defended by a longbow with strength II, 100% culture and walls.

    That day I almost BURNED the game. It certainly took me a long time to play it again. Since then I learned that the only way against barbarians is to put more units in whatever you’re defending than they can kill in a turn, because they’re certainly going to kill some, whatever the odds!

  8. Pingback: Cheating and Perception | משחק בתיאוריה

  9. Speaking of cheating FOR the player, it’s interesting analysing the drops in Bejeweled Blitz (Facebook version) where you will NEVER have a lockout, whereas in Bejeweled 2 that is the game-ending criterion. My guess is that the harshness of a lockout from purely random drops would grate in the casual minute-long gamespace, whereas the same randomness generating multiple combos (and ultimately a large percentage of the player’s score) is somehow attributed to the player.

  10. Another problem would be the lottery effect. If you have several million people play a game, it’s extremely likely that some people are going to experience an extremely unlikely event. I think people equate ‘very very unlikely’ with ‘impossible’, and when they see the computer beat them at a million-to-one odds, conclude that the computer is cheating.

  11. “the developers of Puzzle Quest actually should have considered cheating, but only in favor of the player. The game code could ensure that fortunate drops only happen for the human and never for the AI.”

    They actually tried, from http://fidgit.com/archives/2009/02/puzzle_quest_galactrix_creator.php:
    “We actually looked at anti-cheating, at having the game look ahead to see if mines were going to drop in, and then changing the gems that were going to drop in to actually reverse cheat, to cheat in favor of the humans. But we found that it was better in the end to remove certain annoyances, things like four of a kind giving extra turns. We removed some of those systems, we took the annoyances away. I think that’s about right. There’s still going to be people who think it cheats but I can categorically say it does not.”

  12. My perception of fairness in Puzzle Quest was heavily biased by the rate at which the computer makes its moves. When random chance events give the computer a good setup it can exploit, you see these events play out in rapid succession, and it really stands out in your mind because the events are so compressed. It makes it 100% clear how much of an advantage the computer really has over you, both in analyzing the current best moves and in creating setups that will benefit it on the next move. So I initially felt like the computer was cheating but then decided that, more accurately, the computer had an advantage that I did not possess. In either case, it’s kind of irrelevant, because it tainted my experience with the game quite a lot. It just wasn’t fun. That’s why I quit playing the game and why I have no interest in playing any sequels they have made. Especially if the developer is going to make statements like “I can categorically say it does not [cheat]”, it suggests to me that these people don’t have much of an idea what makes something fun. Make it feel like you are playing against a human and you might be onto something, guys…

  13. Pingback: When losing in games is NOT fun « Matchsticks for my Eyes

  14. Pingback: Translated article: On AI Cheating | スパ帝国
