Playing to Lose: AI and Civilization (GDC 2008)

I gave a talk at GDC 2008 on developing the AI for the Civilization series. I highlighted the difference between “good” AI and “fun” AI and how writing AI for Civ is tricky because it sits somewhere between those two extremes. Unfortunately, the talk was not filmed, and because I’d always wanted to get it online, I went ahead and reconstructed it from the original audio and slides. If you give it a watch, let me know what you think!

One thought on “Playing to Lose: AI and Civilization (GDC 2008)”

  1. One thing I noted when you were discussing the “Good” vs. “Fun” distinction of Civ AI: I’m not fully convinced that the pass/fail Turing metric is quite right to apply to the AI, or, more accurately, that the ‘Diplomacy Problem’ actually belongs in the “balance/tactics” category rather than the Turing category. You note that the AI also roleplays and deliberately engages in sub-par decision-making without considering all of the tactics, which makes it less likely to achieve Turing *equivalence* with a human player; in my opinion, that is exactly what makes it *more* likely to pass a Turing *test* for simulating the “everyman” human.

    Part of that is the goalpost of what “playing as a human” actually means, and whether optimal play is really more “human” than sandbox play; that is, the assumption that a player will always play optimally, or to win. If you were to introduce a dozen new players to Civ, with no prior experience of the game, and politely ask them to play against each other, they would fall squarely into the “sandbox” category, choosing actions much closer to the “fun” AI’s patterns of behaviour, with varying shifts in cooperation and competition as they played, because the game would be especially *fun* for them precisely when they didn’t have the experience to be “good” in the first place.

    If a game also limited skilled players to the same canned responses the AI uses, in order to prevent them from speaking to other players out of character (which would be awful, but bear with me for the sake of example ;-)), the AI players in the more modern Civs would probably be the games’ most Turing-believable players. Human players would be more likely to resort to exploitative, technically-allowed backstabbing (like the technology-for-peace/invasion-next-turn example you mention), while the AI’s more forgiving attitudes and occasional mistakes would be perceived as human-like; the take-the-technology-and-backstab move would seem so clearly exploitative and bug-like (the very reason you wouldn’t allow the AI to do it) that it would just seem unreal. Mechanical advantage at the expense of relationships would largely be perceived as the unemotional, pragmatic, robotic behaviour of an AI, not of a human, even though it is ironically more human-like in practice. Reality is unrealistic. =)

    If players had no idea who was real and who was fake, they would probably rate the AI players as “better people” and be all the more surprised to discover that they’re not human. Only the fact that the AI players obviously have a fixed number of canned responses, compared to the human player’s ability to speak naturally, makes it clear that they are artificial.

    This actually makes me wonder how long it’ll be until machine-learning algorithms start attempting natural-language conversation and diplomacy in video games… that’ll certainly be an interesting development. Of course, non-voiced dialogue is exclusively the domain of retraux these days, so we may be stuck at an unfortunate technology-versus-expectation gap: natural-language generation might become technically feasible relatively soon, but speech synthesis isn’t quite at that level yet, and players might reject non-voiced dialogue out of hand from any studio that could afford to implement natural-language generation. So we might not see it for a long time.

    *****

    The only other comment I have on the video is that it could use some annotations stating (at least in gist) the questions you were answering at the end. The first was about the amount of CPU time available for AI decisions and the second was about rubber-banding the AI, but I might’ve missed the nuance of the questions. =)
