White Paper Games on Developing Ambitious Dynamic AI For The Occupation

Guerric Haché | May 23, 2017

Artificial intelligence plays many different roles in video games, from animating enemies in shooters and RPGs to holding full-blown conversations in experimental indie titles like Event[0] and Façade – games are a uniquely tangible and quotidian application of AI. But while most games are content to limit their artificial intelligence to path-finding and logistical decision-making, a few push AI design further, trying to craft believable characters who don’t adhere to a fixed script. One of these is White Paper Games’ upcoming The Occupation, which will explore the political aftermath of a terrorist attack in 1980s Britain.

To get a better sense of what it’s like to develop this kind of AI, we sat down for a chat with two of White Paper’s developers: co-founder and game/narrative designer Pete Bottomley, and game designer and QA specialist Jonny Pickton. The following interview has been edited for length and clarity.


GAMEUMENTARY: To start off, what made you want to explore dynamic AI in The Occupation? Have any of you guys personally worked with AI before?

Jonny Pickton: None of us have explicitly worked with AI like this before. I’ve done some little prototypes in Unreal using out-of-the-box tools like NavMesh, for the most part, but certainly no large-scale projects like this.

Pete Bottomley: Part of the motivation was that everybody on the team wanted to push their skills. After Ether One, especially on the art side, the guys wanted to introduce 3D characters into the world. But in the indie narrative first-person exploration genre, that’s obviously an incredibly tough thing to tackle, which is why games like Gone Home and Ether One tend not to have them. So we needed to figure out how we wanted to tackle characters, and doing something different with AI seemed like a good challenge. Everyone had something they wanted to work on – the artists had goals with an art style and with characters, our sound designer wanted to explore certain things, and for the designers and programmers, AI was one of the things we were interested in.

When it comes to pre-scripted AI, there’s always something that feels a little bit off to me. It’s much more interesting when characters have an actual place in a world and are going about their day, so when it came to implementing characters into the game, dynamic AI seemed like the right way to go. Elizabeth from BioShock Infinite was a good starting point, especially since we’re using the same tech Irrational did. They did a few different GDC technical talks around 2014, which was when we were conceptualizing The Occupation, so we pulled those apart and just went from there.

We also gave ourselves a pre-production phase, which we hadn’t really had on Ether One, and we planned things out a bit more. Luckily, we had already made the transition from UE3 to UE4 when we ported Ether to PS4, which was a kind of trial by fire that helped us learn how to use Unreal 4. That was incredibly useful, because UE4 comes with a lot of designer tools. Jonny and I don’t know C++, but we were able to prototype all the AI without writing code – it’s all visual scripting. From there on, it was just a lot of prototyping.

GAMEUMENTARY: You bring up an interesting point – what kind of support does Unreal offer in terms of AI? Has it been very helpful?

Pickton: Yeah, massively so. Compared to other engines, NavMesh support, which gets characters moving through the world believably, comes built into Unreal for free. That was one of the biggest concerns for us, but with Unreal we didn’t have to approach that technical hurdle ourselves.

So we mainly implement behaviors instead of solving underlying technical problems ourselves. Behavior trees are built into Unreal as well, so that helps. It becomes about things like scripting events, stats, and how characters behave in the world. Unreal’s tech really helps us focus on our own design goals.
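For readers unfamiliar with the technique: a behavior tree is a hierarchy of composite nodes – selectors that try their children until one succeeds, sequences that run their children until one fails – with condition and action tasks at the leaves. The sketch below is an editor’s standalone illustration of that pattern in plain C++; it is not Unreal’s UBehaviorTree API or White Paper’s actual code, and the NPC logic is invented.

```cpp
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

// Minimal behavior-tree sketch: a Selector tries children until one
// succeeds; a Sequence runs children until one fails.
enum class Status { Success, Failure };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

struct Leaf : Node {
    std::function<Status()> task;
    explicit Leaf(std::function<Status()> t) : task(std::move(t)) {}
    Status tick() override { return task(); }
};

struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};

struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children)
            if (c->tick() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
};

int main() {
    bool thirsty = true;  // hypothetical NPC state

    Selector root;

    // Branch 1: satisfy thirst if needed (condition, then action).
    auto drink = std::make_unique<Sequence>();
    drink->children.push_back(std::make_unique<Leaf>([&] {
        return thirsty ? Status::Success : Status::Failure;
    }));
    drink->children.push_back(std::make_unique<Leaf>([&] {
        std::cout << "Walk to water cooler and drink\n";
        thirsty = false;
        return Status::Success;
    }));
    root.children.push_back(std::move(drink));

    // Branch 2: otherwise, carry on with the scheduled job.
    root.children.push_back(std::make_unique<Leaf>([] {
        std::cout << "Continue scheduled work\n";
        return Status::Success;
    }));

    root.tick();  // takes the drink branch
    root.tick();  // thirst satisfied, falls through to work
}
```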

Bottomley: Because we only have one programmer on the team, we don’t like to do deep engine changes. There are all these clever people at Epic making this, so let’s just use what they’re doing.

There have been a few cases where we wanted a bit more custom behavior, though, especially with locomotion. As Jonny mentioned, the NavMesh stuff does come with Unreal for free, but Unreal isn’t necessarily focused on the kind of believable, everyday AI we want to create – it’s faster-moving, direct, snappy, more like enemy AI. NavMesh will draw straight navigation paths, whereas you want a bit of curvature in those paths.
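One common way to add that curvature is to post-process the straight path with a corner-cutting scheme such as Chaikin’s algorithm, which replaces each sharp corner with two nearby points. The sketch below is an editor’s illustration of that general technique, not necessarily the approach White Paper or Unreal actually use.

```cpp
#include <iostream>
#include <vector>

struct Vec2 { float x, y; };

// One round of Chaikin corner-cutting: each segment (P, Q) is replaced
// by points at 25% and 75% along it, which rounds off sharp corners.
std::vector<Vec2> chaikin(const std::vector<Vec2>& path) {
    if (path.size() < 3) return path;
    std::vector<Vec2> out;
    out.push_back(path.front());  // keep the exact start point
    for (size_t i = 0; i + 1 < path.size(); ++i) {
        const Vec2& p = path[i];
        const Vec2& q = path[i + 1];
        out.push_back({0.75f * p.x + 0.25f * q.x, 0.75f * p.y + 0.25f * q.y});
        out.push_back({0.25f * p.x + 0.75f * q.x, 0.25f * p.y + 0.75f * q.y});
    }
    out.push_back(path.back());   // keep the exact goal point
    return out;
}

int main() {
    // A right-angle corner, as a NavMesh path might return it.
    std::vector<Vec2> path{{0, 0}, {10, 0}, {10, 10}};
    for (const auto& p : chaikin(path))
        std::cout << "(" << p.x << ", " << p.y << ")\n";
}
```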

We also needed to layer our own animations on top of Unreal’s, and there’s a lot of math involved in that. For instance, when it comes to doors, in many video games characters will kind of move their arm up and the door will swing open. We, whether stupidly or brilliantly, make our characters actually hold the handle, open the door, and then close it behind themselves again. Making an AI character know where a door handle is, then reach out and use it properly, can be quite tricky. They’re only minor details, but we hope they’ll really make the AI more believable.

Pickton: The majority of that we managed to do through animation, which we add on top of Unreal’s systems rather than changing what’s going on under the hood. In Unreal, character movement is quite linear and characters rotate in an unbelievable way, so we set up the mesh and the animations so it all feels more realistic and grounded in the world.

GAMEUMENTARY: So when you were designing this AI, did you work off of target scenarios – types of situations you hoped these dynamic systems could create?

Bottomley: That’s kind of what we’re working on at the moment. You don’t want to plan specific end scenarios so much as you want to create independent systems that can talk to each other. So you might want a fire system for how things burn, and a water system, and then an intended result of that is that if something’s burning, you can take that water and put it out.

Because of the setting and the location, our systems needed to represent the basic interactions people have in everyday life. For instance, a room might have a security pad that requires a keycard for entry, so you could find that keycard and enter a code to access files on a computer inside. But you could also take a more destructive approach: flip a fuse in the fuse box to power down the whole room. That takes out the keycard reader too, so the door just opens – but of course the computer has also been turned off, so you need to find another way to activate it before you can get at those files.
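In the abstract, the dependency Bottomley describes is a shared power circuit: the fuse feeds both the lock and the computer, so cutting it opens one path and closes another. A toy sketch of that idea, with every name invented for illustration:

```cpp
#include <iostream>

// One fuse feeds every device in the room: the keycard reader holds the
// door locked only while powered, and the computer only works while
// powered. Invented names, not the game's actual code.
struct Room {
    bool powered = true;
    bool doorLocked() const { return powered; }      // reader needs power to hold the lock
    bool computerUsable() const { return powered; }  // files need power to be read
};

int main() {
    std::cout << std::boolalpha;
    Room office;
    std::cout << "locked: " << office.doorLocked()
              << ", computer: " << office.computerUsable() << "\n";

    office.powered = false;  // the player flips the fuse
    std::cout << "locked: " << office.doorLocked()
              << ", computer: " << office.computerUsable() << "\n";
    // The door now opens freely, but the files are unreachable until
    // power is restored some other way.
}
```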

Another system in the game involves a series of vacuum tubes that could be used to move objects around obstacles or checkpoints.

GAMEUMENTARY: When your AI engages in behaviors and reacts to its motivations, is it completely deterministic, or are there random elements to that?

Bottomley: We wanted a balance between unpredictability and a kind of order where it looks like they’re doing things purposefully. The way the game is set up, everyone has fixed jobs we know they’ll be doing throughout the day – they’re not randomly generated. For example, a graphic designer might be drawing designs and then getting them printed. But while they have a core arc in the world, we also wanted more emergent behaviors that cause them to go off their path, and maybe be manipulated by players. So they might get thirsty, and they’ll go get coffee, and eventually they’ll have drunk so much they’ll need to go to the bathroom.

So while we know that at 9:30 this AI should be roughly around here, we can’t actually know in advance when they’ll be in the bathroom or out for coffee, or even whether some alarm has caused them to move – that’s the more unpredictable side. But there’s still enough predictability that players can use it to plan their approach.
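A needs system like this is often implemented as a handful of stats that accumulate over time and interrupt the scheduled task when they cross a threshold. Here’s a toy sketch of the coffee-to-bathroom chain; the rates and thresholds are made up for illustration.

```cpp
#include <iostream>
#include <string>

// Needs layered over a fixed schedule: stats accumulate each tick, and
// crossing a threshold interrupts the scheduled task.
struct Npc {
    float thirst = 0.0f;
    float bladder = 0.0f;

    std::string update(float minutes) {
        thirst += 0.02f * minutes;
        if (bladder > 1.0f) { bladder = 0.0f; return "go to bathroom"; }
        if (thirst > 1.0f) {
            thirst = 0.0f;
            bladder += 0.4f;  // each coffee fills the bladder a bit
            return "get coffee";
        }
        return "continue scheduled job";
    }
};

int main() {
    Npc designer;
    for (int t = 0; t < 8; ++t)  // eight 30-minute ticks
        std::cout << t * 30 << " min: " << designer.update(30) << "\n";
}
```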

GAMEUMENTARY: Can the player interact with all the objects the AI can interact with – say, emptying their water supply to send them further away when they’re thirsty or drugging it somehow?

Pickton: Yeah, the majority of the mechanics involve interacting with the environment. We always make sure that the AI will use everything in the same way the player can, and vice versa. So with the water coolers, you can interfere with all the water coolers the AI drinks from. If you empty the cooler, they can’t get a drink there, and that will have a knock-on effect that further alters their path through the world.

There are limits of course – you can’t drug the water supply! But the way you interact with everything in the environment is the same way the AI can. We hope that’s where a lot of the systemic interactions with the AI will come from.
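That consistency boils down to player and AI sharing a single interaction code path on each object, so sabotage propagates to AI behavior automatically. A minimal sketch of the idea, with all structure and names invented for illustration:

```cpp
#include <iostream>

// Player and NPC go through the same interaction interface, so anything
// one can do to an object, the other experiences too.
struct WaterCooler {
    int servings = 2;
    bool dispense() {                 // the single code path both sides use
        if (servings > 0) { --servings; return true; }
        return false;
    }
    void empty() { servings = 0; }    // the player's sabotage option
};

struct Agent {
    const char* name;
    void drinkFrom(WaterCooler& cooler) {
        if (cooler.dispense())
            std::cout << name << " gets a drink\n";
        else
            std::cout << name << " finds it empty and walks to another cooler\n";
    }
};

int main() {
    WaterCooler cooler;
    Agent player{"Player"}, npc{"NPC"};

    player.drinkFrom(cooler);  // works for the player...
    cooler.empty();            // ...until the player empties it
    npc.drinkFrom(cooler);     // the NPC hits the same empty state and reroutes
}
```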

GAMEUMENTARY: That’s definitely a great approach – it’s always a bit weird and frustrating when the AI and the player play by different rules.

Pickton: Exactly.

Bottomley: The term we use in our meetings every day is consistent behavior. If something works one way in a given situation, why shouldn’t it do the same thing somewhere else? If the player sees the AI doing something they can’t do themselves, we either implement that or we find a way to design around it. It can take a lot longer to design around it, but either way it really helps.

Pickton: We do encounter edge cases during development – for example, we program the AI to use the water cooler, but at first we hadn’t considered that it might have a puddle in front of it because the player’s emptied it, so we need to design behavior for that. But even within this open world, there’s a ceiling to what the player can do within a given system, so we can work around that.

GAMEUMENTARY: With these other characters running around the world at the same time as the player, how does that interaction play out? Can you talk to them, or interact with them directly, in a way that the AI responds to?

Bottomley: When it comes to dialog, there’s a hard balance to creating a good first-person voice: the character should say things you’d want to say, but without so much distinct personality that players stop identifying with the character and can’t project their own personality through it. But there are a few ways the player can interact with the AI. Since we already simulate stats like hunger, thirst, and so on, we also added relationship stats. The idea is that if an NPC is walking past you in a corridor, and they feel neutral towards you, they might say hi. You can choose to say hi back, or just ignore them, but if you say hi they might be a bit more friendly next time.

We had a playable demo recently at Rezzed featuring a starting area of the game, where your briefcase needs to get checked by the security guard. So you can press a buzzer to get his attention, and he’ll respond by being friendly, and maybe talking to you about a painting he likes nearby while he’s checking your briefcase. But if you’re the kind of player who keeps pressing the buzzer over and over again, he won’t be friendly, since you’re being a bit of a jerk. Later on, if you’re acting suspicious, you might be able to get away with a bit more if he likes you than if you’ve been rude to him. We haven’t explored the limits of what happens when you’re super friendly or super negative to the AI, but we do want to push in that direction.

NPC to NPC, because they work together all day and have known each other for years, it’s very hard to sway their feelings about each other. Unless they’ve done something terrible in the game story, they generally remain pretty stable. This makes things a lot easier – we can really focus on who the player is and how they’re interacting with the world.
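A relationship stat like the one described above can be as simple as a number that small interactions nudge up or down, with the NPC’s response chosen by threshold – something along these lines (the thresholds and lines here are invented):

```cpp
#include <iostream>

// A relationship stat: small interactions nudge an NPC's disposition,
// and the greeting they choose reflects the current value.
struct Relationship {
    int warmth = 0;  // negative = hostile, positive = friendly

    void playerGreeted() { ++warmth; }
    void playerIgnored() { --warmth; }

    const char* greeting() const {
        if (warmth > 1)  return "Morning! Good to see you again.";
        if (warmth < -1) return "...";  // pointedly says nothing
        return "Hi.";
    }
};

int main() {
    Relationship guard;
    std::cout << guard.greeting() << "\n";  // neutral "Hi."

    guard.playerGreeted();
    guard.playerGreeted();
    std::cout << guard.greeting() << "\n";  // warmer after two friendly passes
}
```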

GAMEUMENTARY: AI can sometimes be hard for players to read, like how it can sometimes be hard to understand why your allies in Civilization suddenly hate you. Since AI is really important in this game, how are you designing for transparency and making sure the player has an understanding of what the AI is planning?

Pickton: Yeah, the way the AI telegraphs to the player is really important. This is a conversation we’re still having, in terms of how to show the AI’s intent in a readable way. One feature we have is AI speaking to themselves or to each other, talking about what they’re thinking or what they’re planning to do. So security guards might say to each other, “Oh, we need to go investigate this room.” That helps give the player an understanding.

There are a few other things as well – every AI in the game has their real task schedule posted in the world somewhere. So the janitor might have a list of the rooms he needs to clean, and the player can find that list in the world and use that to plan around his route. That’s on top of the more fundamental things – as we said earlier, every AI has a specific job, so the player can infer from what a character’s job is where they might be likely to go throughout the course of the day.
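Intent barks like the ones Pickton describes are commonly implemented as a mapping from the AI’s current goal to a spoken line, emitted whenever the goal changes. An illustrative sketch – the goals and lines here are invented, not taken from the game:

```cpp
#include <iostream>
#include <string>

// Whenever the AI commits to a new goal, it emits a line so the player
// can read its plan.
enum class Intent { Patrol, Investigate, Escort };

std::string bark(Intent i) {
    switch (i) {
        case Intent::Patrol:      return "All quiet on this floor.";
        case Intent::Investigate: return "We need to go check that room.";
        case Intent::Escort:      return "Come with me, please.";
    }
    return "";
}

int main() {
    Intent current = Intent::Patrol;
    std::cout << bark(current) << "\n";

    current = Intent::Investigate;       // an alarm pulls the guard off patrol
    std::cout << bark(current) << "\n";  // the bark telegraphs the new goal
}
```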

Bottomley: I think the main thing is telegraphing their behavior. You don’t want the AI to be too punishing, or too smart; Sid Meier talks a lot about this with regards to Civilization’s AI. One of the big issues for us is that we don’t have set, optimized patrol routes for the AI, so you can’t predict AI paths in a scripted way. We want actions to have believable consequences, including changing how the AI moves and acts, and since that makes planning harder, the AI really needs to telegraph its intentions to you in response to your actions.

In a lot of games with stealthy elements, it’s difficult to decide what happens when you get caught by the AI. Do you get killed and have to restart the mission? Do you just reload? Do you evade and hide until the AI just forgets? This is where it can get extra frustrating when the AI doesn’t telegraph well, but we don’t deal with that in the same way, since this is a real world setting. What would happen if you actually snuck into a public library when you shouldn’t? The security guard might tell you off and escort you out of the area, or escort you to the security office and have a chat with you, and keep a closer eye on you going forward.

So that’s what happens in The Occupation, and the world keeps going in the background. So you’ve lost a bit of time, and since the game is progressing in real time, that’s a penalty of its own that feels natural. But even if you misjudge the AI, you’re not losing a whole mission or reloading, it’s more about having the AI be disappointed or suspicious of you and reacting accordingly.

Pickton: That was a really good thing when we had the playable at Rezzed, actually; a lot of players were quite surprised. Normally, if you get caught by the AI it’s demo over, but they got told off and could keep playing, though they’d lost some time and the security guard was now irreparably angry with them. It’s not a very technical thing, but we think it’s a more believable kind of consequence that makes sense in the context of the game.

The game’s world progresses in real-time, so you’ll need to spend your four hours wisely.

GAMEUMENTARY: And when players first encountered the game at Rezzed, did they have particular difficulties with anything, or were there unusual outcomes you hadn’t expected to see?

Pickton: It really highlighted a lot of the usability features, and gave us an insight into how people interpreted our mechanics. It was really interesting to see people play it and work with the minutiae of the controls, and to see how the mechanics we had applied to the AI characters worked in the context of the players’ actions.

It was also really interesting to see how they approached the world – it’s not a linear game, so we genuinely had no idea where players would go, where they would end up, and what they would do in the time they were given. The routes people took through the level were sometimes new to us, and the areas they preferred to explore weren’t something we could plan for ahead of time – we had just been trying to make a cohesive, believable world as a whole.

Bottomley: We were definitely a bit worried about the openness of the environment; you don’t want your players to wander off into the wrong area and feel frustrated or stuck. You want them to encounter interesting interactions in the world. So you want them to figure out they can peek through closed blinds to see through windows, and to find areas with audio logs they can listen to. We also intentionally started the demo at a point that should be 30-40 minutes into the game. We wanted to identify what mechanics people struggled with the most, the kind of frustrating interactions that people were having in Ether, for example, so we could go back afterwards and design the opening sequence to help resolve those.

Two mechanics in particular stuck out during the playtest. The first was the player’s briefcase; it’s a physical thing in the world that you can lay down and open up, placing things inside it and removing things manually. It’s a physical inventory, it can be left behind in places, it can be searched by the AI. We want to avoid on-screen UI as much as possible, but there’s a reason people use menu-based inventories. They’re easier to navigate and interact with, so getting the control scheme for our physical inventory streamlined was very important, and a lot of feedback from Rezzed really helped.

The other mechanic was a “press-and-hold.” You might remember old vending machines where you put a coin on a tray and pushed it into the machine to get stuff out. We have a bunch of those where you push down and pull, using the directional keys. It was a bit weird to tutorialize these unique mechanics – normally you’d just click and the coin would go in, but we want a more manual process, a more physical interaction with the world. You don’t want it to be unintuitive, but you want it to be satisfying to interact with, and that creates a bit more design complexity.
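A press-and-hold gesture like the coin tray can be modeled as a small state machine that only completes if the player performs the directional inputs in order while continuing to hold. A toy sketch, with the key bindings invented for illustration:

```cpp
#include <iostream>

// The coin goes in only if the player pushes down and then pulls back
// while still holding; releasing at any point resets the gesture.
enum class Phase { Idle, PushedDown, Done };

struct CoinTray {
    Phase phase = Phase::Idle;

    void input(char key, bool holding) {
        if (!holding) { phase = Phase::Idle; return; }        // release resets
        if (phase == Phase::Idle && key == 's')               // 's' = push down
            phase = Phase::PushedDown;
        else if (phase == Phase::PushedDown && key == 'w') {  // 'w' = pull back
            phase = Phase::Done;
            std::cout << "Coin accepted\n";
        }
    }
};

int main() {
    CoinTray tray;
    tray.input('s', true);  // push the tray in...
    tray.input('w', true);  // ...then pull back while still holding
}
```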

GAMEUMENTARY: Did you encounter any strange bugs or behavior during development? You mentioned earlier that characters would wander off onto the wrong floors, for instance.

Pickton: Quite a few with the guard character, actually. He’s obviously the most reactive to the player – if you’re doing something you shouldn’t, he’s going to follow you around or scowl at you and so on. So there have been times we’ve caught him doing wacky things like not seeing you properly, or chasing you down paths we didn’t think he would ever take.

One of the funniest ones we had was in the demo, where players had to go into this linear area to acquire a certain device. You need that item to progress, so the guard was never supposed to be able to enter that room. But for some reason, he was able to sense you entering anyway, and he would immediately barge in and tell you to leave. Somehow through his emergent AI behaviors he found a way around his restrictions, and got in there to do his job!

GAMEUMENTARY: Sounds to me like that kind of bug is a sign you’re doing interesting things!

Pickton: Hopefully! Yeah, they’ve become self-aware. Soon they’ll take over the world.

Bottomley: It’s that thing where you’ve got all these systems that are running constantly in the world. Tutorializing that is very difficult, because if we funnel you down a linear area we need to block off certain systems to make sure things don’t get too crazy. But if the player finds a unique way around it that we didn’t anticipate, they might enter the open world and the AI will be inactive, waiting for some trigger in the tutorial.

You want the player to feel smart, and to be able to think back on the game and approach it in a new way that has interesting consequences, and that requires these systems always be running everywhere. We want to hide scripted elements as much as possible, and use them in as few places as possible, to make sure the player never sees them in that way. For us, it breaks the game if players become aware of specific triggers in the world and things that deactivate or activate the AI. It shouldn’t feel so clean-cut.

Pickton: We can get around that for the most part, since the AI acts irrespective of the player. If the player isn’t even playing the game, the AI will go about their whole day’s work and go home in the evening. They’re never waiting for you to do anything – you’re interrupting them.

Your briefcase will be searched and your politeness will be tested.

GAMEUMENTARY: So the player could in theory just sit there and wait to see what happens?

Bottomley: Yeah, and we’ve got some small touches in the world that allow and support that. The player can just sit down outside, watch the wind blowing through the trees – that sort of thing. We kind of want the player to stop and wonder at the fact that there’s a whole world functioning around them. And because of the real-time element, it could make for an interesting choice of whether to just sit there taking the world in.

So if you sit down for a while, the AI will recognize that, and an old man might come sit next to you and start telling you stories about his past, or there are all these notes you can read through as the world progresses. There are a lot of these narrative touches that the player might miss out on if they’re always acting, so we hope that a player might decide to start a new game plus, then come back and try to seek out these extra narrative touches. We’re hoping to create a whole, rich world where you can go anywhere, but since you only have four hours, there’s an interesting choice of whether to stand there and take it in or get moving.

GAMEUMENTARY: Thanks for taking the time to answer all these questions! I’d just like to ask one last broader question: It seems that other game developers are slowly trying to implement more dynamic AI in their games; are there any other projects out there right now that you find interesting in that regard?

Pickton: We Happy Few is interesting in this regard. The way they’ve been developing procedural systems and different AI roles, like civilians versus guards, is reminiscent of what we’re doing. Obviously that’s more of an action game, but they do share that aspect of procedural AI in a believable world.

Bottomley: I’ve been playing a bunch of Prey, though that doesn’t have any human AI. I don’t think there are very many direct parallels with other games, since AI is always going to be fairly unique to the game it’s in. I think immersive sims like Dishonored and Hitman are also an influence on us.

What’s really interesting to us is seeing this wave of companies creating first-person narrative games – Fullbright with Tacoma, for instance, and whatever Campo Santo is making next. All these studios created their earlier games under strict technical and financial limitations, and now that those games have been successful and made a bit of money, it’ll be interesting to see what the next wave of their games looks like and how the genre develops.