Fall/Winter 2003

New Model Army: An Interview with Michael van Lent

Computer gaming and the future of military training

Jay Worthington and Michael van Lent

Though games based on war have a centuries-long history,[1] their modern use as a training tool dates to nineteenth-century Prussia. With the publication in 1824 of von Reisswitz’s Instructions for the Representation of Tactical Maneuvers Under the Guise of a Wargame, the wargame first appeared in its modern form as a tool of military analysis and training. Modern computing has, of course, produced a revolution in the field. In 1958, the Navy commissioned the US military’s first computer wargame—called the Navy Electronic Wargame System—and in the years since, wargames produced for the military have developed alongside their mass-market counterparts. In 1999, the US Army moved to bridge the gap between popular and military wargame production by entering into a partnership with the University of Southern California. This collaboration resulted in the founding of the Institute for Creative Technologies (ICT), which aims to bring together officers, academic computer scientists, and commercial game designers and developers. This past spring, ICT’s Full Spectrum Command, a company-level simulator, was released for use by the US Army, and its Full Spectrum Warrior, a squad-level game, will enter use by the Army in late summer. Full Spectrum Warrior was also recently given a public demonstration at the Electronic Entertainment Expo, in anticipation of its commercial Xbox release in early 2004. Cabinet editor Jay Worthington spoke by phone with ICT Research Scientist Dr. Michael van Lent.


Cabinet: Is friendly Artificial Intelligence (AI) in your games designed differently than enemy AI?

Michael van Lent: Generally, the same solutions are applied for both. It’s interesting that often the friendly characters have to be held to a higher standard, because you’re able to observe their behavior much more closely, and you spend a lot more time looking at them. An enemy character might be on the screen for only a second and a half before you shoot it, while a friendly character who’s always beside you is going to have to look a lot better.

Where are you getting the rules that control the behavior of friendly and of enemy combatants?

We have two primary sources. The first is Army field manuals, which the military has made available on the web. We also work very closely with the instructors at Fort Benning, who teach in the infantry school there. These are the guys who are teaching the tasks—company command and squad leader command—that we’re teaching in the game.

And enemy force doctrines?

We’re modeling asymmetric opponents, so it’s not like we’re modeling an organized enemy force. We’re modeling terrorists, guerrilla forces, and individuals running around with guns who don’t have uniforms on, and there really isn’t any doctrine for how those kinds of forces act.

So where are you getting the rules for their behavior?

We’re talking to the Opposing Forces guys in the military, who spend a lot of time thinking about how enemies are going to act. We’ve also talked with the people who lead the opposition forces at the National Training Center at Fort Irwin. Some of the enemies’ behavior is the product of the game developers thinking up smart things for them to do, which is just the same as the smart opponents in Iraq who are now thinking up new ways to address US tactics.

Have you come up with any opposition strategies that are surprising to the Fort Benning and Fort Irwin people?

In a few cases, yes. Mostly it hasn’t been new, exciting tactics, but cases where a mission plays out in a certain way, and then we’re able to go in and change that mission a little bit so that what the opponent does is different.

Is there some randomness built in?

There’s a little bit of randomness, but most of our AI behavior is currently scripted by the mission designers. What I call adaptive opponent AI, which gives these opponents some ability to think and plan on their own and come up with new and novel strategies, is an active research area here. That’s one of the holy grails of the gaming industry—an opponent who figures out new things to do, based on what it has seen you do in the past.
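[Van Lent doesn’t spell out how such an adaptive opponent would work. A toy version of the idea, sketched below in Python, simply tallies the tactics it has seen the player use and then favors a counter to the most frequent one; the tactic names, the counter table, and the random choice among counters are invented for illustration and are not drawn from ICT’s games.]

    # Illustrative sketch only: a toy "adaptive opponent" that counts the
    # tactics it has observed the player use and favors counters to the most
    # frequent one. Tactic names and the counter table are invented examples.
    import random
    from collections import Counter

    COUNTERS = {
        "frontal_assault": ["ambush", "fall_back"],
        "flanking_move": ["refuse_flank", "ambush"],
        "smoke_and_rush": ["fall_back", "spread_out"],
    }

    class AdaptiveOpponent:
        def __init__(self):
            self.observed = Counter()          # tally of player tactics seen so far

        def observe(self, player_tactic):
            self.observed[player_tactic] += 1  # remember what the player tends to do

        def choose_action(self):
            if not self.observed:
                return random.choice(["patrol", "hold_position"])
            # Counter the tactic the player has used most often so far.
            likely = self.observed.most_common(1)[0][0]
            return random.choice(COUNTERS.get(likely, ["hold_position"]))

    opponent = AdaptiveOpponent()
    for tactic in ["frontal_assault", "frontal_assault", "flanking_move"]:
        opponent.observe(tactic)
    print(opponent.choose_action())            # e.g. "ambush" or "fall_back"

[A real system would plan over mission state rather than a lookup table, but the loop of observing, tallying, and countering is the core of the idea.]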

How culturally specific are the opposition AI designs?

They’re not culturally specific. Our BlueForce—which is the friendly forces—is culturally specific. We currently have US forces, and we’re working with the Singaporean armed forces, who are helping to fund the next round of development, so we’re now developing a set of Singaporean friendly force tactics. Our opponent forces aren’t specific to any particular culture, though.

But wouldn’t the opponents that the Singaporean Army anticipates engaging in the foreseeable future be rather different than the ones the US Army foresees fighting?

I don’t know. We’re close allies with Singapore, and we’re both obviously fighting Al Qaeda—they’ve had some recent experience in that area—so I think their concerns are pretty much in line with our concerns.

Are you having to develop different enemy designs for the commercial release of the game?

The commercial version will differ from the military version. The physics and the explosions will probably have to get amped up for the commercial version. They’re also probably going to turn down the effectiveness of the AI’s weapons in firefights, so the firefights will last longer and be more entertaining. In real life—especially in urban combat, where the ranges are pretty close—a firefight is a very deadly and very short affair. In the game, you typically want them to last a little longer. I should point out that the player isn’t shooting in either of our games. Our games are about thinking, making decisions, and carrying out those decisions through orders to the soldiers, so neither of our games has a shoot key.

How important is the enemy AI model to the goals of your project?

You need a realistic opponent to ensure that the decisions and the adaptations you’re having to make are in response to the kinds of situations that you could possibly see in the real world. It’s also important that the player’s own soldiers do realistic things. Often you’ll issue an order to the soldiers, and it’s not always clear how your orders led to the resulting actions. This can lead to a gap in the soldier’s understanding of what happened, where the soldier can try to put some of the blame for a negative outcome on the artificial intelligence.

I see that you’ve put in an after-action debriefing routine to try to bridge that gap.

It’s called explainable AI—we’ve given the computer-controlled soldiers the ability to explain why they were doing what they were doing and how their behavior was based on your orders.
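[The interview doesn’t describe the machinery behind these explanations. One simple way to picture explainable AI is a decision log: each computer-controlled soldier records every action together with the order and the perceived situation that produced it, and the after-action review replays that record in plain language. The class names, fields, and messages in the Python sketch below are invented for illustration, not taken from Full Spectrum Command.]

    # Hypothetical sketch of an "explainable AI" decision log: each action an
    # AI-controlled soldier takes is recorded with the player order and the
    # observed situation that produced it, so an after-action review can
    # replay the chain of reasoning. Names and fields are invented examples.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Decision:
        action: str      # what the soldier did
        order: str       # the player order being carried out
        situation: str   # what the soldier perceived at the time

    @dataclass
    class Soldier:
        name: str
        log: List[Decision] = field(default_factory=list)

        def act(self, order, situation, action):
            # Record the action together with the order and perception behind it.
            self.log.append(Decision(action, order, situation))

        def explain(self):
            # Turn the log into plain-language lines for the after-action review.
            return [
                f"{self.name}: I {d.action} because you ordered '{d.order}' "
                f"and I saw {d.situation}."
                for d in self.log
            ]

    alpha = Soldier("Alpha team leader")
    alpha.act("clear the courtyard", "muzzle flashes from a second-floor window",
              "took cover and returned suppressive fire")
    for line in alpha.explain():
        print(line)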

Will the enemy AI also explain how it acted in response to what it perceived as the situation at any given moment?

We currently don’t have the explainable AI turned on for the opponent, partly because the opponent is a little more difficult to explain: you can’t rely on the student having a big knowledge base of doctrine through which to understand the AI’s decision-making. In the after-action review, we do uncover what the opponent’s plan was for the mission, so you can say, “I had my view of what I thought the enemy was doing, but here’s the truth of what the enemy actually planned to do, and let’s see how my situational awareness matches what actually happened.”

Where are the games set?

We’ve identified the Black Sea region as where this all takes place. We haven’t identified specific countries, though. It’s a purely fictional setting at this point. The locale is just reflected in the architecture of the urban environments. For Full Spectrum Command, the military PC game, though, we recreated the McKenna MOUT site, which is the urban combat training site the Army has built at Fort Benning. The idea was that it would be most useful if the soldiers trained in the game on the same site where they trained in real life.

Is that site itself modeled on anyplace real?

I don’t believe so. It’s a very American style of architecture. I think they just designed it to give the maximum range of experiences you could put someone through—a one-story building with a pitched roof, a one-story building with a flat roof, two-story buildings, three-story buildings, big warehouses, buildings with lots of different rooms and labyrinths, settings like that.

And yet the architecture and the sound in the Full Spectrum Warrior trailer have a very Middle Eastern feel.

Well, the product’s only been in development for about 18 to 20 months. Recent events probably did push it in that direction somewhat, just to give it resonance with what’s going on in real life.

In the games’ “2020 mode,” where you project the Army’s capabilities into the future, what sort of extrapolations of future enemy capabilities are you making?

“2020 mode” is really there to look at what kind of equipment the soldiers, typically BlueForce soldiers, might have in 2020. So we’re modeling things like forward-looking infrared sensors, heads-up displays, dynamic GPS targeting, man-portable drones, etc. As for the enemy’s future capabilities, the Army always says that whatever can be bought off the shelf will be, and enemies will be using those tools in unexpected ways. So I would be loath to try to define what the opponents are going to be doing in 20 years and to start training soldiers for that. It might not be accurate, and it certainly won’t be complete, and so we haven’t tried.

Has there been any discussion of modeling regular enemy forces in your games in the future, or is it all asymmetric?

It’s all asymmetric. The Army’s view is that the biggest challenge to soldiers today is urban operations and asymmetric opponents. The Army has a pretty good sense of how to train for traditional force-on-force engagements. Really, the problem is how to train for the kinds of situations being encountered today in Iraq, and so that’s where there’s the most bang for the buck in these products. I also believe that that’s going to be an increasing, if not the primary, challenge to the Army in the future.

  1. Chaturanga, the Indian precursor to the modern game of chess, dates back to at least the seventh century CE, and Latrunculi, a Roman game with origins dating back to approximately 100 BCE, also had military inspiration in its design.

Dr. Michael van Lent is a research scientist at the Institute for Creative Technologies, a partnership of the US Army and the University of Southern California. He specializes in agent architectures and artificial intelligence design for computer gaming.

Jay Worthington is an associate at Paul Hastings and (until moments ago) an editor at Cabinet. He was also a co-founder of Clubbed Thumb in New York City.
