Introducing Autonomous Behaviour in Instructor-led Virtual Reality Training
XVR Simulation offers advanced incident command training support through immersive 3D virtual reality environments, 2D maps, and communication and media simulators, enabling emergency services, companies, and governments to train their personnel at the operational, tactical, and strategic command levels. These training tools are explicitly designed to support the flexibility and agility that instructors require: instructors remain in full control of the scenario, both while defining it and throughout every training session. As customers increasingly demand larger-scale training that includes the behaviour of large simulated crowds, instructors are becoming overburdened with orchestrating every scenario item in response to participants’ actions. XVR Simulation has therefore embarked on a quest to use Artificial Intelligence (AI) technologies to assign autonomous, adaptive behaviour to any item in any scenario in support of the instructors. The major challenge is to adopt AI technologies that (a) instructors can understand, (b) can handle instructor interventions during training, and (c) instructors can easily use when creating and running training scenarios.
After investigating possible solutions, XVR Simulation adopted the MASA DirectAI engine to address this challenge. Within the XVR architecture, the concept of a virtual ‘brain’ was introduced: a reasoning and decision-making process that, based on available sensor information, can autonomously decide which actuators to execute. To support understanding and interaction between instructors and brains, an appropriate level of abstraction for the brains’ sensors and actuators had to be defined, and these concepts had to fit the instructors’ mental model of a training scenario. Sensor information is therefore defined in terms of scenario concepts (e.g., the opposing party (red team), emergency services (blue team), safe zones, danger zones, victim treatment zones, et cetera). Actuators are defined as scriptable tasks that an instructor creates for a training scenario. The sensors, the actuators, and a strictly defined set of decision-making attributes of the brains are all freely configurable by the instructor, which fulfils the three criteria of the challenge.
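The brain abstraction described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the class and attribute names (`Brain`, `SensorReading`, `flee_distance`, the task names) are assumptions for the sake of the sketch and do not reflect the actual XVR or MASA DirectAI APIs.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    # A sensor reports a scenario concept the instructor already knows,
    # e.g. "danger_zone" or "blue_team", plus how far away it is.
    concept: str
    distance: float  # metres from the agent

@dataclass
class Brain:
    # A strictly defined, instructor-configurable decision attribute
    # (hypothetical): how close a danger zone may come before fleeing.
    flee_distance: float = 10.0
    # Actuators: instructor-created scriptable tasks, keyed by role.
    tasks: dict = field(default_factory=dict)

    def decide(self, readings: list) -> str:
        """Map sensor input to one actuator (a scripted task)."""
        for r in readings:
            if r.concept == "danger_zone" and r.distance < self.flee_distance:
                return self.tasks.get("flee", "idle")
        return self.tasks.get("default", "idle")

# Usage: an instructor configures the brain entirely in scenario terms.
brain = Brain(flee_distance=15.0,
              tasks={"flee": "run_to_safe_zone", "default": "wander"})
print(brain.decide([SensorReading("danger_zone", 8.0)]))  # run_to_safe_zone
print(brain.decide([SensorReading("blue_team", 8.0)]))    # wander
```

Because both the sensor vocabulary and the actuator tasks are expressed in the instructor's own scenario concepts, the resulting decisions stay inspectable and adjustable during a session.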
The first implementation of this solution shows that the approach adopted by XVR results in understandable autonomous crowd behaviour. The chosen solution relieves instructors of a large number of manual scenario interventions, while retaining the control they need at unforeseen moments during training.