Chapter 3. Outlook

Table of Contents

Unlocking the Interactive Middle Ground
Reactive vs. Interactive Design

Most games, and most parts of most games, are "a world waiting to happen" [9]. The reason is simply that we lack the resources to accomplish what we are trying to do: creating a separate reality. Hence, we fake it. Every good magician knows that the magic results from controlling the stage. The root cause of deterministic scripting in games is not narrative ambition - that is often a rationale supplied after the fact - but the need to cover up shortcomings in the fabric of our work. There are exceptions. Movement is the most obvious one, and with movement comes collision - minimalist physics. Game physics represents the bottom-up approach to simulation. At the opposite end of the scale there is AI, a weird mixture of simulation and fakery that is inextricably linked to our concept of narrative. Both physics and AI rest on their own fallacies. AI attempts to fake the (practically) impossible: interaction at a semantic level. Physics, bound by what today's CPU and memory make feasible, mires player and designer alike in a swamp of micromanagement without adequate controls. It is the middle ground between low-level physics and the heights of NPC AI that has been largely ignored. Interactive Surfaces open a window into that interactive middle ground.

Figure 3.1. Half-Life 2: a world waiting to happen?

Unlocking the Interactive Middle Ground

Despite the significant improvements in realtime rigid body physics simulation demonstrated by recent games, the hard lessons of "Trespasser" still apply. Pushing and pulling crates is as much of a drag in HL2 as it was in HL1. Actions that are easy and intuitive in the real world are near-impossible feats in games. Gravity guns and levitators are indicators of a serious UI problem, not a solution. But even if we could solve these issues, is giving the player the ability to stack boxes the best way to expend development and CPU resources?

Figure 3.2. Trespasser: on the one hand

For years I believed physics would push games to the next level of interactivity. Superficially, I was right. Factually, I was wrong. Adding more and more detail to game physics while being stuck with controls that are inherently coarse, and thus increasingly inadequate, can only increase player alienation, i.e. reduce immersion in the worst possible place - the inner loop of game-player interaction, the second-to-second flow of action and reaction. The more objects our game physics can simulate, the more impotent the player will feel. Once we can handle 10K rigid objects in a scene, we will find the vast majority of them to be dust and debris. As long as players have to fight the interface just to stack one box on another, designers will create "box piles waiting to collapse" for them.

In addition, the more malleable the virtual world becomes, the more glaring the AI's inability to manipulate the world in meaningful ways will be. Not only will AI always be found lacking in its interaction with the player: the more dynamic the environment becomes (especially if we find ways to leverage these dynamics for increased player expressiveness), the more inadequate the AI's interaction with the environment will appear. It is certainly possible to make NPCs shoot explosive barrels on occasion, but does that mean we should strive to teach them - literally - how to lay bricks?

Figure 3.3. Half-Life 2 Physics: waiting to happen

Between the bottom-up simulation of physics and the top-down pretense of intelligent behavior lies a vast middle ground of interactivity that neither requires nor benefits from in-depth simulation. When we drive a car in the real world, our grasp of vehicle physics is neither elaborate nor explicit. Our car UI is extremely narrowband: a pedal or two, a steering wheel. We do not solve differential equations to decide how to steer the car. Power steering and ABS smooth out many of the system's responses to streamline the experience. For that reason, driving a car within a game (even a racing game) is much less of a challenge than stacking boxes, which is why we experience driving in games as captivating and immersive.

While playing a game, our physical and proprioceptive reality is always governed by an extremely narrow interface. The mouse is a point-and-click device. Inevitably, First Person Games became point-and-click shooters. Immersion takes place within the proprioceptive and sensorimotor constraints of the basic mechanics of the interface, not outside of them. Moving your arm and your finger maps, on a very low level, to aiming a gun and pulling a trigger. Pushing your toes down into the ground is subtly compatible with the idea of increasing speed. If we want intuitive control and, ultimately, immersion, we have to start with the basic mechanics of our interface and the invariants of human perception and motor control, because the game experience is anchored there. The earliest FPS games had USE buttons as soon as they had trigger buttons. Doors and elevators made the spaces of Doom interactive, and all it took was pointing to a button and pressing it - virtually and literally. Immersion is made from this type of cognitive "resonance".

Figure 3.4. Half-Life 2: look, no hands!

However, we already have several workable UI metaphors for games, ranging from point-and-click and drag-and-drop to mouselook and Elite-style 6 DoF controls. Interactive Surfaces and iconic representation allow us to apply our knowledge of HCI to let the player interact with objects that are more complex than a crate but much less challenging or versatile than sentient beings - namely machines (and, at a more autonomous level, animals). In the real world, we are surrounded by machines, some of them (say, VCRs) of hideous complexity, yet we usually do not manipulate them the way we manipulate bricks. It is the paradox of interactivity that actions that are easy and automatic for our hands in the real world map to the most difficult and self-conscious chores in the virtual world, while tasks we would not reasonably attempt in the real world can be made simple and accessible in the virtual world.
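Mechanically, an Interactive Surface only has to translate the player's familiar aim-and-use gesture into ordinary 2D GUI coordinates. The following is a minimal sketch of that translation, assuming a flat rectangular surface with perpendicular edge vectors; all names and constants are invented for illustration and not taken from any particular engine.

    // Sketch: map the player's aim ray onto a 2D widget surface.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // An Interactive Surface: a quad spanned by two edge vectors from one corner.
    struct Surface { Vec3 origin, edgeU, edgeV; };

    // Intersect the aim ray with the surface plane and project the hit point
    // onto the edge vectors. On success, (u, v) in [0,1] are plain 2D GUI
    // coordinates that any conventional widget code can consume.
    bool RayToSurfaceUV(const Surface& s, Vec3 rayOrigin, Vec3 rayDir,
                        float* u, float* v) {
        Vec3  n     = cross(s.edgeU, s.edgeV);        // surface normal
        float denom = dot(n, rayDir);
        if (std::fabs(denom) < 1e-6f) return false;   // ray parallel to surface
        float t = dot(n, sub(s.origin, rayOrigin)) / denom;
        if (t < 0.0f) return false;                   // surface behind the viewer
        Vec3 hit   = {rayOrigin.x + rayDir.x * t,
                      rayOrigin.y + rayDir.y * t,
                      rayOrigin.z + rayDir.z * t};
        Vec3 local = sub(hit, s.origin);
        *u = dot(local, s.edgeU) / dot(s.edgeU, s.edgeU);  // assumes edges perpendicular
        *v = dot(local, s.edgeV) / dot(s.edgeV, s.edgeV);
        return *u >= 0.0f && *u <= 1.0f && *v >= 0.0f && *v <= 1.0f;
    }

    int main() {
        // A 1 m x 1 m panel two meters in front of a viewer at the origin.
        Surface panel = {{-0.5f, -0.5f, 2.0f}, {1, 0, 0}, {0, 1, 0}};
        float u, v;
        if (RayToSurfaceUV(panel, {0, 0, 0}, {0.1f, 0.2f, 1.0f}, &u, &v))
            std::printf("hit widget space at u=%.2f v=%.2f\n", u, v);
        return 0;
    }

Everything downstream of that (u, v) pair is conventional HCI territory: buttons, sliders, drag-and-drop, exactly the metaphors we already know how to build.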

By way of example, consider a runaway nuclear reactor. Games are just as likely to require the player to perform the shutdown of a nuclear power plant as they are to require her to drive an ATV. In fact, that is the whole point of immersive games: to let us attempt and experience what, for most of us, is inaccessible in real life. In "Half-Life 1"'s Lambda Core, the player fixes and restarts a nuclear power plant - i.e. she solves a multistage puzzle. Using Interactive Surfaces, it is possible to add interactive mini-games to this set-piece. Imagine the player controlling control rods through sliders on a GUI screen, continuously having to re-balance a nonlinear dynamic system represented by just a few lines of code (sketched below), while Geiger counters and klaxons warn about the early stages of a runaway chain reaction. Boiling coolant shaking the floors and rupturing steam pipes adds emphasis to the message. A computer voice threatens to perform an automated emergency shutdown, which would force the player to regroup and restart. If your budget can afford the assets, you can even add a camera view of the fuel elements, showing the increasing glow of Cerenkov radiation and the control rods lowering themselves unsteadily into the core.

Depth and fidelity of simulation are needed to add depth and fidelity to the player experience. We cannot simulate this depth the way we simulate rigid body physics - we will have to invent our own versions of teichoskopia [10]. Interactive Surfaces provide us with the abstraction and indirection needed to handle in-game mechanisms more interesting than crates, and more feasible than wind-up people. Simulating machines - simulating black boxes, whether man-made reality or made-up magic - can be done in sufficient depth at much less expense than detailed physics or halfway acceptable AI, and with potentially much larger impact on the player.
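To make the "few lines of code" claim concrete, here is a hedged sketch of what the core of such a mini-game might look like: a toy two-variable model whose constants, names, and feedback terms are invented for this illustration and not derived from any real reactor. Power responds exponentially to net reactivity, and a slow-acting "poison" variable keeps shifting the equilibrium, so the player must continuously re-balance with a single rod slider.

    // Toy reactor core for a control-room mini-game. Not a real reactor model.
    #include <cstdio>

    struct Reactor {
        float power;   // normalized thermal output, 1.0 = nominal
        float poison;  // slow neutron absorber, the source of the drift
    };

    // Advance the model by dt seconds given the player's rod slider
    // (0 = rods fully withdrawn, 1 = rods fully inserted).
    void StepReactor(Reactor* r, float rodSlider, float dt) {
        // Net reactivity: withdrawn rods add, inserted rods and poison subtract.
        float reactivity = 0.6f - rodSlider - 0.2f * r->poison;
        r->power += r->power * reactivity * dt;    // exponential response
        if (r->power < 0.01f) r->power = 0.01f;    // keep a pilot flame alive
        // Poison builds up with power and decays slowly, so the "right" rod
        // position a minute ago is the wrong one now.
        r->poison += (0.3f * r->power - 0.1f * r->poison) * dt;
    }

    int main() {
        Reactor core = {1.0f, 1.0f};
        for (int step = 0; step < 20; ++step) {
            StepReactor(&core, 0.35f, 0.5f);       // fixed slider for the demo
            std::printf("power %5.2f  poison %5.2f%s\n", core.power, core.poison,
                        core.power > 2.0f ? "  <-- klaxon: runaway!" : "");
        }
        return 0;
    }

Everything else - the Geiger counters, klaxons, shaking floors, and camera feeds - is presentation layered on top of these two state variables, which is precisely the teichoskopia at work.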



[9] Attributed to Ed Fries.

[10] Literally: "View from the Walls". Here interpreted as a device to add scale by relying on indirect observation, or simply to fake what is not feasible, e.g. the sight of an army from the stage of a theatre.