Continuity

Maintaining View and Position Continuity

In addition to the H-Crane, "Munch's Oddysee" also featured taking over NPCs (through a possession orb that was also a player proxy) and remote controlling the Snoozer, a robot carrying a large gun. Whenever a game features remote control or possession of entities that have player-like mobility, the view (whether first or third person) has to shift. Interactive Surfaces not only remove the need for any camera cut to another location; the transition back and forth is now initiated by the player and under her control.

Figure 2.7. Munch's Snoozer Gun Platform

Imagine a DOOM3 Transparently Interactive Surface with a View representation that is connected to a sentry bot. The player, upon activating the GUI, acquires control of the sentry bot through the same controls used for her "body". The GUI shows a camera view identical to the player's view, from the POV of the sentry bot. The player can break the connection at any time, and while she might forget that her "body" is located in front of a GUI while exercising control of the sentry bot, there is no sense of discontinuity even on involuntary loss of control over the proxy.
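The control hand-off itself can be as simple as routing the existing user command to a different pawn while the "body" stays put in front of the surface. The following is a minimal C++ sketch under that assumption; Pawn, PlayerController and their methods are hypothetical names, not idTech4 code.

// One shared command structure drives either the avatar or the proxy.
struct UserCmd { float forward, right, yaw, pitch; bool attack; };

class Pawn {                                         // anything with player-like mobility
public:
    virtual ~Pawn() = default;
    virtual void ApplyInput(const UserCmd& cmd) = 0; // move, turn, fire
    virtual void RenderViewToGui() = 0;              // POV shown on the surface
};

class PlayerController {
public:
    explicit PlayerController(Pawn* bodyPawn) : body(bodyPawn) {}

    void Possess(Pawn* proxy) { remote = proxy; }    // GUI activated
    void BreakConnection()    { remote = nullptr; }  // player-initiated or forced

    void Tick(const UserCmd& cmd) {
        if (remote) {
            remote->ApplyInput(cmd);      // same controls now drive the sentry bot
            remote->RenderViewToGui();    // the surface shows the proxy's camera
        } else {
            body->ApplyInput(cmd);        // normal control of the avatar
        }
    }

private:
    Pawn* body;                 // stays put in front of the Interactive Surface
    Pawn* remote = nullptr;     // e.g. the sentry bot
};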

As a side note, while it is possible to lerp the player's view as of the moment of activating the GUI into an aligned fullscreen view of the GUI excluding the game world (e.g. for rendering optimization), doing so might cause a discontinuity. It is possible that "view bobbing" effects affecting the player view angles relative to the GUI and other reminders of "being there" are essential to maintaining continuity.
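One way to preserve those reminders, sketched below with hand-rolled vector and pose types (illustrative only, not DOOM3 code), is to apply the view-bob offset on top of whatever pose the lerp produces, so the alignment never fully detaches the player from her "body".

struct Vec3 { float x, y, z; };
struct Pose { Vec3 origin; float yaw, pitch; };

static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

static Pose LerpPose(const Pose& from, const Pose& to, float t) {
    Pose p;
    p.origin = { Lerp(from.origin.x, to.origin.x, t),
                 Lerp(from.origin.y, to.origin.y, t),
                 Lerp(from.origin.z, to.origin.z, t) };
    p.yaw   = Lerp(from.yaw,   to.yaw,   t);   // ignores angle wrap-around for brevity
    p.pitch = Lerp(from.pitch, to.pitch, t);
    return p;
}

// Per frame while the GUI is active: t ramps from 0 to 1, while bobOffset
// keeps the subtle local-body motion that reminds the player she is "there".
Pose ComposeGuiView(const Pose& atActivation, const Pose& guiAligned,
                    float t, const Vec3& bobOffset) {
    Pose p = LerpPose(atActivation, guiAligned, t);
    p.origin.x += bobOffset.x;
    p.origin.y += bobOffset.y;
    p.origin.z += bobOffset.z;
    return p;
}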

Figure 2.8. Munch's Snoozer Remote Control

Mixing Views For Continuity

Whenever a game cuts from one camera view to another, the player experiences a discontinuity and a (more or less brief) corresponding loss of control. Switching between fullscreen views amplifies this effect. Take the DOOM3 PDA as an example: the view transition drags the player out of the game, back to the menu or even the desktop. Worse, as the game is not paused, the need for in-game awareness is maintained while the fullscreen presentation excludes visual awareness. If the game's design had aliased the PDA mode with PAUSE, fullscreen presentation would have made sense. Instead, the DOOM3 design combines the worst of both worlds.

Interactive Surfaces permit the designer to allow the player to choose the effective POV (especially if frozen on activation of the IS). If the designer wants to establish a specific default or optimal view position, overloading e.g. DOOM3's "center view" command places the decision on whether or not to use the designer-chosen view in the hands of the player. While it is possible to force a given view of a specific Interactive Surface on the player, I believe it is counterproductive to do so, as our objective has to be to allow for player-controlled mixing of local and remote view. There are basically two mechanisms by which Interactive Surfaces permit mixing the local and remote view. One is translucency, the other is periphery. In addition, it is possible to mix views of multiple locations, or multiple views of the same location, for dramatic effect.
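The "center view" overload mentioned above might look like the following sketch; the class and member names are hypothetical stand-ins, not DOOM3 source.

struct ViewPose { float origin[3]; float yaw, pitch; };

struct InteractiveSurface {
    bool     hasPreferredView = false;   // optionally set by the designer
    ViewPose preferredView{};
};

class PlayerView {
public:
    void SetActiveSurface(const InteractiveSurface* surface) { activeSurface = surface; }

    // Bound to the existing "center view" command: the designer's vantage
    // point is available on request, never forced.
    void OnCenterView() {
        if (activeSurface && activeSurface->hasPreferredView) {
            pose = activeSurface->preferredView;   // snap (or lerp) to the designer's choice
        } else {
            pose.pitch = 0.0f;                     // stock behaviour: just level the view
        }
    }

private:
    const InteractiveSurface* activeSurface = nullptr;
    ViewPose pose{};
};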

Peripheral View

"Munch's Oddysee", a game I worked on for part of 2001, featured the H-Crane, a ceiling crane that the player could steer directly through use of the contoller, while locked to a third person camera placed by the designer. In DOOM3, the Hazardous Waste Disposal crane in the Alpha Labs is controlled using a GUI. Instead of being placed out of body, the player remains at the remote control and in control of the view. By allowing the player to choose her own view of the Interactive Surface, we also allow the player to determine what trade-off between remote and local view is desirable. DOOM3 implements this very well, as GUI activation does not impose any restriction at all on player movement, and only loose restrictions on player view control.

Figure 2.9. DOOM3 Alpha Labs: controlled within view

The importance of in-game peripheral vision should not be underestimated, and offers opportunities for dramatic delivery. Imagine light and shadows from behind the player cast on the frame of an in-game display screen, announcing the presence of an enemy. In "Munch's Oddysee", possession of an NPC had to be aborted whenever the player's "body" was attacked, and no audio cue or transition delay could remove the moment of disorientation on cutting from the remote fullscreen view to the local view. Mixing a peripheral local view with a focal remote view might help to avoid this discontinuity.

Translucency

Translucency (in the sense of "Halo" or "Minority Report" floating displays) permits keeping both local and remote focus at once. In this case, the player effectively sees "through" the Interactive Surface and looks straight at the proxy she controls - the IS becomes a classic HUD pegged to an in-game plane. The DOOM3 Alpha Labs crane is implemented relying on a large amount of peripheral vision, but could have been implemented using a floating display.

Figure 2.10. DOOM3 Translucent Display

Translucency makes sense for iconic widgets on the Interactive Surface, and could be combined with textual output. Translucency also allows us to blend a local camera view with a remote camera view - for example, the player could control a sentry bot placed on the other side of a door, while keeping an eye on that door through the floating display. However, as long as the player does not have intuitive control of the blend (to shift focus between the two views), this type of view mixing faces problems similar to those arising from depth of field cueing: once we put the designer in charge of steering the player's attention, we are as likely to work against the player's wishes as we are to work in accordance with them. To keep an eye on that door, peripheral mixing might well be preferable, no matter how appealing the idea of having an enemy break through the display to pull the player back into the local environment.
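A player-controlled blend of this kind could be as simple as the sketch below, assuming a render-to-texture capable engine; the two render hooks are stand-ins for whatever the engine actually provides.

#include <algorithm>

// Stand-in engine hooks: render a named camera into a texture, and draw that
// texture onto the floating surface with a given opacity.
static void RenderWorldFromCamera(const char* /*cameraName*/, unsigned /*targetTexture*/) {}
static void DrawSurfaceQuad(unsigned /*texture*/, float /*opacity*/) {}

struct FloatingDisplay {
    unsigned remoteTexture = 0;    // render target for the remote camera
    float    blend         = 0.5f; // 0 = local focus, 1 = remote focus

    // Bound to an intuitive input so the player, not the designer,
    // shifts focus between the two views.
    void AdjustBlend(float delta) { blend = std::clamp(blend + delta, 0.0f, 1.0f); }

    void Draw() {
        RenderWorldFromCamera("remote_proxy_cam", remoteTexture);
        // The local view is already on screen; the surface is drawn over it
        // with an opacity proportional to the desired remote focus.
        DrawSurfaceQuad(remoteTexture, blend);
    }
};

Binding AdjustBlend to an analog input (mouse wheel, trigger) is what keeps the focus shift in the player's hands rather than the designer's.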

Figure 2.11. Translucent UI in Halo: iconic representation

In-game Split Screen

Video playback through textures permits us to mix views from multiple locations for dramatic effect. Known as split-screen process, this dramatic technique has been used in many movies (see e.g. the 1977 Robert Aldrich movie "Twilight's Last Gleaming" for cinematic use of two- and four-panel split screen views).

Games have used remote cameras and video playback on in-game surfaces for some time, sometimes for ambience, sometimes for narrative delivery. Unlike a simple active surface, an Interactive Surface can be used in a way that forces the player to stay put (e.g. the player has to hold down a PLAY button). The advantage of split screens - namely, offering information from several remote locations - is retained. In order to mix camera views from multiple locations, the corresponding view surfaces have to be placed in proximity, or (similar to DOOM3's Alpha Labs security camera) there has to be an interface to cycle or switch between views quickly.
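A switching interface of that kind, in the spirit of the Alpha Labs security console, might amount to little more than the sketch below; all names are illustrative.

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class CameraConsole {
public:
    explicit CameraConsole(std::vector<std::string> cameras)
        : cameraNames(std::move(cameras)) {}

    // Bound to a NEXT button on the surface.
    void NextCamera() {
        if (!cameraNames.empty())
            current = (current + 1) % cameraNames.size();
    }

    // For a hold-to-view design, the surface only shows a feed while the
    // player keeps the PLAY button pressed.
    void SetPlayHeld(bool held) { playHeld = held; }

    const char* ActiveFeed() const {
        return (playHeld && !cameraNames.empty())
             ? cameraNames[current].c_str()
             : nullptr;                     // blank screen otherwise
    }

private:
    std::vector<std::string> cameraNames;
    std::size_t current  = 0;
    bool        playHeld = false;
};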

Figure 2.12. F.E.A.R.: not quite split-screen

Scaling Setback with Death By Proxy

The worst gameplay discontinuity we have to deal with is avatar death. Any game design that includes setback and resource management will at some point have to apply "capital punishment" for sustained failure. More often than not, this is a consequence of design choices rather than a design objective. Player avatar death is (wrongly) perceived as a necessary part of our game designs. Halo's use of resource regeneration illustrates that it is perfectly possible to implement "soft penalty" mechanics for failure that also facilitate player choice with respect to risk/result trade-offs. Ultimately, player avatar death is just one example of setback through loss of resources and/or play time.

However, Interactive Surfaces allow us to preserve the concept of avatar death while also preserving continuity of the gameplay experience. In taking control of a proxy through an IS, the player has acquired an important and (at designer discretion) rare resource: a powerful remote gun platform, a turret, a vital security camera. Knowing that this proxy is not the "real body", the player is now free to experiment with different strategies and tactics, and to take higher risks. Failure will still ultimately result in complete loss of all assets represented by the proxy, but this loss will not result in a setback in played time. Proxies enable the player to suffer an inconsequential death or pursue a suicide tactic.
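The resulting setback logic can be sketched as follows, reusing the body/proxy split from the earlier controller sketch; the names remain hypothetical.

struct Pawn { bool alive = true; };   // stand-in for the engine's game object

class ProxyDeathHandler {
public:
    explicit ProxyDeathHandler(Pawn* bodyPawn) : body(bodyPawn) {}

    void Possess(Pawn* proxy) { remote = proxy; }

    void OnPawnKilled(Pawn* victim) {
        if (victim == remote) {
            // The proxy and every asset it represents are gone for good, but
            // play continues from the body at the Interactive Surface:
            // no reload, no loss of played time.
            remote = nullptr;
        } else if (victim == body) {
            ReloadFromCheckpoint();       // only avatar death costs played time
        }
    }

private:
    void ReloadFromCheckpoint() { /* conventional setback path */ }

    Pawn* body;
    Pawn* remote = nullptr;
};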

Figure 2.13. Death By Proxy: Munch's Snoozer

A common problem in designing an interactive experience is that high-risk challenges will always end in initial failure for some if not all players. A typical "try and die" game experience breaks immersion in the worst possible way and forces players to adopt unrealistic play tactics in order to minimize play time investment and/or frustration. Death by proxy permits the designer to include high risk scenarios, just as it permits the player to pursue high risk tactics without paying the ultimate price.