Simulation from the neck up: A few early thoughts about Oculus Rift

Oculus Rift, a head-mounted virtual reality interface conceived for mass production and everyday use, is tentatively scheduled for wide release in the next year. But the hype is already here: more than 50,000 developer OR units have been released, and new interactive games and other media designed specifically for OR are emerging. Media culture high and low has also taken notice: OR won a best-in-show award at the 2014 Consumer Electronics Show, and it has already inspired (if that’s the right word) several NSFW parodies of the pornographic possibilities of VR (an idea depicted long ago in the under-appreciated 1983 film Brainstorm).

While early iterations of OR are capturing the popular imagination, how well does OR capture virtual reality? I’ve had a chance to play with the device in both videogame and art-installation settings, and below I describe a few early observations.

[Image: Oculus Rift “Crystal Cove” prototype]

OR persuasively addresses one of the nagging embodiment issues of navigation in simulated environments: the loss of the neck. Anyone who has played first-person videogames knows the powerful, isomorphic relation between the body as an apparatus of mobile perception and the body as a subjective camera position in screenic environments. But there are limits to this immersive quality; one generally can’t “look” in one direction while “moving” in another (the strafing mechanic perhaps being one exception). Given the loss of proprioception in non-motion-control videogames (all movement and spatial awareness is grafted to the visual plane and translated to the hand controller), it makes sense to lock head and body into the same vertical axis and “become the gun.”

With the latest version of OR, dubbed “Crystal Cove,” a separately mounted motion camera tracks torso position, which means that movements of the body that affect head position can also be tracked and translated to the device’s two tiny screens (one in front of each eye), rendered at an impressive 1080p. Given its high-resolution display and multiple technological strategies for tracking head movement, OR dramatically alters the act of looking, which is no longer a screen-based, planar experience given depth by linear perspective (the Z axis). Instead, real vision is matched by simulated, 3D vision, and one observes space and its objects much like their real counterparts: by moving the head for multiple angles of view, and by focusing to select an object out of the visual field for further inspection.
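
What this paragraph describes is, in effect, the core loop of any head-tracked display: sample the tracked head pose, convert it into a per-eye camera transform, and render a frame to each screen. The sketch below illustrates that conversion in Python; it is a generic illustration rather than Oculus’s actual SDK, and the quaternion convention, function names, and eye-offset parameter are my own assumptions.

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def view_matrix(head_quat, head_pos, eye_offset):
    """Build a 4x4 view matrix for one eye from a tracked head pose.

    head_quat:  tracked head orientation (from the IMU);
    head_pos:   tracked head/torso position (e.g. from Crystal Cove's
                external camera);
    eye_offset: the eye's offset from head center in the head's local
                frame, roughly +/- half the interpupillary distance on x.
    """
    R = quat_to_matrix(head_quat)
    eye_world = head_pos + R @ eye_offset   # eye position in world space
    V = np.eye(4)
    V[:3, :3] = R.T                         # inverse (world-to-eye) rotation
    V[:3, 3] = -R.T @ eye_world             # inverse translation
    return V

# Hypothetical usage: one view matrix per eye, rendered to each tiny screen.
left = view_matrix(np.array([1.0, 0, 0, 0]), np.zeros(3), np.array([-0.032, 0, 0]))
right = view_matrix(np.array([1.0, 0, 0, 0]), np.zeros(3), np.array([0.032, 0, 0]))
```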

How this illusion is achieved depends on both the science of motion-tracking and display technologies and the art of their application. I had a chance to hear a lecture at UCLA (2/4/14) from Steven LaValle, chief scientist for Oculus Rift. LaValle divided the discussion of OR’s development into three interrelated areas. The first two are sensor characterization (defining and programming the gyroscopes, accelerometers, and other tracking systems) and filtering (fusing or synthesizing the tracking inputs). But the biggest challenge, and the largest area of research, for OR development is the third: perceptual psychology and the science of human vision.
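
LaValle’s lecture stayed at the conceptual level, but the “filtering” step he named is often introduced through the complementary filter, which fuses fast-but-drifting gyroscope integration with noisy-but-drift-free accelerometer tilt. The toy sketch below shows the idea under those textbook assumptions; the two-axis simplification, the function name, and the alpha blend factor are mine, not Oculus’s.

```python
import numpy as np

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer samples into a (pitch, roll) estimate.

    gyro_rates: (N, 2) angular velocities for (pitch, roll), in rad/s
    accels:     (N, 3) accelerometer samples (ax, ay, az), in m/s^2
    dt:         sample period in seconds
    """
    angle = np.zeros(2)   # running (pitch, roll) estimate
    estimates = []
    for w, a in zip(gyro_rates, accels):
        # Integrating the gyro is smooth and responsive, but drifts over time.
        gyro_angle = angle + w * dt
        # The accelerometer reads gravity: a noisy but drift-free tilt reference.
        accel_angle = np.array([
            np.arctan2(-a[0], np.hypot(a[1], a[2])),  # pitch
            np.arctan2(a[1], a[2]),                   # roll
        ])
        # Blend: trust the gyro at high frequencies, the accelerometer at low.
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
        estimates.append(angle.copy())
    return np.array(estimates)
```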

Perhaps the most classic and illustrative problem of translating the gestalt of human vision into VR is what’s known as simulation “sickness.” The headaches and nausea induced by simulation often result from what is called an ocular-vestibular mismatch: a disconnect between the visual and spatial-alignment data provided by our body’s “sensors,” between what our eyes see and what our bodies feel by way of spatial orientation, courtesy of the ear’s vestibular nerve (which senses gravity and head movements). Many other problems also attend VR development, among them screen brightness, resolution, refresh rates (very important so close to the eyes), and blurring and warping, but in my experience, OR delivers a persuasive, headache-free VR experience for looking in a nearly spherical, 360-degree range of motion.

While the current state of motion-tracking and display technologies makes everyday VR a near “real” reality, it’s the art of VR interface design I find most intriguing. As LaValle explained, predictive tracking algorithms, for example, can now successfully correlate precise, minute movements of the head with real-time adjustments in the display with almost no perceptible lag. But as LaValle’s team discovered, the net result was, perversely, too much input. Every micro-movement of the head (movements we don’t notice in ordinary perception) resulted in noticeably twitchy corrections in the display field. LaValle’s team eventually decided to de-tune their predictive tracking model as an interpretive intervention, delivering a more “natural” replication of human vision.
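
LaValle didn’t share the implementation, but predictive tracking is commonly a short constant-velocity extrapolation of the head pose across the display’s latency, and “de-tuning” it can be pictured as scaling that prediction down and ignoring motion below a perceptual noise floor. A toy sketch under those assumptions follows; the gain and deadband parameters are illustrative inventions, not Oculus’s values.

```python
def predict_angle(angle, angular_velocity, latency_s, gain=0.6, deadband=0.002):
    """Extrapolate a head angle ahead of the display latency, 'de-tuned'.

    angle:            current estimated head angle, in radians
    angular_velocity: current gyro reading, in rad/s
    latency_s:        motion-to-photon latency being compensated, in seconds
    """
    # Deadband: ignore micro-movements below a perceptual noise floor so
    # imperceptible head tremor doesn't produce visible display corrections.
    if abs(angular_velocity) < deadband:
        angular_velocity = 0.0
    # Constant-velocity look-ahead, deliberately scaled down (gain < 1)
    # so the view doesn't overshoot when the head suddenly reverses.
    return angle + gain * angular_velocity * latency_s
```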

As for my own experiences with OR, I’m blown away by the interface as a means of replacing physical “vision” with a virtual analog. For example, I’ve had a chance to play a new game in development in the UCLA Game Lab designed specifically for use with OR. Called Classroom Aquatic, the game has already garnered some media coverage.

[Image: Classroom Aquatic screenshot]

Set in an underwater classroom of dolphins, the game asks the player to cheat off the exams of other “students” by turning, looking, and leaning without being caught by the teacher. I found myself turning completely around to cheat off the paper behind me, simply because the sensation of so much bodily movement through the torso and head in a virtual environment is so satisfying (getting caught by the teacher be damned).

As many designers have already discovered, OR can be combined with over-the-ear headphones to create a powerful audio-visual sensorium. A case in point is a public art installation by new media artist and UCLA graduate David Leonard: a haunting, multimedia memorial to River Phoenix’s death outside The Viper Room on the Sunset Strip 20 years ago. By adding a camera to OR’s head-mounted display, Leonard brings a real location back into the VR experience. To “see” the artwork, viewers don the camera and OR headset, and then look at a QR tag placed on the sidewalk outside the club. The encoded image displays in the device as a positionally responsive, site-specific image of Phoenix lying on the sidewalk. The accompanying headphones convey the commotion outside the club that night as Joaquin Phoenix places a 911 call for help (a video re-creation of the installation can be viewed here). Leonard juxtaposes the immediacy of that horrible moment with the hypermediacy of its aftermath: the onslaught of media coverage (magazine covers, TV news stories, pundit commentaries, film clips, etc.) that ensued. The overall effect is like a multimedia museum piece, made all the more difficult to interpret, and more impactful in emotional effect, by being in the actual location where Phoenix died.
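
Leonard hasn’t published his implementation, but marker-based pieces of this kind typically work by detecting the tag in the head-mounted camera’s feed and then using its corner points to anchor the overlay to the physical spot. A rough OpenCV sketch of that detection step follows; the function name and flow are my own assumptions about how such an installation might be wired.

```python
import cv2

detector = cv2.QRCodeDetector()

def locate_sidewalk_tag(frame):
    """Find the QR tag in one camera frame.

    Returns (payload, corners), where corners are the tag's four image-space
    points, or None if no tag is visible in the frame.
    """
    payload, corners, _ = detector.detectAndDecode(frame)
    if not payload or corners is None:
        return None
    return payload, corners.reshape(-1, 2)

# With the tag's physical size known, passing those corners to cv2.solvePnP
# would recover the camera's pose relative to the sidewalk, letting the
# overlay (the image of Phoenix) stay anchored to the spot as the viewer moves.
```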

These two examples illustrate, from the comedic to the chilling, the range OR possesses for “mobile” vision or, more precisely, for the screen as a simulated space that envelops the body. (See VR Cinema as a fascinating example of remediating cinema’s theater-going experience into an OR one.) The expressive possibilities, and the cultural and media consequences, for OR seem limitless. For example, thinking about OR in a cyberpunk vein, one can imagine the dark scenarios that await when OR and headphones are combined with psychoactive drugs. Whether employed for terrorist interrogations or recreational tripping, OR-based freak-outs seem just around the corner.

More prosaically, as beings who rotate, tilt, and pan our camera-like heads to better see the world, we require of our VR experiences an interface that replicates how vision and spatial orientation serve as the compass of the body. This is OR’s raison d’être. But what about the rest of the body, and the body parts that do things independently of vision, or in concert with it? Here is where OR creates, for me, an aquarium effect different from the one intended in Classroom Aquatic. Wearing the device, I feel immersed from the neck up (perhaps the torso, too, gets a little wet as a supporting mechanism for head positioning), but I’m only “submerged” in the virtual space as much as my head as a “moving camera” will allow. We can’t blame OR for this, per se: it’s trying to solve head-position problems, which are challenging enough.

But if the rest of the body falls outside the design parameters of OR, the device’s total capture of my vision also occludes the body in a different way. Hands, fingers, legs, feet… they all disappear from view. They become like phantom limbs, sensing and feeling but afforded no function within the space of OR. Looking into a virtual mirror in OR would certainly be an interesting, and potentially dysmorphic, experience. Of course, OR may be combined with gamepads and other input devices to restore some sense of direct manipulation within the world. But for me, mixing control-schema “metaphors” amplifies what I call the uncanny valley of interface: the more “real” one component of interaction becomes, the more degraded the other inputs feel by comparison. I’ve discussed this phenomenon in the blending of gestural/motion control with button-and-stick inputs in my research on the Nintendo Wii; I suggest that cognitive dissonance increases in the player as control schemas diverge into widely different modalities (e.g., the “natural” swinging of the arm vs. joystick movement of the remaining, avatarial body in some Wii games). If the agency of “looking” becomes more “naturalized” through OR, but the agency of “doing” remains denied or reduced, then OR may perversely deepen the “rift” already evident between screen-based, photoreal vision and the abstraction of, say, hitting an X button to “jump.”

While OR out of the box is at its best when both physical and virtual bodies are essentially confined to a swiveling chair, all sorts of third-party attachments and experimental enhancements promise to restore aspects of embodied agency (see OVR Vision, which combines camera-mediated “real” vision with VR vision through a 3D camera attachment to OR). But perhaps more importantly, OR may stimulate new genres of videogaming and interactive viewing that are less “tool”-based or motivated by picaresque narratives and their spatially inclined navigations. Perhaps visual interfaces alone will enable manipulation of virtual environments; game design in the future may give us more adventures in smaller spaces, exquisitely detailed and revealing of their secrets only through intense visual exploration. Visual and verbal communication may become core mechanics, perhaps shifting the very concept of videogame play toward modes that are more collaborative and conversational, rather than predicated on hand-eye coordination.

Regardless of the challenges in store for videogames and other interactive media, the implications of OR for VR will be exciting to follow in the years ahead. As Marshall McLuhan and Don Ihde (among others) have attested, our media tools, our interfaces, are not only extensions of ourselves but also alterations to the phenomena of reality as lived experience. Because vision is a privileged human sense, tools and interfaces for vision remain the principal target of VR development. But what kinds of worlds will these be? So full of sight and sound, but so desolate to the touch? The larger cultural question of what we want from VR (and its consequences for plain old R) remains a tantalizing mystery.
