Virtual Reality Lab College of Design

How it Works

All Virtual Reality (VR) environments require coordination between two systems: a tracking system and a graphics system. The tracking system records the real-time location of a VR user within a physical space. The graphics system uses the location data to create a view of a 3D model corresponding to the VR user's physical point of view. The quality of the VR experience is directly related to the speed and sophistication of the tracking and graphics systems. Currently, our state-of-the-art VR system uses the PhaseSpace Impulse tracking system in combination with powerful graphics computers and head-mounted displays.
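The coordination described above can be sketched as a simple per-frame loop. This is a minimal illustration with hypothetical function names, not the lab's actual software: the tracking system supplies a head pose, and the graphics system renders the model from that pose.

```python
# Minimal sketch of the per-frame VR loop: tracking feeds graphics.
# All names here are illustrative stand-ins, not a real API.

def read_head_pose():
    """Stand-in for the tracking system: returns (x, y, z) in meters."""
    return (0.0, 1.7, 0.0)  # a user standing at the origin, eyes ~1.7 m up

def render_scene_from(pose):
    """Stand-in for the graphics system: draws the 3D model from `pose`."""
    return f"frame rendered at {pose}"

def vr_frame():
    pose = read_head_pose()         # 1. tracking system locates the user
    return render_scene_from(pose)  # 2. graphics system draws their view

print(vr_frame())  # → frame rendered at (0.0, 1.7, 0.0)
```

In a real system this loop runs many times per second, which is why the speed of both systems matters so much.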


Tracking System

PhaseSpace Impulse Camera System

Mounted on the suspended truss are 36 PhaseSpace camera modules, positioned in groups of three at 12 nodes. The modules each contain a binocular pair of cameras powered by high-speed linear CCDs (charge-coupled devices) that capture images at 960 frames per second. A processor in each module analyzes the images to find LED markers in the tracked volume and determine their location in space. The center module in each node is aimed inward to cover the center of the courtyard. The modules on either side are aimed outward to cover the periphery. With this configuration, the VRDL has expanded its tracked area to a diameter of more than 50 feet (15 meters).

A triple camera node

LED Markers

The LED markers worn on the body or assigned to objects within the tracked volume emit a unique light signal visible to the PhaseSpace cameras. The cameras triangulate the position of LED markers and transmit this information to a central server that processes the data and calculates actual positions.
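Triangulation of a marker from two camera sightings can be illustrated with a small geometric sketch. This is an illustrative example only, not PhaseSpace's actual algorithm: each camera contributes a ray from its position toward the marker it sees, and the marker is estimated as the midpoint of the rays' closest approach.

```python
# Illustrative two-ray triangulation (not the actual PhaseSpace method).
# Each ray is p + t*d: a camera position p and a sighting direction d.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Closest-approach midpoint of rays p1 + t*d1 and p2 + s*d2."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * di for p, di in zip(p1, d1))  # closest point on ray 1
    q2 = tuple(p + s * di for p, di in zip(p2, d2))  # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras 2 m apart, both sighting a marker 2 m in front of their midline:
marker = triangulate((-1, 0, 0), (1, 0, 2), (1, 0, 0), (-1, 0, 2))
print(marker)  # → (0.0, 0.0, 2.0)
```

With many cameras, each pair that sees the same marker yields such an estimate, and the central server fuses them.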

Camera Server

The server aggregates the data from each camera module to find the precise location of each LED marker. It can resolve these locations to within one millimeter.
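The aggregation step can be sketched as follows. This is a hypothetical illustration with made-up data, not the actual server code: each camera pair yields its own estimate of a marker's position, and the server fuses the estimates by averaging, optionally checking that their spread falls within the millimeter-scale tolerance.

```python
# Hypothetical fusion of per-camera-pair position estimates (meters).
# The real server's algorithm is not described in the source.

def fuse(estimates, tolerance_m=0.001):
    """Average position estimates; report whether all agree to `tolerance_m`."""
    n = len(estimates)
    mean = tuple(sum(p[i] for p in estimates) / n for i in range(3))
    spread = max(
        max(abs(p[i] - mean[i]) for i in range(3)) for p in estimates
    )
    return mean, spread <= tolerance_m

# Three estimates of the same marker, all within a fraction of a millimeter:
estimates = [(1.0000, 2.0000, 0.5000),
             (1.0004, 1.9998, 0.5002),
             (0.9996, 2.0002, 0.4998)]
position, within_spec = fuse(estimates)
print(within_spec)  # → True
```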

Graphics System

Perception Head-Mounted Display

VRDL IEEE VR 2014 from Aaron Westre on Vimeo.

The Perception is a prototype head-mounted display that uses a tablet computer for both graphics processing and a high-resolution display. The tablet, an Apple iPad mini, has a high-performance graphics processor that can render complex 3D scenes in stereo at up to 60 frames per second. This speed is critical in virtual reality applications, since lower frame rates can cause disorientation and discomfort for many users. The high pixel density of the screen allows the tablet to sit closer to the lenses without sacrificing visual fidelity, which also widens the field of view, offering a more immersive virtual experience. Precise, flat-field magnifying lenses bring the image into focus with minimal distortion. A custom app retrieves tracking data from the PhaseSpace server via Wi-Fi and renders a stereo image from the user's point of view. In addition to speed and visual quality, the tablet has inertial sensors that help smooth out tracking data, a camera that allows users to see the real world, and Bluetooth connectivity for interface devices.
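One common way inertial sensors can smooth optical tracking data is a complementary filter, blending a fast but drift-prone inertial estimate with the slower, drift-free optical fix. This is a generic sketch of that technique, not the actual method used in the VRDL app:

```python
# A simple complementary filter for position smoothing (generic technique;
# the VRDL app's actual filtering method is not described in the source).

def complementary_filter(optical, inertial, alpha=0.9):
    """Blend an inertial position prediction with the latest optical fix.
    Higher alpha trusts the inertial estimate more between optical updates."""
    return tuple(alpha * i + (1 - alpha) * o
                 for i, o in zip(inertial, optical))

# The inertial sensors predict the head has moved slightly ahead of the
# last optical reading; the filter produces a smoothed in-between pose:
smoothed = complementary_filter(optical=(0.0, 1.7, 0.0),
                                inertial=(0.02, 1.7, 0.0))
```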

UMN School of Architecture
Rapson Hall
89 Church Street SE
Minneapolis, MN 55455