Slip into Sitterson Hall on a cloudy morning and observe how the cool gray light from the windows bathes the floor and the rich burgundy cushions on seats in the lobby. One long bench curves the length of the room, paralleling the information desk, unoccupied at present. It’s eerily quiet; there’s no one around. You move smoothly along the marble floor as if your feet are not touching the ground. As though you’re flying.

Now take off the headgear and come back to earth.

Rejoin the mutter and hum of a second-floor computer lab in the real Sitterson Hall, where research teams are working on three-dimensional, interactive computer images: virtual reality. The 32-year-old UNC-CH Department of Computer Science, one of the first in the nation, is a leader in the development of virtual environments.

In the lab, research assistants in shapeless sweatshirts hunch quietly at their terminals. Some wear headphones. A couple of them shoot the breeze, whispering. Scientific breakthroughs happen here, but as far as the students are concerned, it’s no big deal. In fact, the students grind their teeth a little at the term “virtual reality.” To them, the phrase seems too grandiose. They spend their days drawing out or typing in the thousands of statements used to program the machines. They make hundreds of minute adjustments to calibrate their cameras and light-emitting diodes so that a user’s perceptions will coordinate with his or her movements. Not exactly Star Wars.

“Virtual world,” or, better, “virtual environment,” is a more palatable term. The latter counters the popular assumption that this business is based on video games, and it conveys what users see when they don the eleven-pound head-mounted display, which amounts to four cameras piled on the user’s head. The two screens inside, one for each eye, are a visual gateway. They allow users to explore the surface of an atom, feeling a simulation of the force that binds molecules together.
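The depth illusion comes from rendering the scene twice, from two vantage points a few centimeters apart, one image per screen. A simplified Python sketch of that idea, with invented names and a typical eye-separation value, not the lab’s software:

```python
# Minimal stereo-camera sketch: each eye gets its own viewpoint,
# offset half the interpupillary distance along the head's right vector.
import numpy as np

IPD = 0.064  # interpupillary distance in meters; a typical adult value

def eye_positions(head_pos, right_vec):
    """Return left- and right-eye camera positions for a head pose."""
    offset = (IPD / 2.0) * right_vec
    return head_pos - offset, head_pos + offset

head = np.array([0.0, 1.7, 0.0])    # head 1.7 m above the floor
right = np.array([1.0, 0.0, 0.0])   # facing down the -z axis
left_eye, right_eye = eye_positions(head, right)
# Render the scene once from each position; the two slightly different
# images, one per screen, are what the brain fuses into depth.
```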

If the virtual environment is also projected onto a screen, you can observe what the user is seeing as he or she stands, oddly vulnerable, alone in the middle of the room. He turns cautiously and takes a step or two. He holds a remote-control-type device, pointing and clicking into midair. At nothing.

On the screen, you can see that the pointer is represented by a crudely shaped hand, floating unconnected just ahead of the user’s line of vision. Using the virtual hand to open virtual drawers is tricky. You can’t feel the objects you touch with the virtual hand; the process is more like using a computer mouse in three-dimensional space. But after you get accustomed to the equipment and an odd feeling of displacement, you slide into smooth movement and find yourself somewhere else.
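Behind that midair click there is usually a ray test: shoot a line out from the pointer and see what it hits first, the three-dimensional analog of a mouse hit-test. A hypothetical sketch with an invented scene of spheres:

```python
# Ray-sphere picking: return the nearest object the pointer ray hits.
import numpy as np

def pick(ray_origin, ray_dir, spheres):
    """Return the index of the nearest sphere hit by the ray, or None."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    best, best_t = None, np.inf
    for i, (center, radius) in enumerate(spheres):
        oc = ray_origin - center
        b = np.dot(oc, ray_dir)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c            # quadratic discriminant
        if disc < 0:
            continue                # the ray misses this sphere
        t = -b - np.sqrt(disc)      # distance to the nearest intersection
        if 0 < t < best_t:
            best, best_t = i, t
    return best

# Pointer at the user's hand, aimed straight ahead at a virtual drawer knob:
scene = [(np.array([0.0, 0.0, -5.0]), 1.0),
         (np.array([3.0, 0.0, -8.0]), 1.0)]
print(pick(np.zeros(3), np.array([0.0, 0.0, -1.0]), scene))  # -> 0
```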

Immersion. In this virtual environment you are surrounded by the cool depths of a digital otherworld, where objects are weightless and humans are disconnected from a sense of touch or smell. Sometimes there is sound in a virtual world, but often there is only the distraction of outside noise, which fades away as you glide deeper into the landscape. It’s hard to believe that this world is created by mathematical equations.

The philosophy that has permeated these labs is what Fred Brooks, professor and award-winning scientist, calls the “driving-problem approach.” It entails a software engineer’s working with a collaborator in another field to solve a specific problem: getting down to what one student calls “the dirty details of computer programming.” This kind of task-specific tool research, Brooks says, “keeps you honest. When you’re the judge of your own work, it’s hard to be dispassionate.” Being judged by someone who actually has the need you’re working to fill, he says, is the best way to keep your work streamlined and give the user exactly what he or she needs. Ph.D. student Stefan Gottschalk says this approach is a way to work on real problems rather than practice on theoretical ones. “It’s very important to be able to train with the abstractions,” Gottschalk says, “but at the same time you want one eye on the real world, knowing what real-world considerations need to be made.”

“Computers are really an applied technology,” graduate student Mark Livingston says. He works in his sock feet, shoes kicked far back under the desk. “They’re most interesting and most useful when you do something else with them.” He’s preparing for an upcoming site visit by a team of doctors who will assess the department’s ultrasound project.

Livingston works on Professor Henry Fuchs’ project to enhance ultrasound technology with computer-augmented vision, a process much like virtual-environment technology except that the display lets the viewer see the physical world and three-dimensional computer images at the same time. Grad student Lars Bishop, a clean-cut, rapid-fire talker, says it may be the most versatile of virtual-image techniques.

The process doesn’t replace the sight of the real world with one made of computer graphics. Instead, Bishop says, computer scientists think of it as enhancing vision, taking the view that’s already there and adding computer graphics to it to expand the viewer’s perception.
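One way to picture the enhancement is as per-pixel compositing: each frame of the real view is blended with a rendered overlay wherever the overlay is opaque. A minimal sketch with invented arrays, not the project’s actual pipeline:

```python
# Alpha-blend a rendered overlay onto a live camera frame.
import numpy as np

def augment(camera_frame, overlay_rgb, overlay_alpha):
    """Composite computer graphics onto a camera image, per pixel."""
    a = overlay_alpha[..., None]                   # opacity in [0, 1]
    return (1.0 - a) * camera_frame + a * overlay_rgb

frame = np.random.rand(480, 640, 3)                # stand-in for video
overlay = np.zeros((480, 640, 3))
overlay[200:280, 280:360] = [0.0, 1.0, 0.0]        # a green graphic patch
alpha = np.zeros((480, 640))
alpha[200:280, 280:360] = 0.6                      # 60% opaque there
augmented = augment(frame, overlay, alpha)         # graphics float in the view
```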

Taking ultrasound slices of an image and digitally pasting them together in a three-dimensional form gives the doctor and patient a clearer view of, say, a tumor, cyst, or fetus.
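In simplified form, the “pasting” drops each two-dimensional slice into a three-dimensional grid of voxels at the depth where it was captured. Real systems track the probe’s full position and angle; the sketch below assumes parallel, evenly spaced slices, which is enough to convey the idea:

```python
# Stack 2-D ultrasound slices into a 3-D voxel volume (parallel-slice
# simplification; a real system would use the tracked probe pose).
import numpy as np

def build_volume(slices, depths, shape=(64, 128, 128)):
    """slices: 2-D arrays; depths: capture depth of each slice in [0, 1)."""
    volume = np.zeros(shape, dtype=np.float32)
    for img, d in zip(slices, depths):
        z = int(d * shape[0])      # map the capture depth to a voxel layer
        volume[z] = img            # paste the slice into that layer
    return volume

# Forty parallel scans, evenly spaced through the tissue:
scans = [np.random.rand(128, 128).astype(np.float32) for _ in range(40)]
vol = build_volume(scans, depths=[i / 40 for i in range(40)])
print(vol.shape)  # (64, 128, 128): a block a renderer can reslice anywhere
```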

Research assistants practice the technique on a beat-up-looking mannequin torso laid out on a table in the center of one of the two lab areas. The figure has a commercially purchased dummy breast on which Livingston and others practice inserting a needle, using the computer images, calculated and charted digitally, to aim for a target somewhere within the breast.

“Optimally,” says Livingston, “it should be as easy and natural a procedure as looking into a microscope.”

The type of naturalness he describes is exactly what Fred Brooks was aiming for when he laid down his goals for the department. Brooks wanted to concentrate on computer systems, the interaction between hardware and software, and also on a natural-language interface: communicating with the computer in non-mathematical language, as is now common.

But as far as technology has come in bridging the gap between machine and human being, there are still some things computer technology can’t do. Virtual sex, for instance, has received a lot of press coverage; but associate professor Gary Bishop, a big Southerner with a booming voice, has four words on that topic: “It ain’t gonna happen,” he says with an emphatic nod. Some things are just too difficult.

In fact, the biggest hurdle facing virtual technology today is, surprisingly, visual: the speed at which the image changes. There are two ways to render images on a computer, says Professor Steve Weiss, department chair. One produces photograph-like, intricate images that take time to create; the other produces cruder images that can change rapidly enough to keep up with the movements of a viewer’s eye and refresh quickly enough to look like motion. Although the breakthrough Pixel Planes graphics computers developed by scientists at UNC-CH can run graphics faster than other graphics computers can, they cannot yet generate the most intricate images fast enough to make them move the way they should in virtual reality. This message hasn’t gotten to the public yet, Weiss says, so users often expect more from their virtual experience than engineers are ready to give them.
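The arithmetic behind that hurdle is unforgiving: at a frame rate fast enough to look like motion, the renderer has only a sliver of a second per image, which caps how much detail each frame can carry. A back-of-the-envelope sketch in Python, using an illustrative throughput figure rather than an actual Pixel Planes specification:

```python
# Rough frame-budget arithmetic; the throughput number is invented
# for illustration, not a measured Pixel Planes figure.
FRAME_RATE = 30               # frames per second needed for smooth motion
POLYS_PER_SECOND = 2_000_000  # assumed polygon throughput of the renderer

budget_ms = 1000.0 / FRAME_RATE
polys_per_frame = POLYS_PER_SECOND // FRAME_RATE
print(f"{budget_ms:.1f} ms per frame, about {polys_per_frame:,} polygons each")
# -> 33.3 ms per frame, about 66,666 polygons each: a fraction of what
#    an offline, photograph-like rendering spends on a single image.
```

But virtual technology has been evolving for longer than one might think.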

“It’s been a long time coming,” Gottschalk says. Some say that virtual reality has been in the making since the dark caves of ancient times, when people first started telling each other stories. The storyteller created a virtual environment for his or her listeners. Later, print, radio, television, and movies drew their audiences into a world apart from the ordinary. The development of computer-generated virtual environments was just the next, earth-shaking step.

Marissa Melton was formerly a staff writer for Endeavors.