Virtual Reality… Still A Waste Of Time?

According to novelist and multimedia developer Douglas Adams, there is no such thing as virtual reality. Or, to put it another way, everything we perceive is virtual reality.

The world is composed of an infinite number of spectra, frequencies and textures. The human body perceives some of these and simply ignores the others.

Computer-generated sensation is no different from sensations created by night binoculars, a hearing aid or even a pair of glasses. Each is a virtual reality.

However, computer-generated sensation lets users create entirely new worlds, rather than simply providing different ways to perceive the real one.

Types of virtual reality

According to the new book “Virtual Reality — Through the New Looking Glass,” by Ken Pimentel and Kevin Teixeira, there are three types of computer-generated reality: total immersion, augmented reality and projected reality.

Total immersion, which attempts to bring the user completely into a computer-generated world by using the computer to stimulate the visual, aural and tactile senses, is what most people think of as virtual reality.

Total immersion today is usually accomplished with head-mounted displays, three-dimensional sound systems and pressure-sensitive gloves or body suits.

For example, total immersion would allow users to walk through and interact with a computer-generated architectural model of a building.

The Boston Computer Museum recently demonstrated a total-immersion application built by Intel Corp. and Sense8 Corp., a virtual-reality software company. The application allowed users to enter a virtual world and build a house with simple objects contained in that world.

In augmented reality, the computer generates images and sounds that are meant to merge with the user’s perceptions of his or her real surroundings.

Rather than a complete head cover, augmented-reality systems may use partially mirrored display devices that simultaneously reflect the computer-generated image and pass through the real world.

Some of the new heads-up displays currently used in fighter aircraft and proposed for automobiles are a type of augmented reality. In addition, augmented reality could be used to display computer-generated template information on top of a machine being assembled or repaired.

Projected reality marks the halfway point between total immersion and augmented reality: it lets users enter a computer-generated world while remaining solidly in the real one.

For example, in projected reality, the computer places a virtual image of a user on a large video screen; sensors track the user and allow for interaction with computer-generated objects and even other users.

The Boston Museum of Science’s recent Star Trek exhibit featured a projected-reality system. This application allowed one to five people to stand in a simulated transporter room and watch themselves “beam down” to the surface of a planet. Once on the planet, the participants could play games or investigate native fauna.

Other examples of projected reality include Sierra On-Line Inc.’s new bulletin-board service, The Sierra Network, which allows subscribers to build a virtual animated persona of themselves and then interact with other virtual personae.

The BBS’ virtual environments include standard board games such as checkers and chess, along with 3-D, full-motion worlds of paintball and flight simulators. There is even an adults-only world built around Sierra’s popular Leisure Suit Larry character.

Virtual-reality elements

The computer-based world is generated and updated on hardware that ranges from 486-based PCs to supercomputers from Cray Research Inc. and Thinking Machines Corp. Peripherals typically include sensors that track movement, stereoscopic video displays, speakers and some type of navigation and selection device such as a joystick or a 3-D mouse.

The reality engine — software that lets a user interact with the generated world — forms the heart of a virtual-reality system. The reality engine takes a stored set of objects and landscapes and merges the “virtualized” user into that environment. Every time the user moves or acts, the engine updates the world and re-creates it for the user.

The user's actions are communicated to the engine by the sensors and navigation devices; the world is described to the user through signals sent to the displays, the sound system and tactile-feedback devices.

A quick draw

Reality engines must operate in real time, sensing, recalculating, redrawing and displaying the world at least 10 times per second. The sound in a virtual-reality system has to be even more precise, because gaps or broken sound can severely distract the user.
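
As a rough illustration of that cycle, the sketch below paces a sense-recalculate-redraw loop to the 1/10-second frame budget described above. The read_sensors, transform_world and render stages are hypothetical stand-ins, not any vendor's API.

```python
import time

TARGET_RATE = 10                    # minimum frames per second for smooth motion
FRAME_BUDGET = 1.0 / TARGET_RATE    # 100 milliseconds per frame

def reality_engine_loop(read_sensors, transform_world, render, frames=30):
    """Run a fixed number of frames: sense, recalculate, redraw, then pace."""
    for _ in range(frames):
        start = time.monotonic()

        pose = read_sensors()            # head, hand and joystick tracking
        world = transform_world(pose)    # recalculate every visible object
        render(world)                    # redraw displays, update sound and touch

        # Sleep away whatever remains of the frame budget so output stays even.
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

if __name__ == "__main__":
    # Stub stages stand in for real trackers and displays.
    reality_engine_loop(
        read_sensors=lambda: {"x": 0.0, "y": 1.7, "z": 0.0},
        transform_world=lambda pose: {"objects": 200, "pose": pose},
        render=lambda world: None,
        frames=10,
    )
```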

Most virtual-reality systems run on either a PC or a workstation such as Silicon Graphics Inc.’s Iris Indigo and Iris Crimson.

Even with this powerful hardware, however, only a finite number of items can be repositioned and redrawn 10 or more times per second. Therefore, the landscape and elements of a virtual world must be described with a relatively small number of simple polygons.

A landscape, for instance, could be composed of (in increasing detail) 200 to 20,000 separate triangles. Each time the user moves, the perspective, location and shadow of every triangle must be recalculated and redrawn.
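
The toy sketch below shows what that per-move recalculation involves: every triangle is rotated around the viewer and projected back onto the screen plane. The triangle data and viewer angle are invented for the example; a real engine would also clip, shade and shadow each one.

```python
import math

def project_triangles(triangles, yaw, screen_dist=1.0):
    """Return 2-D screen coordinates for each 3-D triangle after a viewer turn."""
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    projected = []
    for tri in triangles:
        screen_tri = []
        for x, y, z in tri:
            # Rotate the vertex around the vertical axis (the viewer turning).
            rx = cos_y * x + sin_y * z
            rz = -sin_y * x + cos_y * z
            # Simple perspective divide onto the screen plane.
            rz = max(rz, 0.01)           # avoid dividing by zero at the eye
            screen_tri.append((screen_dist * rx / rz, screen_dist * y / rz))
        projected.append(screen_tri)
    return projected

# Two of the 200 to 20,000 triangles a landscape might contain.
landscape = [
    [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.5, 1.0, 5.0)],
    [(1.0, 0.0, 6.0), (2.0, 0.0, 6.0), (1.5, 1.0, 6.0)],
]
print(project_triangles(landscape, yaw=math.radians(15)))
```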

There is always a trade-off between the realism and resolution of an image and the speed of recalculation and redraw. Most simulations sacrifice realism to maintain a smooth 10 or more images per second.

However, there is a way to make a simple world look more realistic. With texturing, predetermined patterns can be overlaid on simple polygons. A square representing the side of a house, for instance, could be textured with a granite or brick pattern.

This method was used in the Boston Computer Museum’s immersion application: Simulated trees were composed of only two polygons, rather than 30 or more. A tree pattern added depth and realism.
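
A rough sketch of the idea, using an invented 4-by-4 "brick" pattern: rather than modeling detail with extra polygons, a single flat quad is filled by tiling the pattern across it.

```python
# A tiny repeating pattern stands in for a real texture image.
BRICK = [
    "####",
    "#  #",
    "####",
    "#  #",
]

def textured_quad(width, height, pattern=BRICK):
    """Fill a width-by-height quad by tiling the pattern across it."""
    rows = []
    for y in range(height):
        row = "".join(pattern[y % len(pattern)][x % len(pattern[0])]
                      for x in range(width))
        rows.append(row)
    return "\n".join(rows)

# One textured quad in place of dozens of tiny polygons.
print(textured_quad(16, 8))
```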

New hardware, especially machines built around Intel's i750 chip, is optimized for this type of visual texturing.

A usable virtual-reality system can be built on a 486-based computer for less than $20,000 with off-the-shelf parts. Truly complex and realistic worlds, however, require special workstations such as the $100,000 Reality Engine from Silicon Graphics.

Calling all senses

For a user to be brought completely into a virtual world, all senses must be engaged. However, that goal is still far in the future.

Current virtual-reality hardware does a good job with sound, and the hardware continues to improve. Texas Instruments Inc.’s MWave technology, which is just becoming available now, promises to add true 3-D aural depth at a fraction of the cost of current solutions.

Today’s hardware does a passable job on the video side.

Fast video accelerator boards such as Intel’s ActionMedia II, which is built around the i750 chip, continue to increase the capabilities of available hardware.

Although tactile feedback still has a long way to go, electric- and air-compressor-based devices are being tested and refined. Smell and taste, however, have yet to be addressed.

Uses of the virtual world

So what can you do in a virtual world?

Wandering through such an environment is typically described in degrees of freedom. The first three degrees of freedom — x, y, z — describe the location of a single point in the familiar Cartesian space.

Solid objects also need their orientation and rotation about the x, y and z axes. The last three degrees of freedom, pitch, yaw and roll, represent the angles of rotation about those axes.
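
A simple data type, sketched below, captures all six degrees of freedom; the field names follow the article and are not drawn from any particular VR toolkit.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0      # position along the three Cartesian axes
    y: float = 0.0
    z: float = 0.0
    pitch: float = 0.0  # rotation about the x axis, in degrees
    yaw: float = 0.0    # rotation about the y axis
    roll: float = 0.0   # rotation about the z axis

# A head-tracked user standing 1.7 meters up, turned 90 degrees to the left.
viewer = Pose(y=1.7, yaw=90.0)
print(viewer)
```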

Users can also manipulate objects in a virtual world.

Some systems use a joystick or pointing device to select an object, while others allow users to reach out with a virtual hand and grab. Once users have selected an object, they can typically scale, move, texture, “squish” or place the object on top of other objects.
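
As a hedged sketch of those manipulations, the snippet below scales a selected object and places it on top of another, using an invented center-plus-size representation of each object.

```python
def scale(obj, factor):
    """Grow or shrink the selected object uniformly."""
    obj["size"] = [s * factor for s in obj["size"]]

def move(obj, dx, dy, dz):
    """Translate the selected object through the world."""
    obj["center"] = [c + d for c, d in zip(obj["center"], (dx, dy, dz))]

def place_on_top(obj, base):
    """Rest obj on the upper face of base, keeping its horizontal position."""
    obj["center"][1] = base["center"][1] + base["size"][1] / 2 + obj["size"][1] / 2

block = {"center": [0.0, 0.5, 0.0], "size": [1.0, 1.0, 1.0]}
table = {"center": [0.0, 1.0, 0.0], "size": [2.0, 0.1, 1.0]}
scale(block, 0.5)
place_on_top(block, table)
print(block)
```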

Virtual reality has already been put to good use in the fields of architecture, finance, science, education and aviation. And many new applications are right around the corner.

Architects were among the first to embrace virtual reality. Whole buildings have been designed, explored and modified without a single physical model ever being constructed.

A company in Japan, for example, uses virtual reality to let buyers walk through and change a kitchen before it is actually built and installed. And the ability to virtually exist in a new structure has improved more than a few buildings standing today.

Financial applications can benefit almost immediately from virtual reality. Today's traders and market makers deal with an incredible amount of data about individual companies, economic forecasts, market movements and other news events. Virtual reality can help financial professionals make better sense of all that data through visual, spatial and aural immersion.

Some brokerages are working on applications that would let users surf over a landscape of stock futures, with color, hue and intensity indicating deviation from current share price. Sound could be used to convey other information, such as current trends or the debt/equity ratio. At the touch of a finger, single industries, regions or volatility levels could be highlighted.
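
A minimal sketch of that mapping, with invented ticker symbols and prices: each instrument's deviation from its current share price becomes a red-to-green shade that could color its spot on the landscape.

```python
def color_for_deviation(current, future):
    """Map percent deviation from the current price to an RGB triple."""
    pct = (future - current) / current
    pct = max(-0.25, min(0.25, pct))            # clamp to +/- 25 percent
    green = int(255 * max(pct, 0) / 0.25)       # brighter green for gains
    red = int(255 * max(-pct, 0) / 0.25)        # brighter red for losses
    return (red, green, 0)

# Invented quotes: (current share price, futures price).
quotes = {"ACME": (50.0, 57.5), "GLOBEX": (120.0, 111.0)}
for name, (spot, future) in quotes.items():
    print(name, color_for_deviation(spot, future))
```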

Managing virtual WANs

The harried network manager could benefit from virtual reality as well. Imagine being able to fly around a virtual representation of a global WAN. Network trouble spots and even traffic patterns could be represented as colors, hues or sounds, and problems could be anticipated before they happen rather than discovered after the fact.

Chemical and bioengineers can use virtual reality to explore new ways to build molecules or splice genes. Scientists can already build such models, but the models often lack force feedback and spatial reference; virtual reality can supply both. Some forward-thinking pharmaceutical companies are exploring virtual-reality technologies even now.

The medical field is also starting to explore virtual reality. Doctors, for instance, are beginning to use virtual reality to simulate complex operations before taking scalpel to flesh.

Virtual reality is also being used to bring people to places that they cannot physically enter. Specially constructed robots can be connected to virtual-reality systems to allow technicians, for example, to work inside a nuclear reactor or on Mars. These “waldos,” as they are known to science-fiction fans, allow users to work in dangerous places from the comfort of their own homes.

“Man,” according to Adams, “is a tool-making animal.” The computer is our most versatile and impressive tool yet — a tool that makes other tools. Virtual reality, as it emerges and matures, will become one of our most important computer-generated tools.

5 Replies to “Virtual Reality… Still A Waste Of Time?”

  1. highendserverfun says:

    Virtual reality is definitely not a waste of time! I grew up dreaming of becoming a part of a different world and this is the only thing that made this dream come true!

  2. Meanwhile, virtual reality in games is something that my husband and I are happy about. It makes us forget about our problems at work and be in a completely different world.

  3. Virtual reality, for me, is not a waste of time. I am addicted to games that make use of this concept and it provides me countless benefits. This may sound shallow but if not for these virtual reality games, I do not know how else I am going to let go of the stress that the real world is causing me.

  4. I remember the madness that was thrown about in the 90s when people thought virtual reality would become something huge. Problem: giant vaporware! Basic science fiction!

    It never came about, and probably will never except for the ultra-rich, I guess. It’s funny thinking about it now, though.

    1. You’re spot on here, uh, Pedro. This was one of the things that I thought we could do, flying cars be damned!

      Once we get over our cell phone fetish, maybe someone will work on it.
