What is a three-dimensional, computer-generated situation that simulates the real world?

VR can be defined as “an advanced form of human-computer interface that allows the user to interact with and become immersed in a computer-generated environment in a naturalistic fashion” (Schultheis and Rizzo, 2001, p. 82).

From: Comprehensive Clinical Psychology (Second Edition), 2022

As far as Simulated Environment/Reality goes, Microsoft is sitting pretty at the top of the innovations list with ‘HoloLens’.

CNET columnist Nick Statt described HoloLens as:

“…a sleek, flashy headset with transparent lenses. You can see the world around you, but suddenly that world is transformed — with 3D objects floating in midair, virtual screens on the wall and your living room covered in virtual characters running amok.”

As Neil Armstrong famously said upon stepping onto the Moon, this is probably an instance of ‘one small step for a man, one giant leap for mankind’ in the field of simulated, or virtual, reality.

The concept of ‘HoloLens’ now raises one big question: What is going to be the numero uno technology in Simulated Environment?

But before we get to that question, the three apparently similar technologies need to be dissected and explained in simple terms for the average Joe. Why? Because just like mobile technology, Simulated Environment can soon become a part of our everyday lives.

The 3 technologies explained

Virtual Reality: Virtual reality is the use of computer-generated technology to create an artificial, three-dimensional simulated environment, or to recreate a real-life environment or situation. The environment so created can be explored and interacted with.

In simple terms, we can say that virtual reality is a simulated, artificial 3D environment, and the person becomes a part of this new world. The person is so immersed in the 360-degree view of this new world that they receive little or no sensory input from the room their body is actually in.

Augmented Reality: Augmented reality is a technology that layers computer-generated images on top of physical surroundings in order to make them more meaningful through the ability to interact with them. This superimposition of images onto real-world surroundings gives a sense of illusion, a partial virtual reality.

In simple terms, augmented reality is a technique of enriching the real world with digital information and media, such as 3D models and videos. This is done by overlaying the digital information on the real environment in the real-time camera view of your device. The aim is to create a system in which the user cannot tell the difference between the real world and the augmentation added to it. Such systems are designed to enhance the user’s sensory perception of the augmented world they are seeing or interacting with.

Holography: Holography is lensless photography. It is a photographic technique that records the light scattered from an object and then presents it in a way that appears three-dimensional. Such images are called holograms.

In layman’s terms, we can say it is a technique of capturing a 3D object onto a 2D surface in such a way that it appears as the same 3D object, with the same surroundings, when viewed under proper illumination. Holograms are pictures that never die: if a hologram is cut into pieces, each piece can still reconstruct the complete 3D scene that was captured, albeit from a narrower range of viewpoints.

How Do They Work?

VR: Virtual reality can be experienced only through the use of headsets such as those from Oculus, Sony, and HTC, and it requires three things: a PC, console, or smartphone to run the app or game; a headset that secures a display in front of your eyes; and some kind of input, such as head tracking, controllers, hand tracking, voice, or on-device buttons and trackpads. With tethered headsets, video is sent from the console or computer to the headset via an HDMI cable; with mobile headsets like Samsung’s and Google’s, the smartphone itself is fitted into the headset.
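
To make the head-tracking step a little more concrete, here is a minimal Python sketch, not taken from any actual headset SDK, of how an orientation reading from a head tracker could be turned into the view matrix a renderer would use to redraw the scene. The quaternion sample and the 1.7 m head height are made-up illustrative values.

```python
# Minimal sketch: turning a head-tracker orientation reading into a view matrix.
# Assumes the tracker reports orientation as a unit quaternion (w, x, y, z).
import numpy as np

def quaternion_to_rotation(w, x, y, z):
    """Convert a unit quaternion into a 3x3 rotation matrix."""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def view_matrix(head_quaternion, head_position):
    """Build a 4x4 view matrix: rotate the world opposite to the head, then translate."""
    rotation = quaternion_to_rotation(*head_quaternion)
    view = np.eye(4)
    view[:3, :3] = rotation.T                           # inverse rotation (transpose of orthonormal matrix)
    view[:3, 3] = -rotation.T @ np.asarray(head_position)
    return view

# Hypothetical tracker sample: head turned about 30 degrees around the vertical axis,
# eyes roughly 1.7 m above the floor.
angle = np.radians(30)
sample = (np.cos(angle / 2), 0.0, np.sin(angle / 2), 0.0)   # quaternion (w, x, y, z)
print(view_matrix(sample, head_position=(0.0, 1.7, 0.0)))
```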

AR: The functioning of AR is not much different from that of VR. It needs glasses or a camera to see the augmented view. The camera captures video in the traditional manner. In a marker-based AR system, a marker appears in the video, and wherever you hold the marker the system superimposes the 3D model of the object on it. To the viewer, it appears as though the image has materialized by magic. The size and movement of the image are tracked by the computer via the placement and positioning of the marker.
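
To give a feel for the superimposition step described above, here is a minimal Python/OpenCV sketch of just that part. It assumes the four corners of the marker have already been detected (real systems use trackers such as OpenCV’s ArUco module for this); the corner coordinates and the simple text overlay below are made-up stand-ins for an actual camera frame and 3D model.

```python
# Minimal sketch of the AR superimposition step: once the four corners of a marker
# have been located in a camera frame, warp a flat overlay onto that marker.
import cv2
import numpy as np

# Stand-in for one frame from the device camera (a real app would read frames
# from cv2.VideoCapture): 480 x 640 pixels, three colour channels.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Hypothetical flat rendering of the virtual object we want to pin to the marker.
overlay = np.full((200, 200, 3), 255, dtype=np.uint8)
cv2.putText(overlay, "3D", (40, 130), cv2.FONT_HERSHEY_SIMPLEX, 2.5, (0, 0, 255), 5)
h, w = overlay.shape[:2]

# Corners of the overlay image, and the (assumed) detected marker corners in the frame.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[220, 140], [420, 160], [410, 330], [210, 300]])

# The homography maps the flat overlay onto the marker's position, size, and perspective.
H, _ = cv2.findHomography(src, dst)
warped = cv2.warpPerspective(overlay, H, (frame.shape[1], frame.shape[0]))

# Paste the warped overlay onto the frame wherever it has content.
mask = warped.sum(axis=2) > 0
frame[mask] = warped[mask]
cv2.imwrite("augmented_frame.png", frame)
```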

Holography: With the present form of the technology, you don’t require any gear to view it, since applications have so far been limited largely to holographic stickers; research is still ongoing. Holography is based on the principle of interference, the phenomenon that occurs when two waves meet while travelling through the same medium. Holography is the technology of producing holograms. To make a hologram, we need light of a single colour, such as laser light: the beam is split into a reference beam and an object beam, and the interference pattern the two create is what gets recorded.
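
For the curious, here is a small numerical sketch of the interference principle: it adds a tilted reference plane wave to the spherical wave coming from a single object point and records the intensity of their sum, which is essentially what a holographic plate stores. The wavelength, angle, and distances are arbitrary illustrative values, not from any particular recording setup.

```python
# Numerical sketch of the interference principle behind holography:
# record the intensity of (reference wave + object wave) on a flat "plate".
import numpy as np

wavelength = 0.6328e-6                      # roughly a HeNe laser wavelength, in metres
k = 2 * np.pi / wavelength                  # wavenumber

# Coordinates on a 2 mm x 2 mm recording plate, sampled on a 500 x 500 grid.
x = np.linspace(-1e-3, 1e-3, 500)
y = np.linspace(-1e-3, 1e-3, 500)
X, Y = np.meshgrid(x, y)

# Reference beam: plane wave arriving at a small angle to the plate.
theta = np.radians(2.0)
reference = np.exp(1j * k * X * np.sin(theta))

# Object beam: spherical wave from a single point 5 cm in front of the plate,
# scaled so its amplitude is comparable to the reference beam.
z_obj = 0.05
r = np.sqrt(X**2 + Y**2 + z_obj**2)
object_wave = z_obj * np.exp(1j * k * r) / r

# The plate records only intensity; the cross terms encode the object's phase,
# which is what lets the hologram reconstruct a 3D image later.
hologram = np.abs(reference + object_wave) ** 2
print(hologram.shape, float(hologram.min()), float(hologram.max()))
```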

Background of Technologies

The most surprising thing about the history of these technologies is that they have been around for many decades. Virtual reality, with roots reaching back to the nineteenth century, is the oldest of the three; its beginnings preceded the time when the concept was coined and formalized. If we focus strictly on the aim of creating illusion, then the earliest attempts are surely the 360-degree murals, or panoramas. These paintings were intended to fill the viewer’s entire field of vision, making them feel present at some historical event or scene. In 1838 the stereoscope was invented, producing the first stereoscopic images, and in 1939 the View-Master was developed and patented and used for virtual tourism. The technology has kept growing since then, with the first VR head-mounted display appearing in 1960.

Holography comes next on the list, dating back to 1947, when its concept was first proposed. The invention of the laser in 1960 made the first practical holograms possible. The field grew until about 1990 and then went largely dormant, with work limited mostly to Japan and to U.S. military research.

Augmented reality is old too, but it is the youngest of the three, dating back to 1968, when the world got its first head-mounted display. That system used computer-generated graphics to show users simple wireframe drawings.

Gear for All Technologies:

While we have gear for virtual reality and augmented reality, there is no specific gear for holography, as that technology is still being researched in laboratories. We do see some traces of the technique in Microsoft’s HoloLens, but that device depends more on its augmented-reality features.

Examples of All:

Well, we suppose you now have a general idea about all of these much-talked-about technologies. But it is not easy to decide which is the best of them all, or which will be the technology of the future, as they have only just started to flourish and each has its own limitations. Given those limitations, what we may well see in the future is a mixture of all three, called mixed reality, and HoloLens is the best example of that. Another change could be that, in the future, we would no longer need to wear any gear at all to view simulated reality.

virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of “being there” (telepresence) is effected by motion sensors that pick up the user’s movements and adjust the view on the screen accordingly, usually in real time (the instant the user’s movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.

Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media—from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres—over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer—an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Panorama of the Battle of Gettysburg, painting by Paul Philippoteaux, 1883; at Gettysburg National Military Park, Pennsylvania

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs—stereoscopic images, motion chair, audio, temperature changes, odours, and blown air—that he patented in 1962 as the Sensorama Simulator, designed to “stimulate the senses of an individual to simulate an actual experience realistically.” During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted “stereoscopic 3-D TV display” that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and ’60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called “light guns”). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a “man-computer symbiosis” and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart’s invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called “augmented reality” because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images (see photograph). This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

Early head-mounted display device developed by Ivan Sutherland at Harvard University, c. 1967.