Another definition of virtual reality focuses on its potential functionalities. It allows us to: “extract oneself from physical reality to change virtually the time, place and/or type of interaction: interaction with an environment simulating reality or interaction with an imaginary or symbolic world” (Fuchs et al. 2006, p. 7).
Indeed, the use of virtual reality makes it possible to bypass the physical laws of our world. In virtual reality, time behaves differently: it is possible, for example, to stop time while performing an action, or to go backwards in time. Virtual reality also makes it possible to change place. While manipulating visualization and control devices in the physical world (imagine a user in an immersive room manipulating a force-feedback arm), the user has the feeling of being in the virtual environment in which he/she is immersed (e.g. in an operating theater). Interaction with virtual reality also differs from interaction with the physical world, since it follows the rules and logic of the application. For example, the user can move around by holding down a button on a joystick rather than actually moving in the physical world.
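To make this decoupling of virtual time from physical time concrete, here is a minimal Python sketch (our illustration, not drawn from the source; all names are ours) of a simulation clock that can be paused, resumed or run backwards independently of wall-clock time:

```python
class SimulationClock:
    """Virtual clock decoupled from physical time (illustrative sketch)."""

    def __init__(self):
        self.sim_time = 0.0   # seconds of virtual time elapsed
        self.scale = 1.0      # 1.0 = real time, 0.0 = paused, negative = rewind

    def pause(self):
        self.scale = 0.0

    def resume(self, speed=1.0):
        self.scale = abs(speed)

    def rewind(self, speed=1.0):
        self.scale = -abs(speed)

    def tick(self, real_dt):
        """Advance virtual time by one frame of real time `real_dt`."""
        self.sim_time += self.scale * real_dt
        return self.sim_time


clock = SimulationClock()
clock.tick(0.016)   # one ~60 Hz frame of real time
clock.pause()       # virtual time stops while the user acts
clock.tick(0.016)   # virtual time unchanged
clock.rewind()      # virtual time now runs backwards
print(round(clock.tick(0.016), 3))
```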
3.2.4. A technical definition of virtual reality
We will end with a technical definition of virtual reality. It is: “a scientific and technical field exploiting computing (1) and behavioral interfaces (2) in order to simulate in a virtual world, the behavior of 3D entities, which interact in real time with each other and with one or more users, in pseudo-natural immersion (3) via sensor-motor channels” (Fuchs et al. 2006, p. 8).
Let us clarify some aspects of this definition:
1) The term “computing” refers to all the hardware and software parts of the system.
2) Behavioral interfaces are of three types:
– sensory interfaces, which provide information to the user about changes in the virtual environment. For example, a change in color may inform a user simulating an assembly operation that the tool he/she is using has collided with another object;
– motor interfaces, which inform the system of the user’s motor actions. For example, the system can exploit data on the user’s position in an immersive room;
– sensorimotor interfaces, which provide information to both the computer system and the user. For example, a force-feedback arm informs the system about the user’s gesture and, at the same time, forces the user to exert effort to simulate the resistance of the part being pierced.
3) Finally, we speak of “pseudo-natural” immersion because the user does not act in the virtual environment the same way he/she would naturally act in the physical world. There are sensorimotor biases in the interaction with virtual reality: for example, instead of walking from one point to another in the virtual environment, the user can teleport (see the sketch below). Furthermore, the virtual environment does not necessarily provide all the sensory stimuli of the physical world.
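As an illustration of such a sensorimotor bias, here is a minimal Python sketch of teleport-style locomotion (names and the range check are our assumptions, not from the source): the viewpoint jumps directly to a target point instead of passing through the intermediate positions that walking would produce.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def teleport(user_position, target, max_range=10.0):
    """Move the viewpoint instantly to `target` if it is within range.

    A sensorimotor bias: the displacement is not produced by walking,
    so no intermediate positions are traversed. The range check is an
    illustrative assumption.
    """
    dx = target.x - user_position.x
    dz = target.z - user_position.z
    if (dx * dx + dz * dz) ** 0.5 <= max_range:
        return Vec3(target.x, user_position.y, target.z)  # keep eye height
    return user_position  # target too far: stay put

pos = teleport(Vec3(0.0, 1.7, 0.0), Vec3(3.0, 0.0, 4.0))
print(pos)  # Vec3(x=3.0, y=1.7, z=4.0)
```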
It is also important to note that the definition of virtual reality is based on two conditions: the user must be immersed in a virtual environment, and he/she must also interact within this environment in real time. Therefore, 360° videos or 3D cinema are excluded from this definition, since the user is immersed in a world without really being able to interact with it.
3.3. The main interaction devices
Virtual reality systems are computer systems that include various peripherals. These are referred to as devices and can be classified into four categories (Burkhardt 2003):
– display devices;
– motion and position capture devices;
– proprioceptive and cutaneous feedback devices;
– sound input and presentation devices.
Each of these device categories will be discussed in the following sections.
3.3.1. Display devices
Visual presentation devices are the most common. It is rare to find systems that do not rely on vision, although one could cite systems developed for visually impaired people that incorporate spatialized sounds.
Virtual reality systems can be classified into three categories based on the degree of immersion of their visual presentation devices (Mujber et al. 2004). Non-immersive (or desktop-VR) systems consist of conventional computer monitors and do not use specific hardware. Semi-immersive systems refer to widescreen displays, wall projection systems, interactive tables and head-mounted displays without stereoscopic vision. Stereoscopic vision means that the user perceives the virtual environment in 3D because a different image is displayed to each eye. Finally, fully immersive systems include head-mounted displays with stereoscopic vision and CAVE-type immersive rooms. In the latter, users view the virtual environment in cubic rooms where three to six sides are screens.
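To illustrate stereoscopic vision, the following Python sketch (our illustration; the 63 mm interpupillary distance is an assumed average, not a value from the source) derives the two camera positions from which the scene would be rendered, one image per eye:

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.063):
    """Return left/right camera positions for stereoscopic rendering.

    Each eye's camera is offset by half the interpupillary distance
    (IPD, ~63 mm on average) along the head's right axis; rendering
    the scene once per camera yields the two different images.
    """
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir = right_dir / np.linalg.norm(right_dir)
    half = (ipd / 2.0) * right_dir
    return head_pos - half, head_pos + half  # (left eye, right eye)

left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left, right)  # [-0.0315 1.7 0.] [0.0315 1.7 0.]
```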
In summary, non-immersive systems do not include specific technologies and do not offer stereoscopic vision, whereas semi-immersive and fully immersive systems use specific technologies and can offer users stereoscopic vision.
3.3.2. Motion and position capture devices
Motion and position capture devices are also used. They provide the system with real-time information on the user’s actions and position, and on how these change over time. Sensors can track the user’s entire body, a part of their body or an object held by the user (Fuchs and Mathieu 2011). Different types of sensors exist: mechanical sensors, electromagnetic sensors and optical sensors.
These sensors allow the virtual environment to react to the user’s actions. For example, the system can detect the user’s head movements and adapt the virtual environment accordingly, as if the user were actually in a place and turned their head to explore it visually.
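As a minimal illustration of this coupling, the Python sketch below converts tracked head angles into a gaze vector that a renderer could use to orient the virtual camera each frame; the function and its angle conventions are our assumptions, not a specific system’s API.

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert tracked head angles into a unit gaze vector.

    yaw: rotation around the vertical axis; pitch: up/down tilt.
    Pointing the virtual camera along this vector each frame means
    that turning one's head pans the virtual environment.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),  # x
        math.sin(pitch),                  # y (up)
        math.cos(pitch) * math.cos(yaw),  # z (forward at yaw = 0)
    )

# Each frame: read the tracker, recompute the camera orientation.
print(view_direction(90.0, 0.0))   # looking along +x
print(view_direction(0.0, -30.0))  # looking slightly downward
```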
3.3.3. Proprioceptive and cutaneous feedback devices
There are three types of proprioceptive and cutaneous feedback devices. The first are haptic (or force feedback) devices. They inform the system about the movements made by the user and, at the same time, act on those movements; for this reason they are called sensorimotor interfaces. They allow the user to feel the stiffness, weight, inertia and friction of the objects they are manipulating (Gosselin and Andriot 2006). These devices include force-feedback arms, as well as wearable devices such as gloves and exoskeletons. For example, in industrial training, these devices can transmit force to the user’s arms so that the user feels he/she has to exert strong pressure on a workpiece in order to pierce it.
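One common way to render such resistance, shown here as a hedged Python sketch rather than the authors’ method, is a penalty (spring) model in which the feedback force grows with the tool’s penetration into the virtual surface; the stiffness and force-cap values are illustrative.

```python
def contact_force(tool_depth, stiffness=800.0, max_force=15.0):
    """Penalty-based haptic rendering sketch (spring model F = k * d).

    `tool_depth` is how far the haptic tool tip has penetrated the
    virtual surface (metres); the returned force (newtons) is sent
    to the force-feedback arm, which pushes back against the user's
    hand. Real devices clamp output to their hardware limits.
    """
    if tool_depth <= 0.0:
        return 0.0                          # no contact, no force
    return min(stiffness * tool_depth, max_force)

# The haptic loop typically runs at ~1 kHz for stable force rendering.
for depth in (0.0, 0.002, 0.01, 0.05):
    print(depth, "->", contact_force(depth), "N")
```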
The second type of proprioceptive and cutaneous feedback device is the tactile device. Tactile devices provide sensory information to the user about contact with a virtual object. This information may concern its shape, roughness, texture or temperature (Benali-Koudja and Hafez 2006). These devices draw on technologies such as dot-matrix printing and Braille displays for the blind. This type of device is still underdeveloped today.
The final type of proprioceptive and cutaneous feedback device is the motion simulation device. Motion simulation devices apply forces to the user’s body to change its orientation in space or to subject it to accelerations (Fuchs 2006). They include seats and other single-user devices, as well as simulation booths; the latter integrate the display devices for the virtual environment and are usually multi-user. These devices can be used to simulate the movements experienced by passengers inside a car cabin or on a boat.
3.3.4. Sound input and presentation devices
Sound input devices include voice recognition systems. They allow the user to interact with the system through voice commands that are recognized and processed by the computer.
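As a minimal sketch of this pipeline, assuming a recognizer that returns a text transcript, the Python fragment below maps recognized phrases to application actions; the phrases and handlers are hypothetical examples, not from the source.

```python
def handle_command(transcript, actions):
    """Dispatch a recognized utterance to an application action.

    `transcript` is the text returned by the speech recognizer;
    `actions` maps command phrases to callables.
    """
    phrase = transcript.strip().lower()
    action = actions.get(phrase)
    if action is None:
        return "unrecognized command: " + phrase
    return action()

# Hypothetical command set for a training application.
actions = {
    "stop time": lambda: "simulation paused",
    "teleport": lambda: "teleport mode enabled",
    "next step": lambda: "advancing to the next assembly step",
}
print(handle_command("Stop time", actions))
print(handle_command("open menu", actions))
```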
Sound presentation devices include software and hardware interfaces that enable spatialized sound (Tsingos and Warusfel 2006). They allow the simulation of sound sources positioned in the space around the user, reinforcing the feeling of immersion in the virtual environment.
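As a rough illustration, and deliberately simpler than the HRTF-based techniques real spatial audio engines use, the Python sketch below derives left/right gains for a sound source from its azimuth and distance relative to the listener; the panning and rolloff formulas are our illustrative assumptions.

```python
import math

def stereo_gains(source_pos, listener_pos, listener_yaw=0.0):
    """Very simplified sound spatialization: panning plus distance rolloff.

    Computes a (left, right) gain pair from the source's azimuth and
    distance in the horizontal plane. Positions are (x, z) tuples;
    yaw is the listener's heading in radians.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dz), 1e-6)
    azimuth = math.atan2(dx, dz) - listener_yaw  # 0 = straight ahead
    pan = math.sin(azimuth)                      # -1 = left, +1 = right
    attenuation = 1.0 / (1.0 + distance)         # simple distance rolloff
    left = attenuation * math.sqrt((1.0 - pan) / 2.0)
    right = attenuation * math.sqrt((1.0 + pan) / 2.0)
    return left, right

print(stereo_gains((2.0, 2.0), (0.0, 0.0)))  # source ahead and to the right
```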