4) In order to function, AI must be able to access the large volumes of data provided by, among other sources, Big Data. Big Data refers to the ability to produce or collect digital data and to store, analyze and visualize them (Cardon 2015). The traces we leave on the Internet (“likes”, comments), the data sent by connected objects and so on, covering all aspects and all moments of our lives, constitute a considerable mass of information on how we act and think, and even on what we experience and feel (Dagiral et al. 2019). However, this raw data has little meaning or value as it stands. The challenge is to give it value through correlation, in order to transform it into information and then into knowledge about the subject, that is, into relevant, useful and targeted data: so-called “smart data”. Combined with predictive models, these systems are capable of evaluating and anticipating individual behavior in some detail, and even of seeking to modify attitudes and decisions (as in the Cambridge Analytica scandal).
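To make this raw data → correlation → prediction chain concrete, here is a minimal sketch in Python. All the data, column names and thresholds are invented for the purposes of illustration; this is a toy example of the general principle described above, not the method of any system cited in the text.

```python
# Toy illustration of the "raw data -> smart data -> prediction" chain.
# All values, column names and thresholds are invented for the example.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Raw digital traces: of little meaning or value in isolation.
traces = pd.DataFrame({
    "likes_per_day": rng.poisson(5, n),
    "comments_per_day": rng.poisson(2, n),
    "night_activity": rng.random(n),              # share of activity at night
    "steps_from_wearable": rng.normal(7000, 2000, n),
})

# A hypothetical behavior we want to anticipate (synthetically linked
# to two of the traces, purely for the sake of the demonstration).
latent = 0.4 * traces["likes_per_day"] + 3.0 * traces["night_activity"]
behavior = (latent + rng.normal(0, 1, n) > 4.0).astype(int)

# Step 1 - correlation: keep only the traces that carry information
# about the target behavior ("smart data").
corr = traces.corrwith(behavior).abs().sort_values(ascending=False)
smart_features = corr[corr > 0.2].index.tolist()
print("Correlations:\n", corr, "\nRetained features:", smart_features)

# Step 2 - predictive model: evaluate and anticipate individual behavior.
model = LogisticRegression().fit(traces[smart_features], behavior)
print("Predicted probability for one individual:",
      model.predict_proba(traces[smart_features].iloc[[0]])[0, 1])
```

The point of the sketch is the two-step logic: correlation first filters the mass of traces down to the few that are informative, and only these “smart” features are then fed to the predictive model that evaluates individual behavior.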
5) Immersive environments (based on virtual, augmented and tangible reality) consist of immersing a person in an artificial environment by stimulating their sensory (sound, vision, smell), cognitive (information and decision-making) and sensory-motor (haptics, gestures) modalities via appropriate digital devices (3D headsets, haptic gloves, etc.) (see Chapter 3). This world can be imaginary or symbolic, or can simulate certain aspects of the real world. Different types of immersive environments can be distinguished:
– Virtual reality makes it possible to extract ourselves from physical reality in order to virtually change time, place and/or types of interaction. It gives a person the opportunity to engage in sensory-motor and cognitive activity in an artificial, digitally recreated world (Fuchs et al. 2006). It thus makes it possible to simulate what we would do in a normal, real situation. These devices are often used in vocational training: employees are placed in situations close to their actual working conditions that would be difficult to reproduce otherwise (Ganier et al. 2013). For example, it may be necessary to simulate working at height for a technical intervention at the top of an electrical pylon, or a difficult operation on a patient with medical complications. Virtual reality can also be used in the medical field to treat anxiety-provoking situations, such as post-traumatic stress disorder (Moraes et al. 2016) or delusions of persecution (Freeman et al. 2016).
– Augmented reality consists of superimposing or enriching the real physical environment with virtual information using a video headset, a computer or any other projection system (Marsot et al. 2009). Applications exist for production or maintenance tasks: for instance, showing the operator the precise location of rivets to be fastened on an aircraft cabin, with the targets projected virtually onto the surface, or presenting an operator wearing a headset with the sequence and location of the operations to be performed to change a part on a large industrial machine, the steps and the circuit to be changed appearing superimposed on the engine. We therefore interact with the virtual in order to know how to act on the real.
– Augmented virtuality (tangible environment) consists of integrating real entities into a virtual environment, with which they can interact (Fleck and Audran 2016). For example, an architect may physically manipulate scale models of houses in a virtually recreated living space in order to evaluate the best exposure to sun and wind and thus calculate possible energy losses. In this case, we interact with the real in order to act on the virtual.
The integration of these various immersive technologies (virtual reality, augmented reality, augmented virtuality) is called mixed reality (Moser et al. 2019).
All these new generations of technologies are thus intended to replace or improve all or part of human functions (physical, sensory and/or cognitive). The objective is to optimize capacities at work (learning, understanding, decision-making, action, etc., both individual and collective) and to make work processes more efficient and effective in order to increase reactivity and profitability. From a highly deterministic perspective, the choice of such systems also appears to aim at the emergence of a working model oriented towards individual excellence, organizational agility, collective intelligence and an efficient pooling of activity between humans and machines. This would also explain the enthusiasm of companies for such systems, as Champeaux and Bret (2000, p. 45) already noted with regard to technologies that are now more conventional: “Adopting them is no longer an opportunity, but an obligation. It is no longer a question of whether we are going to go there, but of how we are going to go there, that is, with what strategy, what investments, what objectives”.
However, while it appears that technology can affect certain dimensions of the activity, it cannot determine or shape it according to predefined, expected patterns. There is no technological determinism in the strict sense of the term. In other words, a technological innovation does not in itself impose a single type of organization or business model; it makes various forms possible. It is the use made of the tool (i.e. its conditions of use, whether individual, collective or organizational, as well as the user's project and experience), and not the intrinsic characteristics of the technology, that will determine its effects, which can therefore be quite contrasting. It is these paradoxes that we will now examine in the following section.
1.3. Five paradoxes of the diffusion of technologies in/on the activity
1.3.1. Sense of loss of control over the activity vs increased control over the activity
The multiplication of digital media at work (shared digital calendars and spaces, messaging and social networks, collaborative platforms, reporting tools) is accompanied by a requirement for permanent availability and reactivity. The activity is thus increasingly determined, punctuated and paced by various digital injunctions, alerts and solicitations. In the research we conducted on the impact of technology on the management profession (Bobillier Chaumon et al. 2018), we identified a category that we called “self-service or dispossessed managers”. These managers felt deprived of the ability to define or control their work schedule (work “imposed” by shared calendars) or simply to keep to and achieve the objectives they had set themselves for the day (work “prevented” by the many digital interruptions that forced them to suspend the current activity in order to initiate a new, unplanned task). This feeling of loss of control over the activity went hand in hand, paradoxically, with the increased control that employees experienced over their activity.
Indeed, more and more so-called prescriptive (Bobillier Chaumon 2017) or info-normative (Frenkel et al. 1992) technical systems determine and frame the work to be done, while also being capable of assessing whether the work has been done well, that is, whether it complies with work standards and norms. The software packages that manage the dialogue between