It turns out that if I care about risk, I should be more concerned about the monitor than the keyboard. Once we have calculated the risk, we can then consider mitigations: what to do to manage the risk. In the case of my desktop computer, I might decide to take out an extended manufacturer's warranty to cover the monitor but just choose to buy a new keyboard if that breaks.
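The comparison above can be sketched in a few lines of code, assuming the simple view of risk as probability multiplied by loss; the failure probabilities and replacement costs below are invented purely for illustration, not taken from any real failure data:

```python
# Hedged sketch: comparing expected loss (risk) for two components.
# All probabilities and costs here are invented for illustration.

def expected_loss(probability_of_failure: float, cost_of_failure: float) -> float:
    """Risk, treated simply as probability multiplied by loss."""
    return probability_of_failure * cost_of_failure

monitor_risk = expected_loss(0.05, 300.0)   # pricey to replace, fails rarely
keyboard_risk = expected_loss(0.20, 20.0)   # cheap to replace, fails more often

print(f"monitor risk:  {monitor_risk:.2f}")   # 15.00
print(f"keyboard risk: {keyboard_risk:.2f}")  # 4.00
```

Even though the keyboard is more likely to fail, the monitor dominates the risk calculation because its replacement cost is so much higher, which is why the warranty decision differs between the two.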
Risk is all around us and has been since before humans became truly human, living in groups and inhabiting a social structure. We can think of risk as arising in four categories:
| ASSESSMENT | MITIGATION |
|---|---|
| Easy: If there are predators nearby, they might kill us … | Easy: … so we should run away or hide. |
| Easy: If our leader gets an infection, she may die … | Difficult: … but we don't know how to avoid or effectively treat infection. |
| Difficult: If the river floods, our possessions may be washed away … | Easy: … so if we camp farther away from the river, we are safer. |
| Difficult: If I eat this fruit, it may poison me … | Difficult: … but I have no other foodstuffs nearby and may go hungry or even starve if I do not eat it. |
For the easy-to-assess categories, both the probability and the loss are simple to calculate. For the difficult-to-assess categories, either the probability or the loss is hard to calculate. What is not clear from the simple formula we used earlier to calculate risk is that you are usually weighing a risk against a corresponding benefit. In the case of the risk associated with the river, there are advantages to camping close to it—easy access to water and the ability to fish, for example—and in the case of the fruit, the benefit of eating it is that it may nourish me, and I do not need to trek further afield to find something else to eat, thereby using up valuable energy.
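This trade-off can be made concrete with a small sketch: subtracting the risk term from the benefit gives a crude net expected value for each choice. The numbers below are invented for illustration; the point is the comparison, not the values.

```python
# Toy model (invented figures): weighing a benefit against the risk
# that accompanies it, using benefit minus (probability * loss).

def net_expected_value(benefit: float, probability_of_loss: float,
                       loss: float) -> float:
    """Benefit of an action minus its risk (probability multiplied by loss)."""
    return benefit - probability_of_loss * loss

# Camping close to the river: water and fishing, but a real chance of flood.
close_to_river = net_expected_value(benefit=10.0, probability_of_loss=0.3, loss=8.0)
# Camping far from the river: safer, but less convenient.
far_from_river = net_expected_value(benefit=6.0, probability_of_loss=0.05, loss=8.0)

print(f"close to river: {close_to_river:.1f}")
print(f"far from river: {far_from_river:.1f}")
```

With these invented figures the riskier option still comes out ahead, which is exactly why the decision is genuinely difficult: risk alone, without the benefit, tells only half the story.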
Many of the risks associated with interacting with other humans fit within the last category: difficult to assess and difficult to mitigate. In terms of assessment, humans often act in their own interests rather than those of others, or even of a larger group; and the impact of an individual not cooperating may be small—hurt feelings, for example—or large—inability to catch game—or even retribution towards a member of the group. In terms of mitigation, it is often very difficult to guess what actions to take to encourage an individual, particularly one you do not already know, to ensure that they interact with you in a positive manner. You can, of course, avoid any interactions at all, but that means you lose access to any benefits from such interactions, and those benefits can be very significant: new knowledge, teamwork for hunting, more strength to move objects, safety in numbers, even having access to a larger gene pool, to name just a few.
Humans developed trust to help them mitigate the risks of interacting with each other. Think of how you have grown to know and trust new acquaintances: there is typically a gradual process as you learn more about them and trust them to act in particular ways. As David Clark points out when discussing how we develop trust relationships, this “is not a technical problem, but a social one”.10 Two dimensions are at work here: time, and the various contexts in which trust relationships can operate. Once you trust an individual to act as a babysitter, for instance, you are managing the risks associated with leaving your children with that person. Alternatively, you might trust somebody to make you a cup of tea in the way that you like it: you are mitigating the chance that they will add sugar to it or, in a more extreme case, poison you and steal all of the loyalty points you have accrued with your local cafe.
Trust is not, of course, the only mitigation technique possible when considering and managing risk. We have already seen that you can avoid interactions altogether,11 but two alternatives that are different sides of the same coin are punishment and reward. I can punish an individual if they do not interact with me as I wish, or I can reward them if they do. Many trust relationships between individuals are arguably built up over time with a combination of these mitigations, even if the punishment is as little as a frown and the reward as little as a smile. What is even more interesting is that the building of the trust relationship is two-way in this case: the individual being rewarded or punished needs to trust the other individual to apply rewards and punishments consistently, based on the behaviour and interactions presented.
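One way to picture this gradual, reward-and-punishment-driven process is as a score nudged up or down over repeated interactions. The model below is a toy invented for illustration (the update rule and values are assumptions, not anything from the research literature), but it captures the intuition that trust accumulates slowly and can be knocked back by a negative interaction.

```python
# Toy model (invented for illustration): a trust score in [0.0, 1.0]
# nudged up by rewards and down by punishments over repeated interactions.

def update_trust(trust: float, cooperated: bool,
                 reward: float = 0.1, punishment: float = 0.2) -> float:
    """Raise trust after a positive interaction, lower it after a negative
    one, keeping the score clamped to the range [0.0, 1.0]."""
    trust += reward if cooperated else -punishment
    return max(0.0, min(1.0, trust))

trust = 0.5  # a new acquaintance: neither trusted nor distrusted
for outcome in [True, True, False, True]:  # smile, smile, frown, smile
    trust = update_trust(trust, outcome)

print(round(trust, 2))
```

Note the deliberate asymmetry in the defaults: a punishment moves the score further than a reward, reflecting the common observation that trust is easier to lose than to gain.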
Risk, Trust, and Computing
Risk is important in the world of IT and computing. Organisations need to know whether their systems will work as expected or if they will fail for any one of many reasons: for example, hardware failure, loss of power, malicious compromise, poor software. Given that trust is a way of mitigating risk, are there opportunities to take what humans have learned from creating and maintaining trust relationships and transfer it to this world? We could say that humans need to “trust” their systems. If we think back to the cases presented earlier in the chapter, this fits our third example, where we discussed the bank trusting its IT systems.
Defining Trust in Systems
The first problem with trusting systems is that the world of trust is not simple when we start talking about computers. We might expect that computers and computer systems, being less complex than humans, would be easier to reason about with respect to trust, but we cannot simply apply the concept of trust to interactions with computers in the same way we do to interactions with humans. The second problem is that humans are good at inventing and using metaphors, applying a concept to different contexts to make sense of them, even when the concept does not map perfectly onto the new contexts. Trust is one of these concepts: we think we know what we mean when we talk about trust, but when we apply it to interactions with computer systems, it turns out that the concepts we think we understand do not map perfectly.
There is a growing corpus of research and writing around how humans build trust relationships with each other and with organisations, and this is beginning to be applied to how humans and organisations trust computer systems. What is often missing is a realisation that interactions between computer systems themselves—case four in our earlier examples—are frequently modelled in terms of trust relationships. But as these models lack the rigour and theoretical underpinnings needed to make strong statements about what is really going on, we are left unable to discuss risk and risk mitigation in any detail.
Why does this matter, though? The first answer is that when you are running a business, you need to know that all the pieces are correct and doing the correct thing in relation to each other. This set of behaviours and relationships makes up a system, and the pieces its components, a subject to which we will return in Chapter 5: The Importance of Systems. We can think of this as similar to ensuring that your car is made up of the correct parts, placed in the correct locations. If you have the wrong brake cable, then