Trying to apply our definition of trust to ourselves is probably a step too far, as we are likely to find ourselves delving into questions of the conscious, subconscious, and unconscious, which are not only hotly contested after well over a century of study in the West, and over several millennia in the East, but are also outside the scope of this book. However, all of the preceding points are excellent reasons for being as explicit as possible about the definition and management of trust relationships and using our definition to specify all of the entities, assurances, contexts, etc. Even if we cannot be sure exactly how our brain is acting, the very act of definition may help us to consider what cognitive biases are at play; and the act of having others review the definition may uncover further biases, allowing for a stronger—more rational—definition. In other words, the act of observing our own thoughts with as much impartiality as possible allows us, over time, to lessen the power of our cognitive biases, though particularly strong biases may require more direct approaches to remedy or even recognise.
Trusting Others
Having considered the vexing question of whether we can trust ourselves, we should now turn our attention to trusting others. In this context, we are still talking about humans rather than institutions or computers, though we will later be applying these lessons to computers and systems. What is more, as we noted when discussing cognitive bias, our assumptions about others—and the systems they build—will have an impact on how we design and operate systems involved with trust. Given the huge corpus of literature in this area, we will not attempt to survey much of it, but it is worth considering whether any points we have already come across may be useful to us, or whether related work might cause us to sit back and look at our specific set of interests in a different light.
The first point to bear in mind when thinking about trusting others, of course, is all that we have learned from the discussions of cognitive bias in the previous section. In other words, other human entities are just as prone to cognitive bias as we are, and just as unaware of it. Whenever we consider a trust relationship to another human, or consider a trust relationship that someone else has defined or designed—a relationship, for instance, that we are reviewing for them or another entity—we have to realise not only that they may be acting irrationally but also that they are likely to believe that they are acting rationally, even given evidence to the contrary.40

Stepping away from the complexity of cognitive bias, what other issues should we examine when we consider whether we can trust other humans? We looked briefly, at the beginning of this chapter, at some of the definitions preferred in the literature around trust between humans, and it is clear both that there is too much to review here and that much of it will not be relevant. Nevertheless, it is worth considering—as we did with cognitive bias—whether any particular concerns are worthy of examination. We noted, when looking at the Prisoner's Dilemma, that some strategies are more likely to yield positive results than others. Axelrod's work noted that increasing opportunities for cooperation can improve outcomes, but given that the Prisoner's Dilemma sets out as one of its conditions that communication is not allowed, such cooperation must be tacit. Since we are considering a wider set of interactions, there is no need for us to adopt this condition (and some of the literature that we have already reviewed seems to follow this direction), and it is worth being aware of work that specifically considers the impact when various parties are allowed to communicate.
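To make the idea of tacit cooperation concrete, here is a minimal sketch of an iterated Prisoner's Dilemma in Python. The payoff values and the two strategies shown (tit-for-tat and unconditional defection) are illustrative assumptions chosen for demonstration, not a reproduction of Axelrod's tournament.

```python
# Illustrative sketch of an iterated Prisoner's Dilemma; payoff values and
# strategies are assumptions for demonstration only.

# Payoff to (player_a, player_b) for each pair of moves: "C" cooperate, "D" defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # a is exploited, b gains
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Never cooperate, whatever the opponent does."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds and return cumulative scores for both players."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Tacit cooperation emerges between two tit-for-tat players...
    print(play(tit_for_tat, tit_for_tat))    # (30, 30)
    # ...while an unconditional defector gains little against tit-for-tat.
    print(play(always_defect, tit_for_tat))  # (14, 9)
```

Over repeated rounds, the mutually cooperating pair accumulates a far higher joint score than the pairing involving an unconditional defector, which captures the intuition behind Axelrod's observation that opportunities for cooperation, even tacit cooperation, improve outcomes.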
One such study of communication between parties, by Morton Deutsch and Robert M. Krauss,41 looked at how bargaining between partners differs when communication is bilateral, unilateral, or absent, and when the ability to threaten the other party is bilateral, unilateral, or absent. Their conclusions, brutally relevant during the Cold War period in which they were writing, were that bilateral positions of threat—where both partners could threaten the other—were “most dangerous” and that the ability to communicate made less difference than expected. This suggests an extrapolation to non-human systems that is extremely important: it is possible to build—hopefully unwittingly—positive feedback loops into automated systems that can lead to very negative consequences. Probably the most famous fictional example of this is the game Global Thermonuclear War played in the film WarGames,42 in which an artificial intelligence connected to the US nuclear arsenal nearly starts World War III.
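The danger of such loops can be illustrated with a toy model (an assumption made purely for illustration, not drawn from Deutsch and Krauss) in which two automated agents each respond to a perceived threat with a slightly larger one:

```python
# Toy model of a runaway positive feedback loop between two automated agents.
# The policy, factor, and ceiling are arbitrary values chosen for illustration.

def escalate(observed_threat: float) -> float:
    """Naive policy: respond to a perceived threat with a slightly larger one."""
    return observed_threat * 1.2

def run(initial_threat: float = 1.0, ceiling: float = 100.0) -> int:
    """Count how many exchanges it takes two mutually reactive agents to
    escalate from a minor initial signal to the (arbitrary) ceiling."""
    threat_a, threat_b = initial_threat, 0.0
    exchanges = 0
    while max(threat_a, threat_b) < ceiling:
        threat_b = escalate(threat_a)   # agent B reacts to A
        threat_a = escalate(threat_b)   # agent A reacts to B
        exchanges += 1
    return exchanges

if __name__ == "__main__":
    print(f"Runaway escalation after {run()} exchanges")
```

Because each agent's policy amplifies the other's output, even a minor initial signal escalates without bound in a handful of exchanges; a damping policy (a factor below 1.0) would instead converge, which is the property we would want to design into any automated system that reacts to the output of another.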
Schneier talks about the impact that moral systems may have on cooperation between humans and the possibly surprising positive impact that external events—such as terrorist attacks or natural disasters—tend to have on the tendency for humans to cooperate with each other.43 Moral systems are well beyond the scope of our interest, but there are some interesting issues associated with how to deal with rare and/or major events in terms of both design and attacks on trust relationships. We will return to these in Chapter 8, “Systems and Trust”.
Trust, But Verify
Without wanting to focus too much on mistrust, we should not, however, assume good intent when interacting with other humans. Humans do not always do what they say they will do, as we all know from personal experience. In other words, they are not always trustworthy, which means our trust relationships to them will not always yield positive outcomes. What is more, even if we take our broader view of trust relationships, where the action need not be positive as long as it is what we expect, humans are not always consistent, so we should not always expect our assurances to be met in that case, either.
There is a well-known Russian proverb popularised in English by President Ronald Reagan in the 1980s as “trust, but verify”. He was using it in the context of nuclear disarmament talks with the Soviet Union, but it has since been widely adopted by the IT security community. The idea is that while trust is useful—and important—verification is equally so. Of course, one can only verify the actions—or, equally, inactions—associated with a trust relationship over time: it makes no sense to talk about verifying something that has not yet happened. We will consider in later chapters how this aspect of time is relevant to our discussions of trust; but Nan Russell, writing for Psychology Today about trust for those in positions of leadership within organisations,44 suggests that “trust, but verify” is the best strategy only when the outcome—in our definition, the actions that the trustor has assurance will be performed by the trustee—is more important than the relationship itself. Russell's view is that continuous verification is likely to signal to the trustee that the trustor distrusts them, leading to a self-reinforcing loop in which the trustee fails to perform as expected, thereby confirming the trustor's distrust. What this exposes is that the trust relationship (from the leader to the person being verified) to which Russell refers actually exists alongside another relationship (from the person being verified to the leader) and that actions related to one may affect the other. This is another example of how important it is to define trust relationships carefully, particularly in situations between humans.
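In system terms, the “verify” half of the proverb only has meaning once actions (or inactions) have accumulated over time within a defined context. The sketch below makes that point concrete; the class, field names, and scoring are hypothetical illustrations rather than anything prescribed by the discussion above.

```python
# Hypothetical sketch of "trust, but verify": a trustor records the assurances
# it holds about a trustee's actions and later checks observations against them.

from dataclasses import dataclass, field

@dataclass
class TrustRelationship:
    trustor: str
    trustee: str
    context: str                                   # context in which the assurance holds
    expected: set = field(default_factory=set)     # actions the trustor expects
    observed: list = field(default_factory=list)   # actions actually seen, over time

    def record(self, action: str) -> None:
        """Log an action performed by the trustee; verification happens later."""
        self.observed.append(action)

    def verify(self) -> float:
        """Fraction of expected actions observed so far. Only meaningful once
        time has passed and actions (or inactions) have accumulated."""
        if not self.expected:
            return 1.0
        met = sum(1 for action in self.expected if action in self.observed)
        return met / len(self.expected)

if __name__ == "__main__":
    rel = TrustRelationship("leader", "team member", "quarterly report",
                            expected={"draft submitted", "figures checked"})
    rel.record("draft submitted")
    print(rel.verify())  # 0.5: one of the two assurances has been verified so far
```

Note that the sketch deliberately separates recording from verification: how often the trustor chooses to run the check is itself a design decision which, as Russell's argument suggests, may affect the very relationship being verified.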
Attacks from Within
To return to the point about not necessarily trusting other humans, there is often an assumption that all members of an organisation or institution will have intentions broadly aligned with each other, the institution, or the institution's aims. This leads to trust relationships between members of the same organisation based solely on their membership of that organisation, and not on any other set of information. This, we might expect, would be acceptable and, indeed, sensible, as long as the context for expected actions is solely activities associated with the organisation. If, say, I join a netball club, another member of the club might well form a trust relationship to me that expects me to lobby our local government officers for funding for a new netball court, particularly if one of the club's stated aims is that it wishes