Think, for instance, of a component that calculates the risk associated with an event. It takes as input a probability in the range from 0 to 1 and a dollar amount, and then outputs the product of the two according to our formula. What would happen if a new version were released that, instead of taking the probability as a value in the range from 0 to 1, expected a percentage (in the range from 0 to 100)? This would be a change to the contract, and any components integrated with this one, whether for input or output, would need to be informed of the change and possibly updated in order for the system to work as expected.
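To make this concrete, here is a minimal sketch of the two versions of the contract, assuming a Python implementation; the function names, figures, and validation checks are illustrative assumptions rather than a definitive implementation.

def expected_loss_v1(probability: float, amount_usd: float) -> float:
    """Original contract: probability is a value in the range 0 to 1."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * amount_usd

def expected_loss_v2(percentage: float, amount_usd: float) -> float:
    """New contract: the same input is now a percentage in the range 0 to 100."""
    if not 0.0 <= percentage <= 100.0:
        raise ValueError("percentage must be between 0 and 100")
    return (percentage / 100.0) * amount_usd

# A caller still holding the old expectation passes 0.05 to mean "5%".
print(expected_loss_v1(0.05, 1_000_000))  # 50000.0: what the caller intends
print(expected_loss_v2(0.05, 1_000_000))  # 500.0: valid input, silently wrong

Note that the broken integration raises no error at all: 0.05 is a perfectly valid percentage, so the violated expectation surfaces only as a quietly wrong answer, which is exactly why such a change to the contract must be communicated.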
To return to our definition:
“Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation”.
The contract is the “specific expectation” in this case. The contract is usually defined with an application programming interface (API), either expressed using one of a common set of descriptive languages or specific to the particular language in which the component is written. The first reason it is important to be able to discuss risk management and mitigation, then, is that it allows us to construct a business by integrating various systems along the lines of the contracts they provide. The second reason for its importance is security.
Defining Correctness in System Behaviour
Earlier, we skirted slightly around the idea of correctness in terms of components and their behaviours, but one way of thinking about security is that it is concerned with maintaining the correctness of the behaviour of your systems in the face of attempts by malicious actors to make them act differently. Whether these malicious actors wish to use your systems to further their own ends—to mine crypto-currency, exfiltrate user data, attack other targets, or host their own content—or to disrupt your business, the outcome is the same: they want to use your systems in ways you did not intend.
To guard against this, you need to know:
How the systems should act
How to recognise when they do not act as expected
How to fix any problem that arises
The first goal is an expression of our trust definition, and the second is about monitoring to ensure that the trust is correctly held. The third, fixing the problem, is about remediation. All three of these goals may seem very obvious, but it is easy to miss that many security breakdowns arise precisely because trust is not explicitly stated and monitored. The key thesis of this book is that without a good understanding of the trust relationships between systems in the contexts in which they operate, or might operate, it is impossible to understand the possibilities available for malicious compromise (and, indeed, unintentional malfunction). Many attacks involve taking systems and using them in ways, and in contexts, not considered by those who designed or who operate them. A full understanding of trust relationships allows better threat modelling, stronger defences, closer monitoring, and easier remediation when things go wrong, partly because specifying the contexts in which behaviours are defined allows for better consideration of where and how systems should be deployed.
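Continuing the earlier sketch (again, the names and the specific checks are hypothetical), the second of these goals can be made concrete as a monitoring wrapper that tests the component's output against the stated expectation:

import logging

logger = logging.getLogger("contract-monitor")

def monitored_expected_loss(probability: float, amount_usd: float) -> float:
    """Call the risk component, then check the result against its contract."""
    result = expected_loss_v1(probability, amount_usd)
    # The contract lets us state explicitly what "acting as expected" means:
    # an expected loss can never be negative or exceed the amount at risk.
    if not 0.0 <= result <= amount_usd:
        # Recognising the breakdown is what makes remediation possible.
        logger.error("contract violation: got %r for p=%r, amount=%r",
                     result, probability, amount_usd)
        raise RuntimeError("risk component no longer acting as expected")
    return result

The design choice here is that the expectation is written down and checked at the trust boundary, rather than assumed: when the check fails, we learn immediately that the trust relationship has broken, instead of discovering it much later in a corrupted result.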
We can state our three aims differently. To keep our systems working the way we expect, we need to know:
What trust relationships exist
How to recognise when a trust relationship has broken
How to re-establish trust
There is, of course, another thing we need to know: how to defend our systems in the first place and design them so that they can be defended. These are topics we will address later in the book.
Notes
1. I sympathise with anyone tasked with translating this book: “trust” is a concept that is very culturally and linguistically situated.
2. This book is not a work of literary criticism, and we will generally be steering clear of Derrida, Foucault, deconstructionism, post-structuralism, and other post-modernist agendas.
3. Or at least what appears to be a human—a topic to which we will return in a later chapter.
4. Gambetta, 1988.
5. Hern, 2017.
6. There is an interesting point about grammar here. In British English, collective nouns or nouns representing an organisation, such as Google, can often take either a singular or a plural verb form. In the US, they almost always take the singular. So, saying “The company were adamant that they…”, an easy way to show that there are multiple actors possibly being represented here, works in British English but not in US English. Thus British English speakers may be more likely than US readers to consider an organisation as a group of individuals rather than as a monolithic corporate whole.
7. Wikipedia, “Boeing 737 MAX groundings”, 2021.
8. See Rescorla 2000 for a definition of the HTTPS protocol, the core component of the communication.
9. Wikipedia, “Morris Worm”, 2020.
10. Clark, 2014, p. 22.
11. Or attempt to do so: humans are quite good at seeking out those who do not want to interact with them, and bothering them anyway, as any tired parent of young children will tell you.
CHAPTER 2 Humans and Trust
As Richard Harper points out in his preamble to a collection of essays on trust, computing, and society,1 much of the literature around trust is not really about trust at all, but about mistrust. It is the setting up, and maybe the demolishing, of a trust relationship that could be labelled as mistrust; and given the consequences of its failing, getting this part right is important. If you take a simple view of trust as something binary, either there or not, rather than considering it as a more complex relationship or set of relationships, then the area which is not black and white, but is tinged with complications, is what is relevant. That is what could fairly be labelled, within the literature, as mistrust.
It would be nice to believe that we can take a reductionist view of trust, which allows us to follow this lead, moving all the complicated parts into a box labelled mistrust and having a well-defined set of parts we need to consider that are all just about trust; but we saw in the