complex. We certainly do want to get to a clearer, more refined definition, but we first need to delve deeper into what trust looks like and how it is defined in the various spheres of relevant academic study. Although our interest is less in the human-to-human realm than in trust relationships that involve computer systems (whether human-to-computer or computer-to-computer), it is important to understand the theoretical and academic underpinnings of trust in the human-to-human realm. This is not just because relating some of this thinking to our realm lets us compare what we mean with what we do not mean, but also because any application of trust across realms is necessarily metaphorical and deserves thorough examination. As discussed in Chapter 1, “Why Trust?”, metaphors are useful but can be misleading and need to be employed with care. The other reason is that unless we can unpick what may be intended when the word trust is used, it is difficult to define what we wish to communicate as we narrow down the various associated concepts and choose those that we want to use.

      First, we need to admit that the field of study regarding trust is both active and wide: there are a lot of definitions of human-to-human trust, many of which are not easily reconcilable. Most of the definitions, understandably, focus on social elements, and, as noted by Harper, there is a strong overtone of mistrust. Here are some examples supplied by other noted authors ruminating on the notion of trust:

       Trust in social interactions is “the willingness to be vulnerable based on positive expectation about the behaviour of others”.2 Cheshire notes that Baier's definition3 “depends on the possibility of betrayal by another person”.

       For Hardin, when considering interpersonal trust, “my trust in you is encapsulated in your interest in fulfilling the trust”.4 Cheshire distinguishes trustworthiness from trust and discusses how risk-taking can act as a signal that one party considers another trustworthy.5 Dasgupta6 has seven starting points for establishing trust, of which three are related directly to punishment, one to choice, one to perspective, one to context, and one to monitoring.

      All of these examples may be helpful when considering human-to-human trust relationships—though even there, they generally seem a little vague in terms of definition—but if we are to consider trust relationships involving computers and system-based entities, they are all insufficient, essentially because all of them relate to human emotions, intentions, or objectives. Applying questions of emotion to, say, a mobile phone's connection to a social media site is clearly not a sensible endeavour, though we will examine later how intention and objectives may have some relevance in discussions about trust within the computer-to-computer realm.

      One definition that is widely cited, and which repays closer examination, comes from Gambetta:

        trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he [sic] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.7

      There are some interesting points here. First, Gambetta discusses agents, though his usage is somewhat different to that which we employed in Chapter 1. We used agent to describe an entity acting for another entity, whereas he is using a different definition, where an agent is an actor that takes an active role in an interaction. Confusingly, the usage within computing sometimes falls between these two definitions. A software agent is considered to have the ability to act autonomously in a particular situation—the term autonomous agent is sometimes used equivalently—but that is not necessarily the same as acting for a person or an organisation. However, in the absence of artificial general intelligence (AGI), it would seem that software agents must be acting on behalf of humans or human organisations, even if the intention is to “set them free” to act autonomously or even learn behaviour on their own.

      The second important point that Gambetta makes is that a trust relationship—he is specifically discussing human trust relationships—is partly defined by expectations formed before any actions are performed. This resonates closely with the points we made earlier about the importance of collecting information to allow us to form assurances. His third point is related to the second, in that he discusses the possible inability of the trustor to monitor the actions in which they are interested. Given such a lack of assuring information, any evaluation of whether trust is warranted must be based on the same data: that presented beforehand.

      For his fourth point, however, Gambetta also identifies that there are contexts in which actions can be monitored, though he seems to tie such monitoring to actions that the trustor will take. This seems too restrictive on the trustor, as there may be actions taken by the trustee that do not lead to corresponding actions by the trustor—unless the very lack of such actions is considered an action in itself. More important, however, is the implicit assumption (following from the explicit negative in the previous statement) that monitoring should take place.

      The word friend was chosen carefully because a trust relationship is already implicit in the set of interactions that we usually associate with someone described as a friend. The same is not true of the word somebody, which I used to denote the person who was to raise the flag. The situation as described is likely to lead us to presume a fairly high probability that the trust relationship I have to the friend is sufficient to assure me that they will pass on the information correctly. But what if my friend standing on the corner is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved in business partnerships, we may immediately begin to assume that my friend's motivations with respect to correct reporting are not neutral.