or not performed), and the activity was not imbued with any further significance. If, however, we give our example some value, or associate a value with it—say, a signal that money should be transferred into my bank account from the flag-waver and their business partner's bank account—it is clear that a lot more is at stake. If, for example, my friend chooses to favour the business relationship over our friendship, my friend might collude with the flag-waver and tell me that they raised the red flag even when they did not. Or my friend might tell me that the flag-waver has not raised the flag when they have, colluding with a third party with the intention of defrauding both me and the third party by somehow accessing the funds in my bank account that I do not believe to have been transferred.

      The channels for reporting on actions—i.e., monitoring them—are vitally important within trust relationships. It is both easy and dangerous to fall into the trap of assuming that these channels are neutral, and that the only trust relationship which matters is the one between me and the acting party. In reality, the trust relationships that I have to the set of channels are key to maintaining the trust relationship that I have to the main actor being monitored—who or what we could call the primary trustee. In trust relationships involving computer systems, there are often multiple entities or components involved in actions, and these form a chain of trust in which each link depends on the others: the chain is typically only as strong as its weakest link.
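      One very rough way to picture this weakest-link property is to give each link in the chain a subjective confidence score and take the minimum across the chain. This is purely an illustrative sketch, not a model the book defines; the link names and scores below are invented for the example, loosely following the flag-waving scenario.

# Illustrative sketch only: score each link in a chain of trust with a
# subjective confidence value in [0, 1] and treat the whole chain as no
# stronger than its weakest link. The names and numbers are invented.

def chain_assurance(links: dict) -> float:
    """Return the assurance of the whole chain: its weakest link's score."""
    return min(links.values())

# Acting party -> reporting channel (my friend) -> my bank's records.
links = {
    "flag_waver_acts_as_agreed": 0.90,
    "friend_reports_honestly": 0.60,   # the reporting channel is the weak point
    "bank_records_the_transfer": 0.95,
}

print(f"chain assurance: {chain_assurance(links):.2f}")   # prints 0.60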

      In the end, however, it is up to my bank to provide valid information and ensure its correctness, though I will be the one who pays for these measures and am likely to bear any cost of invalid data: security economics raises its head again. This discussion about trust chains and monitoring will reappear later in the book as an important issue when designing and managing trust.

      We have examined, then, a variety of different trust definitions in the human realm, though none of them seems a perfect fit for our needs. Before we throw all of these out with no further consideration, however, there is an interesting question about the overlap between human-to-human relationships and human-to-computer relationships when the computer has a closely coupled relationship with an organisation. This is different to case 3, discussed in Chapter 1, which concerned the relationship between a bank and its systems, and more like case 2, where my trust relationship to the bank, and the bank's relationship to me, are characterised largely by interactions with its computer systems. In this case, punishment or other social impacts (positive or negative) may be more relevant, as we may be able to relate them to people rather than to the computers with which the actual interaction takes place. We will return to this question once we have addressed trust in institutions—a related but distinct topic—later in this chapter.

      The Prisoner's Dilemma

       Both prisoners stay silent, in which case they are both sentenced to one year in prison.

       One prisoner stays silent, but their colleague betrays them, in which case the betrayer goes free but the silent prisoner receives a sentence of three years in prison.

       Both prisoners betray the other, in which case they both end up in prison for two years.
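
      These three outcomes are enough to check the dominance argument that follows. As a minimal sketch (the dictionary encoding and comparison loop are invented for illustration, not taken from the text), the listing below records each outcome as years in prison, so lower is better, and compares my two choices against each possible choice by the other prisoner.

# Minimal sketch of the single-shot game, using the sentences listed above
# (years in prison, so lower is better). The dictionary encoding and the
# comparison loop are invented for illustration.

SENTENCES = {
    # (my choice, their choice) -> (my years, their years)
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

for their_choice in ("silent", "betray"):
    my_years_if_silent = SENTENCES[("silent", their_choice)][0]
    my_years_if_betray = SENTENCES[("betray", their_choice)][0]
    print(f"If they choose {their_choice!r}: silence costs me "
          f"{my_years_if_silent} years, betrayal costs me {my_years_if_betray}.")

# Whatever the other prisoner chooses, betrayal leaves me with a shorter
# sentence, which is why it is the individually rational choice in a
# single game.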

      The rational position for each prisoner to take is to betray the other because betrayal provides a better reward than staying silent. Three interesting facts fall out of this game and the mountains of theoretical and experimental data associated with it:

       If the prisoners play repeatedly but know in advance how many rounds there will be, then the most rational strategy is simply to keep betraying.

       If they do not know the number of repetitions, then more cooperative strategies become rational: staying silent, and punishing the other only in response to betrayal (a rough simulation of this follows the list).

       In reality, humans tend towards a more cooperative strategy when playing variants of this game, working together rather than betraying each other.
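
      A minimal simulation makes the contrast between these strategies concrete. Everything in the sketch below is an assumption for illustration: the strategy names, the per-round sentences (reused from the single-shot game above), and the randomly chosen, unknown number of rounds.

# Rough sketch of the repeated game with an unknown number of rounds.
# The strategy names, the per-round sentences (reused from the single-shot
# game above) and the random round count are assumptions for illustration.
import random

SENTENCES = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

def always_betray(my_history, their_history):
    return "betray"

def tit_for_tat(my_history, their_history):
    # Stay silent first, then copy whatever the other player did last round.
    return their_history[-1] if their_history else "silent"

def play(strategy_a, strategy_b, rounds):
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        sentence_a, sentence_b = SENTENCES[(move_a, move_b)]
        years_a += sentence_a
        years_b += sentence_b
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

rounds = random.randint(50, 150)   # neither strategy "knows" the horizon
print("tit-for-tat   vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat, rounds))
print("always-betray vs always-betray:", play(always_betray, always_betray, rounds))
print("tit-for-tat   vs always-betray:", play(tit_for_tat, always_betray, rounds))

# Over the repeated game, mutual cooperation accumulates far fewer
# prison-years than mutual betrayal, which is one reason conditional,
# forgiving strategies do well when the number of repetitions is unknown.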

       Enlarge the shadow of the future (make players more aware of future games and less bound into their—and their fellow