Trust in Computer Systems and the Cloud

Author: Mike Bursell
Publisher: John Wiley & Sons Limited
ISBN: 9781119692317

      This may seem like an immense amount of unpacking to do on what was originally presented as a simple statement. But when we move over to the world of computing systems, we need to consider exactly this level of detail, if not an even greater level.

      Let us now move into the world of computing and see what happens when we start to apply some of these concepts there. We will begin with the concept of a trusted platform: something that is often a requirement for any computation that involves sensitive data or algorithms. Immediately, questions present themselves. When we talk about a trusted platform, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). But the context of what we mean by a trusted platform is likely to be very different between a mobile phone, a military installation, and an Internet of Things (IoT) gateway. That trust may erode over time (Are patches applied? Is an attacker more likely to have compromised the platform a day, a month, or a year after the workload was provisioned to it?). We should also never simply say, following the third corollary (on the lack of trust symmetry), that “these entities trust each other” without further qualification, even if we are referring to the relationships between one trusted system and another trusted system.

      One concrete example that we can use to examine some of these questions is when we connect to a web server using a browser to purchase a product or service. Once they connect, the web server and the browser may establish trust relationships, but these are definitely not symmetrical. The browser has probably established that the web server represents the provider of particular products and services with sufficient assurance for the person operating it to give up credit card details. The web server has probably established that the browser currently has permission to access the account of the user operating it. However, we already see some possible confusion arising about what the entities are: what is the web server, exactly? The unique instance of the server's software, the virtual machine in which it runs (if, in fact, it is running in a virtual machine), a broader and more complex computer system, or something entirely different? And what ability can the browser have to establish that the person operating it can perform particular actions?
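      To see this asymmetry in practice, here is a minimal sketch of the browser's half of such a connection, using Python's standard ssl module; the hostname example.com stands in for a real provider. The client verifies cryptographically that the server holds a certificate, issued by a trusted certificate authority, for the name it asked for. Nothing in that handshake tells the server who is operating the browser: that half of the relationship is typically established separately, at the application layer, with passwords, session cookies, or similar credentials.

          import socket
          import ssl

          # The browser's half of the relationship: check that the server
          # presents a certificate, issued by a trusted CA, for this hostname.
          context = ssl.create_default_context()  # loads the system's CA roots
          with socket.create_connection(("example.com", 443)) as raw:
              with context.wrap_socket(raw, server_hostname="example.com") as tls:
                  print(tls.getpeercert()["subject"])

          # The server's half is established quite separately, for example by
          # checking a password or a session cookie; the handshake above tells
          # the server nothing about who is operating the browser.

      Note how little the two halves have to do with one another: the two trust relationships are built with different mechanisms, on different assumptions, and for different purposes.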

      When you write a computer program that prints out “Hello, world!”, who is “saying” those words: you or the computer? This may sound like an idle philosophical question, but it is more than that: we need to be able to talk about entities as part of our definition of trust, and in order to do that, we need to know what entity we are discussing.

      What exactly, then, does agency mean? It means acting for someone: being their agent—think of what actors' agents do, for example. When we engage a lawyer or a builder or an accountant to do something for us, we set very clear boundaries about what they will be doing on our behalf. This is to protect both us and the agent from unintended consequences. There exists a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent for another person or organisation. There are contracts and agreed restitutions—basically, punishments—for when things go wrong. Say that my accountant buys 500 shares in a bank with my money, and then I turn around and say that they never had the authority to do so: if we have set up the relationship correctly, it should be entirely clear whether or not the accountant had that authority and whose responsibility it is to deal with any fallout from that purchase.

      The situation is not so clear when we start talking about computer systems and agents. To think a little more about this question, here are two scenarios:

       In the classic film WarGames, David Lightman (Matthew Broderick's character) has a computer that goes through a list of telephone numbers, dialling them and then recording the number for later investigation if they are answered by another machine that attempts to perform a handshake (a sketch of such a dialling loop follows these scenarios). Do we consider that the automatic dialling performed by Lightman's computer is an act carried out with agency? Or is it when the computer connects to another machine? Or when it records the details of that machine? I suspect that most people would not argue that the computer is acting with agency once Lightman gets it to complete a connection and interact with the other machine—that seems very intentional on his part, and he has taken control—but what about before?

       Google used to run automated programs against messages received as part of the Gmail service.5 The programs were looking for information and phrases that Google could use to serve ads. The company were absolutely adamant that they, Google, were not doing the reading: it was just the computer programs.6 Quite apart from the ethical concerns that might be raised, many people would (and did) argue that Google, or at least the company's employees, had imbued these automated programs with agency so that philosophically—and probably legally—the programs were performing actions on behalf of Google. The fact that there was no real-time involvement by any employee is arguably unimportant, at least in some contexts.
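      The dialling behaviour in the first scenario can be captured in a few lines. This is a deliberately simplified sketch: the dial function below is a hypothetical stand-in for real modem hardware, simulating the rare case where another machine answers and attempts a handshake.

          import random

          def dial(number: str) -> bool:
              # Hypothetical stand-in for modem hardware: True means another
              # machine answered and attempted a handshake. Simulated here.
              return random.random() < 0.01

          def war_dial(numbers):
              # Record every number answered by a machine, for later investigation.
              return [number for number in numbers if dial(number)]

          hits = war_dial(f"555-{n:04d}" for n in range(10_000))
          print(hits)

      At which line, if any, does agency enter: in the loop, in the dialling, or in the recording of the result?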

      Another example may help us to consider the question of context. Consider a hypothetical automated defence system for a military base in a war zone. Let us say that, upon identifying intruders via its cameras, the system is programmed to play a recording over loudspeakers, warning them to move away; and, in the case that they do not leave within 30 seconds of a warning, to use physical means up to and including lethal force to stop them proceeding any further. The base commander trusts the system to perform its job and stop intruders: a trust relationship exists between the base commander and the automated defence system. Thus, in the language of our definition of trust:

       “The base commander holds an assurance that the automated defence system will identify, warn, and then stop intruders who enter the area within its camera and weapon range”.

      But what happens to this trust relationship if the context changes? Consider the following possibilities:

       The base is no longer in a war zone, and the rules of engagement change

       Children who do not understand the warnings, or who are unable to leave, enter the coverage area

       A surge of refugees enters the area—so many that those at the front are unable to move, despite hearing and understanding the warning

      These may seem to be somewhat contrived examples, but they serve to show how brittle trust relationships can be when contexts change. If the entity being trusted with defence of the base were a soldier, we would hope the soldier could be much more flexible in reacting to these sorts of changes, or at least know that the context had changed and that protocol dictated contacting a superior or other expert for new orders. The same is not true for computer systems. They operate in specific contexts; and unless they are architected, designed, and programmed to understand not only that other contexts exist but also how to recognise changes in context and how their behaviour should change in response, they will simply continue to apply the rules of the original context, whatever the consequences.
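      A deliberately crude sketch makes this brittleness concrete. All of the names below are hypothetical; the point is that the context is frozen into the code at programming time, so nothing in the logic can notice that the war is over, that the intruder is a child, or that a person at the front of a crowd cannot retreat.

          from dataclasses import dataclass

          @dataclass
          class Intruder:
              warned_at: float | None = None  # time the warning was played, if any

          def respond(intruder: Intruder, now: float) -> str:
              # The rules are fixed when the system is programmed: warn, wait
              # 30 seconds, then escalate. No input to this function carries
              # any information about whether the original context still holds.
              if intruder.warned_at is None:
                  intruder.warned_at = now
                  return "play warning recording"
              if now - intruder.warned_at < 30:
                  return "wait"
              return "use physical means to stop intruder"

      The base commander's assurance holds only for as long as the assumptions baked into this logic hold.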