be defined and taken into account for the evaluation of the risks.12

      Even though they are not necessarily comprehensive, categories are useful to minimize the risk of omission during the analysis. They take the form of catalogues, typologies or knowledge bases in existing methodologies [33, 55]. For their part, attributes help analysts identify all relevant factors for each component. The use of templates in certain methodologies [33] fulfills a similar role. Table A.1 in Appendix A provides a summary of the categories and the attributes suggested for each component.

      1The Working Party 29, or Article 29 Working Party, is a group set up under the EU Directive. It includes a representative from each European data protection authority. One of its missions is to provide recommendations to the European Commission and to the public with regard to data protection and the implementation of the EU Directive.

      2This person is the “data subject” defined in Definition 2.2.

      3Here we define “processing” in the same way as the EU Directive, i.e., as “any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction.”

      4This issue is further discussed in Section 5.1.

      5However, a data subject may act as a risk source for another data subject.

      6For example, in the case of video-surveillance systems or location-based services.

      7This happened, for example, in the case of the murder of actress Rebecca Schaeffer in 1989, where the murderer obtained her home address from Department of Motor Vehicles records [104, 140].

      8This phase can typically take the form of a more traditional risk/benefit analysis considering the potential consequences of privacy harms for the controller (mostly in financial, reputational and legal terms).

      9The severity is sometimes called the “impact” or “adverse impact” [17, 55].

      10In general, the decision can be to accept a risk, to avoid or mitigate it, or to share or transfer it. Mitigation or avoidance measures can be combinations of technical, organizational and legal controls.

      11For example, the categories of data being processed by a health information system may include health data, contact data, identification data, genetic data, etc.

      12For example, the level of motivation of a risk source or the level of precision of location data.

      CHAPTER 3

       Processing System

      The first step of a privacy risk analysis is the definition of its scope, which requires a detailed and comprehensive description of the processing system under consideration. This description should include all personal data flows between the components of the system as well as its communications with the outside world. This information is necessary for the privacy risk analysis, in particular for the identification of privacy weaknesses and of the capacities of the risk sources to get access to personal data.

      To summarize, the description should be sufficient to ensure that all potential privacy problems arising out of the system can be detected [106]. In most cases (at least for systems developed following a rigorous methodology), detailed documentation of the system should already be available, and it should be sufficient to supplement it with some additional information to meet the requirements of a privacy risk analysis.

      In this chapter, we present a set of attributes useful to characterize a processing system with a view to privacy risk analysis (Section 3.1) and introduce the running example used throughout this book (the BEMS System) with its attributes (Section 3.2).

      In order to meet the needs of the subsequent steps of the privacy risk analysis, the description of the system should include at least the following attributes (a structural sketch is given after the list):

      1. The functional specification describing the functionalities that the system is supposed to provide, including the potential use cases or scenarios. The functional specification should be consistent with the declared purpose of the system. It is also useful during the analysis to check compliance with the data minimization principle (personal data should be collected only if necessary to achieve the purpose).

      2. The controls including all existing measures (technical and organizational) to protect personal data. Precise knowledge of existing controls is necessary to detect potential privacy weaknesses.

      3. The interface covering all interactions of the system with the external world, including users and other systems, and whether or not the consent of the user can be collected. The interface is useful, inter alia, to detect potential risks related to data dissemination, transfers to third parties and lack of control by the users.

      4. The data flows describing the internal view of the system, including the basic components, their locations, supporting assets, the access rights and the data flows between them. The analysis of the data flows is instrumental in the search for privacy weaknesses.

      5. The supporting assets consisting of all software, hardware and networks on which the system relies and the stakeholders controlling them. Supporting assets, in conjunction with data flows and actors, are useful to analyze the capacities of the risk sources to get access to personal data.

      6. The actors having access to the system or interacting with it, including roles inside the organization of the data controller and the access rights of each actor.
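      To make these attributes more concrete, the sketch below records them as a simple data model. It is only an illustration under stated assumptions: the class names, field names and the example values (a hypothetical smart-metering scenario) are choices made for this sketch, not terminology or notation prescribed by the book.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only; names and fields are assumptions of this sketch.

@dataclass
class Actor:
    name: str                                  # e.g., a hypothetical "billing operator"
    role: str                                  # role inside the data controller's organization
    access_rights: List[str] = field(default_factory=list)

@dataclass
class DataFlow:
    source: str                                # component emitting the data
    destination: str                           # component receiving the data
    data_categories: List[str] = field(default_factory=list)

@dataclass
class ProcessingSystemDescription:
    functional_specification: str              # declared purpose, use cases, scenarios
    controls: List[str]                        # existing technical and organizational measures
    interfaces: List[str]                      # interactions with users and external systems
    data_flows: List[DataFlow]                 # internal view of the system
    supporting_assets: List[str]               # software, hardware, networks and who controls them
    actors: List[Actor]                        # everyone with access to the system

# Hypothetical instantiation (illustrative values, not the book's BEMS description).
system = ProcessingSystemDescription(
    functional_specification="compute energy bills from metering data",
    controls=["access control on the billing server"],
    interfaces=["user web portal"],
    data_flows=[DataFlow("smart meter", "billing server", ["consumption data"])],
    supporting_assets=["metering network", "billing server"],
    actors=[Actor("billing operator", "employee of the data controller", ["read consumption data"])],
)
```

      Such a structured description makes it easier to check, for each subsequent step of the analysis, that no attribute has been left undocumented.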

      This set of attributes is inspired by previous work on privacy risk analysis. For example, the view-based approach proposed by Oetzel and Spiekermann [106] includes:

      1. The system view, which takes into account applications, system components, hardware, software, interfaces and the network topology.

      2. The functional view, which consists of generic business processes, use cases, technical controls, roles and users.

      3. The data view, which consists of categories of data processed by the system, data flow diagrams, actors and data types.

      4. The physical environment view, which includes physical security and operational controls such as backup and contingency measures.

      A whole chapter (Chapter 4) is dedicated to personal data in this book. The other components presented in [106] overlap with the above list of attributes.

      The definition of the data flows is a key part of the characterization of a system. A standard approach is to resort to data flow diagrams (DFD), which are structured, graphical representations based on four main types of building blocks: external entities, data stores, data flows and processes [166]. Deng et al. [40] propose an enhanced representation of data flows with trust boundaries to separate trustworthy and untrustworthy elements of the system. These boundaries are used to identify the potential risk sources. As argued in [166], the definition of the DFD is a crucial step in a privacy risk analysis and an incorrect DFD is likely to result in erroneous conclusions about privacy risks. Moreover, the granularity of the DFD dictates the level of detail at which the analysis can be conducted.
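      As a rough illustration of how a DFD can be encoded for such an analysis, the sketch below represents elements, flows and a trust boundary, and flags the flows that cross the boundary (natural first candidates when searching for privacy weaknesses). The element types follow the four building blocks mentioned above, but the class names, the boundary check and the example values are assumptions of this sketch, not the notation of [40] or [166].

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Set

class ElementType(Enum):
    EXTERNAL_ENTITY = "external entity"
    DATA_STORE = "data store"
    PROCESS = "process"

@dataclass(frozen=True)
class Element:
    name: str
    kind: ElementType

@dataclass(frozen=True)
class Flow:
    source: Element
    destination: Element
    data: str                      # category of personal data carried by the flow

@dataclass
class DataFlowDiagram:
    elements: List[Element]
    flows: List[Flow]
    trusted: Set[str] = field(default_factory=set)   # names of elements inside the trust boundary

    def boundary_crossings(self) -> List[Flow]:
        # A flow with exactly one endpoint inside the trust boundary crosses it.
        return [f for f in self.flows
                if (f.source.name in self.trusted) != (f.destination.name in self.trusted)]

# Hypothetical example (illustrative names, not the book's BEMS description).
meter = Element("smart meter", ElementType.EXTERNAL_ENTITY)
billing = Element("billing process", ElementType.PROCESS)
store = Element("consumption store", ElementType.DATA_STORE)

dfd = DataFlowDiagram(
    elements=[meter, billing, store],
    flows=[Flow(meter, billing, "consumption data"), Flow(billing, store, "consumption data")],
    trusted={"billing process", "consumption store"},
)
print(dfd.boundary_crossings())    # only the meter-to-billing flow crosses the boundary
```

      The granularity chosen for the elements in such a representation directly determines the granularity of the privacy weaknesses that can be identified.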

      In this section, we introduce the BEMS1 System used to illustrate the concepts discussed throughout this book. The BEMS System includes the billing