Privacy Risk Analysis. Sourya Joyee De.
Series: Synthesis Lectures on Information Security, Privacy, and Trust. Publisher: Ingram. ISBN: 9781681732008.
sources of risks, that is to say the entities whose actions can lead to a privacy breach. These entities are often referred to as “adversaries” or “attackers” in the security literature but we prefer to use the term “risk source” here as it is less security-laden and it is not limited to malicious actors. We define a risk source as follows:

      Definition 2.7 Risk source. A risk source is any entity (individual or organization) that may process (legally or illegally) personal data related to a data subject and whose actions may directly or indirectly, intentionally or unintentionally lead to privacy harms.

      Any of the stakeholders, apart from the data subject himself,5 may be a risk source. Each risk source should be associated with a number of attributes, including its capabilities, background information, motivations, etc. We discuss risk sources and their attributes in Chapter 6.
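      As an illustration of the attributes mentioned above, a risk source could be sketched as a simple record. This is a hypothetical modeling choice, not the book's formal taxonomy; the field names and example values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class RiskSource:
    """Illustrative record for a risk source (Definition 2.7).

    Attribute names (capabilities, background information, motivations)
    follow the text; their types and values are assumptions.
    """
    name: str
    kind: str  # "individual" or "organization"
    capabilities: list[str] = field(default_factory=list)
    background_information: list[str] = field(default_factory=list)
    motivations: list[str] = field(default_factory=list)

# A risk source need not be malicious: an unauthorized employee qualifies.
insider = RiskSource(
    name="unauthorized employee",
    kind="individual",
    capabilities=["internal network access"],
    motivations=["curiosity", "financial gain"],
)
```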

      A feared event is a technical event in the processing system that can lead to a privacy harm. An unauthorized party getting access to the health data of a patient or a controller re-identifying a person from an allegedly anonymized dataset are examples of feared events. The occurrence of a feared event depends on the existence of weaknesses (of the system or the organization), which we call privacy weaknesses, and on the ability of the risk sources to exploit them.

      Definition 2.8 Feared Event. A feared event is an event of the processing system that may lead to a privacy harm.

      Definition 2.9 Privacy weakness. A privacy weakness is a weakness in the data protection mechanisms (whether technical, organizational or legal) of a system or lack thereof.

      As an illustration, a weak encryption algorithm used to protect personal data is a privacy weakness; weak anonymization algorithms are other examples. The term “vulnerability” is often used with a close meaning in the area of computer security, but we choose the expression “privacy weakness” here because in some cases privacy harms can stem from the functionality of the system itself6 (which would probably not be considered a vulnerability in the usual sense of the word). For the same reason, we use the expression “harm scenario” to denote the succession of events leading to a feared event, which is often referred to as an “attack” in the security literature. In the simplest cases (for example, an unauthorized employee getting access to unprotected data), the exploitation of the privacy weakness is the feared event itself and the harm scenario boils down to a single event. A more complex harm scenario would be a succession of access attempts using passwords from a dictionary, leading to the discovery of the correct password and access to the personal data.

      Definition 2.10 Harm scenario. A harm scenario is a succession of events or actions leading to a feared event.
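      A harm scenario can thus be pictured as an ordered chain of events whose last element is the feared event. The sketch below encodes the two examples from the text; the event descriptions are paraphrases, not formal identifiers.

```python
# Complex case: a dictionary attack unfolding over several events,
# culminating in the feared event (access to the personal data).
harm_scenario = [
    "attempt passwords from a dictionary",
    "discover the correct password",
    "access the personal data",  # the feared event
]
feared_event = harm_scenario[-1]

# Simplest case: the exploitation of the privacy weakness is the feared
# event itself, so the scenario reduces to a single event.
single_event_scenario = ["unauthorized employee reads unprotected data"]
```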

      Feared events denote events (in a technical sense) that have to be avoided. The ultimate goal of a privacy risk analysis is the study of the impacts of these events on individuals, groups or society, which we call the “privacy harms.” For instance, the unauthorized access to health data (a feared event) by a risk source may cause privacy harms such as discrimination (against a patient or a group of patients) or psychological distress. Similarly, the illegal access to location data such as home address may lead to economic or physical injury (e.g., burglary or murder7).

      The characterization of privacy harms is not an easy task as it may depend on many contextual factors (cultural, social, personal, etc.). Obviously, societies in different parts of the world follow different sets of unwritten rules and norms of behavior. For example, a data subject belonging to a certain society may feel uneasy if his religious beliefs (or lack thereof) or sexual preferences are revealed. “Acceptance in society” is generally an important factor for individual well-being and should be considered in the risk analysis.

      The definition of privacy harms adopted in this book is inspired by Solove’s vivid description of how feared events may affect individuals and society as a whole [140]. It also bears close similarities with the definition of harms proposed by the Center for Information Policy Leadership (CIPL) [26].

      Definition 2.11 Privacy Harms. A privacy harm is a negative impact of the use of a processing system on a data subject, a group of data subjects, or society as a whole, from the standpoint of physical, mental, or financial well-being, reputation, dignity, freedom, acceptance in society, self-actualization, domestic life, freedom of expression, or any fundamental right.

      The above definition takes into consideration the impact on society because certain harms, like surveillance, are bound to have global impacts such as a chilling effect or a loss of creativity, which concern society as a whole, not just individuals. As discussed in Chapter 1, this definition of privacy harms does not concern the impacts on the data controllers or the data processors themselves, which could be considered in a second stage (as indirect consequences of privacy harms) but are not included in the scope of this book.8

      The word “risk” is used in this book (as often in the risk management literature) as a contraction of “level of risk.” Levels of risk are generally defined by two values [17, 32, 55]: likelihood and severity.9

      The GDPR also refers explicitly to these two dimensions in its Recital 76:

      “The likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. Risk should be evaluated on the basis of an objective assessment, by which it is established whether data processing operations involve a risk or a high risk.”

      In the context of privacy, the likelihood characterizes the probability that a privacy harm may be caused by the processing system, and the severity represents the magnitude of the impact on the victims. The likelihood should combine the probability that a risk source will initiate a harm scenario, the probability that it will be able to carry out the necessary tasks (i.e., perform the scenario, including the exploitation of the privacy weaknesses of the system, to bring about a feared event) and the probability that the feared event will cause a harm [17]. The likelihood and the severity can be defined in a quantitative or qualitative manner (for example, using a fixed scale such as “low,” “medium,” “high”). Risks are often pictured in two-dimensional spaces [33] or matrices [17]. They are also sometimes reduced to a single value through rules computing the product of likelihood and impact [55].
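      The combination of probabilities and the matrix representation described above can be sketched as follows. The probability values, the thresholds of the qualitative scale, and the contents of the matrix are all illustrative assumptions, not values prescribed by the book or by [17].

```python
def likelihood(p_initiate: float, p_carry_out: float, p_harm: float) -> float:
    """Combine the three component probabilities into an overall likelihood:
    that a risk source initiates the harm scenario, that it can carry it
    out, and that the resulting feared event causes a harm."""
    return p_initiate * p_carry_out * p_harm

def qualitative(p: float) -> str:
    """Map a probability onto a fixed qualitative scale (thresholds assumed)."""
    if p < 0.1:
        return "low"
    if p < 0.5:
        return "medium"
    return "high"

# Two-dimensional risk matrix: (severity, likelihood) -> level of risk.
# The cell values are invented for illustration.
RISK_MATRIX = {
    ("low", "low"): "negligible",        ("low", "medium"): "limited",
    ("low", "high"): "limited",          ("medium", "low"): "limited",
    ("medium", "medium"): "significant", ("medium", "high"): "significant",
    ("high", "low"): "significant",      ("high", "medium"): "maximum",
    ("high", "high"): "maximum",
}

p = likelihood(p_initiate=0.8, p_carry_out=0.5, p_harm=0.5)  # 0.8 * 0.5 * 0.5 = 0.2
risk = RISK_MATRIX[("high", qualitative(p))]
```

A single-value reduction in the style of [55] would instead multiply a numeric likelihood by a numeric impact score; the matrix form above keeps the two dimensions visible to the decision maker.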

      The first goals of a privacy risk analysis are the identification of the privacy harms that may result from the use of the processing system and the assessment of their severity and likelihood. Based on this analysis, decision makers and experts can then decide which risks are not acceptable and select appropriate measures10 to address them. The risk analysis can be iterated to ensure that the risks have been reduced to an acceptable level. Considering that risk analyses always rely on certain assumptions (e.g., about the state of the art of the technology or the motivations of the potential risk sources), they should be maintained and repeated on a regular basis. Among the challenges facing the analyst, particular attention must be paid to two main difficulties:

      1. the consideration of all factors that can have an impact on privacy risks and

      2. the appropriate assessment of these impacts and their contribution to the assessment of the overall risks.

      To discuss these issues in a systematic way, we propose in the next chapters a collection of six components (respectively: processing system, personal data, stakeholders, risk sources, feared events and privacy harms), each of them being associated with:

      1. categories of elements to be considered for the component11 and

      2. attributes