A model for compound purposes and reasons as a privacy enhancing technology in a relational database

Personal PETs

These are applications that are installed locally on an individual’s computer, deployed as part of a group policy on a network, or made available as a service on a wider network. Users can configure these applications to act on their behalf and make critical privacy decisions.
• Cookie managers/blockers: These allow users to manage the cookies that are placed on their computers. This includes interrogating the system to determine which websites have placed cookies, and viewing their content. Importantly, they also allow users to decide whether or not a site may place a cookie on their computer; a sketch of such a decision follows this list.
Most web browsers already provide some cookie management features, allowing users to permit a particular site to place a cookie on their computer or deny it from doing so.
• Ad blockers: Software that blocks the delivery of advertisements. A familiar example is the pop-up blocker embedded in many web browsers today.
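As an illustration of the allow/deny decision a cookie manager makes, the following is a minimal sketch assuming a simple per-domain policy with a user-chosen default. The CookiePolicy class and its methods are hypothetical and do not correspond to any real browser API.

```python
# Minimal sketch of a cookie manager's allow/deny decision, assuming a
# simple per-domain policy (hypothetical; real browsers use richer rules).

class CookiePolicy:
    def __init__(self, default_allow: bool = False):
        self.default_allow = default_allow
        self.allowed: set[str] = set()
        self.blocked: set[str] = set()

    def allow(self, domain: str) -> None:
        self.allowed.add(domain)
        self.blocked.discard(domain)

    def block(self, domain: str) -> None:
        self.blocked.add(domain)
        self.allowed.discard(domain)

    def may_set_cookie(self, domain: str) -> bool:
        """Decide whether a site may place a cookie on this machine."""
        if domain in self.blocked:
            return False
        if domain in self.allowed:
            return True
        return self.default_allow


policy = CookiePolicy(default_allow=False)    # deny unknown sites by default
policy.allow("example.org")
print(policy.may_set_cookie("example.org"))   # True
print(policy.may_set_cookie("tracker.test"))  # False (not explicitly allowed)
```

The point of the design is simply that the user, not the website, holds the policy: unknown sites fall through to a default the user has chosen.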

A Higher Level Classification

PETs can, of course, be classified in many ways, and this text recognises that fact. However, to understand PETs from a user’s perspective it is necessary to classify them according to their functionality.
A user typically takes one of two types of action when interacting with another entity:
they either send messages that are consumed immediately, or they send messages that are stored for later consumption (or use). Messages that are consumed immediately are typically commands sent to other computers, whereas messages stored for later consumption are typically information about the users themselves, such as name, age, gender, and so on. A typical example is information used to conduct financial transactions.
E-mails are messages that are typically stored for later consumption, but they necessarily include information about the user, such as their e-mail address. In this text, messages that are consumed immediately are classified as what we do information, since they describe, for example, browsing habits. Systems that protect what we do attempt to protect the individual by providing anonymity.
Messages that are stored for later consumption, in turn, are classified in this text as who we are information, since they typically describe the interacting entities as individuals. Systems that protect who we are act in scenarios where an individual cannot remain anonymous (even pseudonymity would create problems in these situations), and where the individual’s information is required to provide some service or to comply with legislation. The sketch below makes this classification concrete.
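To make the two-way classification concrete, here is a minimal sketch that tags a message by how it is consumed and maps it to the class of protection it needs. The Message and Consumption types are hypothetical illustrations, not constructs defined in this text.

```python
# Minimal sketch of the what-we-do / who-we-are classification, assuming a
# message is tagged by how it is consumed (hypothetical illustration).

from dataclasses import dataclass
from enum import Enum, auto


class Consumption(Enum):
    IMMEDIATE = auto()  # e.g. a command sent to another computer
    STORED = auto()     # e.g. name, age, gender kept for later use


class PrivacyClass(Enum):
    WHAT_WE_DO = auto()  # protected by providing anonymity
    WHO_WE_ARE = auto()  # protected where anonymity is impossible


@dataclass
class Message:
    payload: str
    consumption: Consumption


def classify(message: Message) -> PrivacyClass:
    """Map a message to the privacy class its protection falls under."""
    if message.consumption is Consumption.IMMEDIATE:
        return PrivacyClass.WHAT_WE_DO
    return PrivacyClass.WHO_WE_ARE


print(classify(Message("GET /index.html", Consumption.IMMEDIATE)))
# PrivacyClass.WHAT_WE_DO
print(classify(Message("name=Alice; age=30", Consumption.STORED)))
# PrivacyClass.WHO_WE_ARE
```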

Foundational Concepts

Before the discussion on PETs continues in earnest, it is a good time to step aside and first introduce foundational concepts on aspects of privacy and PETs.
The progression of the discussion depends on the presentation and definition of two fundamental concepts in the privacy research field. Both are briefly defined here, with a more detailed discussion to follow.
The first fundamental concept is the anonymity set. Anonymity sets are used as a foundation on which to reason about privacy quantitatively. Using these sets, one can consider the probability with which a single element can be identified amongst the other elements in the set, that is, determine who sent or received a message; the sketch below illustrates this.
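As a minimal sketch of this kind of quantitative reasoning, assume an attacker assigns each member of the set a probability of being the true sender. The attacker’s best single guess then succeeds with probability max(p), and a common entropy-based measure (an assumption brought in here, not a definition from this text) gives the set’s effective size. Function names are illustrative.

```python
# Minimal sketch: quantifying anonymity within an anonymity set, assuming
# an attacker assigns each member a probability of being the true sender.
# Function names are illustrative, not from the text.

import math


def identification_probability(sender_probs: list[float]) -> float:
    """Probability that the attacker's best single guess is correct."""
    return max(sender_probs)


def effective_set_size(sender_probs: list[float]) -> float:
    """2**H, where H is the Shannon entropy of the distribution: the
    number of 'equally likely' members the set effectively behaves like."""
    entropy = -sum(p * math.log2(p) for p in sender_probs if p > 0)
    return 2 ** entropy


uniform = [0.25, 0.25, 0.25, 0.25]  # perfect 4-member anonymity set
skewed = [0.70, 0.10, 0.10, 0.10]   # one member stands out

print(identification_probability(uniform))   # 0.25
print(effective_set_size(uniform))           # 4.0
print(identification_probability(skewed))    # 0.7
print(round(effective_set_size(skewed), 2))  # ~2.56
```

Under a uniform distribution a set of four members behaves like a set of four; once one member stands out, the effective size shrinks even though the set still contains four elements.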
The second fundamental concept is the type of privacy that one may enjoy. These types all stem from (and are in many instances used interchangeably with) anonymity. By general consensus, four types of privacy are accepted: anonymity, unlinkability, unobservability, and pseudonymity [77]. In each of these four, the data subject is to be protected from some form of monitoring. The concept of the anonymity set is now discussed in more detail.

1 Introduction 
1.1 Problem Statement
1.2 Approach
1.3 Structure of the Text
2 Privacy Enhancing Technologies 
2.1 The Technological Approach
2.2 Classification of Privacy Enhancing Technologies
2.3 Protecting What We Do
2.4 Protecting Who We Are
2.5 Current Technologies Implementing PETs
2.6 Summary
3 Compound Purposes and Reasons 
3.1 Preliminaries
3.2 Distinction between Reasons and Purposes
3.3 Purpose (Reason) Representation Properties
3.4 A Structure for Purposes
3.5 Compound Purposes
3.6 Notation
3.7 Using Compounds
3.8 Summary
4 Verification 
4.1 Verification Structures
4.2 Compound Purpose Operators
4.3 Compound Reason Operators
4.4 Final Verification
4.5 Verification Complexity
4.6 Summary
5 Extensions to SQL 
5.1 Introduction
5.2 Straw-man HDAC Model
5.3 Extending SQL for Specifying Reasons
5.4 Omitting the for Clauses
5.5 Revoking Privileges in the HDAC Model
5.6 Implementation Thoughts
5.7 Insert, Update, and Delete
5.8 Summary
6 Privacy Agreements with Compounds 
6.1 Introduction
6.2 Acceptance Levels
6.3 Privacy Agreement Levels
6.4 Agreement Invalidation
6.5 Summary
7 Conclusion 
7.1 Reflection
7.2 Future Work
