Table of contents
1 Overview
1.1 Background to differential privacy
1.1.1 Differential privacy and its applications
1.1.2 Metric differential privacy
1.2 Key themes in the practice of differential privacy
1.2.1 Privacy breaches as inference attacks
1.2.2 How to choose epsilon
1.2.3 Managing the privacy-utility balance
1.2.4 Reasoning about utility
1.3 Goals of this thesis
1.3.1 Contributions
1.3.2 Synopsis
1.3.3 Publications
I Foundations
2 Quantitative Information Flow
2.1 Modelling Systems as Channels
2.2 Modelling Adversaries
2.2.1 Vulnerability and Gain Functions
2.2.2 Posterior Vulnerability
2.2.3 Leakage
2.3 Comparing Channels: Refinement
2.4 Hyper-Distributions
2.4.1 The Geometry of Hyper-Distributions
2.4.2 Refinement of Hypers
2.5 Chapter Notes
3 Metric Differential Privacy
3.1 Definition and Properties
3.1.1 Technical Preliminaries
3.1.2 Definition of d-Privacy
3.1.3 Properties of Metric Differential Privacy
3.2 Geo-Indistinguishability – A Canonical Example
3.2.1 Utility Goals
3.3 Reasoning with Metrics
3.3.1 Oblivious Mechanisms
3.3.2 Local Differential Privacy
3.3.3 Reasoning about Privacy and Utility
3.4 Constructing d-Private Mechanisms
3.4.1 Other Metrics
3.5 Discussion and Concluding Remarks
3.6 Chapter Notes
II Analysis
4 Comparing Privacy Mechanisms
4.1 Motivating Examples
4.1.1 Technical Preliminaries
4.2 Average-Case Refinement
4.3 Max-Case Refinement
4.4 Privacy-Based Refinement
4.4.1 Comparing mechanisms by their “smallest ε” (for fixed d)
4.4.2 Privacy-based leakage and refinement orders
4.4.3 Application to oblivious mechanisms
4.4.4 Privacy as max-case capacity
4.4.5 Revisiting the Data Processing Inequality for d-Privacy
4.5 Verifying the Refinement Orders
4.5.1 Average-case refinement
4.5.2 Max-case refinement
4.5.3 Privacy-based refinement
4.6 Lattice properties
4.6.1 Average-case refinement
4.6.2 Max-case refinement
4.6.3 Privacy-based refinement
4.6.4 Constructing d-private mechanisms for any metric d
4.7 Comparing d-Privacy Mechanisms
4.7.1 Technical Preliminaries
4.7.2 Refinement order within families of mechanisms
4.7.3 Refinement order between families of mechanisms
4.7.4 Asymptotic behaviour
4.8 Conclusion
4.9 Chapter Notes
5 Inference Attacks
5.1 Motivation
5.2 Informal example: does it matter how to randomise?
5.3 Modelling Inferences
5.3.1 Quantitative Information Flow
5.3.2 Reasoning about Inferences
5.4 The Privacy-Utility Trade Off
5.5 Application to Differential Privacy
5.5.1 Utility-Focussed Privacy
5.5.2 Secrecy-Focussed Privacy
5.6 Experimental Results
5.7 Concluding Remarks
5.8 Chapter Notes
6 Optimality I: Discrete Mechanisms
6.1 Introduction
6.1.1 Motivation and Contribution
6.2 Modelling Utility
6.2.1 Universal optimality
6.2.2 The privacy context
6.2.3 Relationship to “counting” and “sum” queries
6.2.4 Our results on optimality
6.3 The Geometry of d-Privacy
6.3.1 The relationship between channels and hypers
6.3.2 The space of d-private hypers
6.3.3 Kernel and Vertex Mechanisms
6.4 Characterising Optimal Mechanisms
6.4.1 Existence of universally optimal mechanisms
6.4.2 Characterising universal L-optimality
6.5 Reasoning about Optimality
6.5.1 Loss Function Algebra
6.5.2 Classes of Loss Functions
6.5.3 Properties of Loss Function Classes
6.5.4 Identifying Optimal Mechanisms
6.6 Application to Oblivious Mechanisms
6.7 Optimality for d_2-Private Mechanisms on {0...N}
6.7.1 Kernel mechanisms
6.7.2 The geometric mechanism
6.7.3 Other optimal mechanisms
6.8 Optimality for d_D-Private Mechanisms
6.8.1 Kernel mechanisms
6.8.2 The randomised response mechanism
6.8.3 Impossibility of universal optimality
6.8.4 Reframing the impossibility result for monotonic loss functions
6.9 Optimality for Other Metric Spaces
6.10 Conclusion
6.11 Chapter Notes
7 Optimality II: Continuous Mechanisms
7.1 Introduction
7.2 Preliminaries: loss functions; hypers; refinement
7.2.1 The relevance of hyper-distributions
7.2.2 Refinement of hypers and mechanisms
7.3 Discrete and continuous optimality
7.3.1 The optimality result for the geometric mechanism
7.3.2 The geometric mechanism is never εd-private on dense continuous inputs, e.g. when d on X is Euclidean
7.3.3 Our result — continuous optimality
7.3.4 An outline of the proof
7.4 Measures on continuous X and Y
7.4.1 Measures via probability density functions
7.4.2 Continuous mechanisms over continuous priors
7.4.3 The truncated Laplace mechanism
7.5 Approximating Continuity for X
7.5.1 Connecting continuous and discrete
7.5.2 Approximations of continuous priors
7.5.3 N-step mechanisms and loss functions
7.5.4 Approximating continuous εd-private mechanisms
7.6 The Laplace and Geometric mechanisms
7.6.1 The stability of optimality for geometric mechanisms
7.6.2 Bounding the loss for continuous mechanisms
7.6.3 Refinement between N-Geometric and T,N-Laplace mechanisms
7.6.4 The Laplace approximates the Geometric
7.6.5 Approximating monotonic functions
7.7 Universal optimality for the Laplace Mechanism
7.8 Conclusion and Future Work
7.9 Chapter Notes
III Applications
8 Statistical Utility for Local Differential Privacy
8.1 Scenario: Estimating a Distribution
8.2 Comparing Mechanisms for Privacy
8.3 Comparing Mechanisms for Utility
8.3.1 Reconstructing the Original Distribution
8.3.2 Maximum Likelihood Estimation with IBU
8.3.3 Measuring Utility
8.4 Experimental Results
8.5 Conclusion
8.6 Chapter Notes
9 Text Document Privacy
9.1 Introduction
9.2 Documents, Topic Classification and Earth Moving
9.2.1 Word embeddings
9.3 Privacy, Utility and the Earth Mover’s Metric
9.3.1 Utility and Earth Moving
9.3.2 Implementing Earth Mover’s Privacy
9.3.3 Application to Text Documents
9.3.4 Properties of Earth Mover’s Privacy
9.4 Earth Mover’s Privacy for bags of vectors in R^n
9.4.1 Earth Mover’s Privacy in BR^n
9.4.2 Utility Bounds
9.5 Text Document Privacy
9.5.1 Privacy as Indistinguishability
9.5.2 Privacy as Protection from Inferences
9.6 Experimental Results
9.7 Conclusions
9.8 Chapter Notes
10 Locality Sensitive Hashing with Differential Privacy
10.1 Introduction
10.1.1 Friend-Matching: Problem Setup
10.1.2 Our contributions
10.2 Locality Sensitive Hashing (LSH)
10.2.1 Random-Projection-Based Hashing
10.2.2 Privacy with LSH
10.2.3 Is LSH private?
10.3 LSH-based Privacy Mechanisms
10.3.1 Privacy Measures
10.3.2 Construction of LSHRR
10.3.3 Construction of LapLSH
10.4 Analyses of the Privacy Mechanisms
10.4.1 LSHRR: Privacy Guarantees for Hash Values
10.4.2 LSHRR: Privacy Guarantees for Inputs
10.4.3 LapLSH: Privacy Guarantees
10.5 Experimental Evaluation
10.5.1 Experimental Setup
10.5.2 Results
10.5.3 Discussion
10.6 Conclusion
10.7 Chapter Notes
11 Conclusions and Future Work
References
Appendices




