This lecture introduces Differential Privacy (DP) as a mathematically rigorous framework for privacy-preserving data analysis, motivated by the well-documented failures of traditional anonymization, such as its vulnerability to linkage attacks and the mosaic effect. The lecture covers the formal definition of DP, key properties including the privacy budget (ε), and the core mechanisms used to achieve it (Laplace, Exponential, and Gaussian), alongside practical considerations around the privacy-utility tradeoff. Real-world deployments are examined, followed by a forward-looking discussion of the emerging challenges of applying DP to large language model training.
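For reference, the formal definition mentioned above is standardly stated as follows: a randomized mechanism M is ε-differentially private if, for all neighboring datasets D and D′ (differing in a single record) and all measurable sets of outputs S,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].$$

Smaller ε means the two distributions are harder to tell apart, i.e., stronger privacy.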
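As an informal illustration of the Laplace mechanism named above (a sketch, not material from the lecture; the counting query and parameter values are illustrative), adding Laplace noise with scale sensitivity/ε to a numeric query satisfies ε-DP:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-DP by adding Laplace noise.

    Noise scale b = sensitivity / epsilon: a larger epsilon (weaker
    privacy guarantee) means less noise and better utility, which is
    the privacy-utility tradeoff in miniature.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query over a toy dataset. Adding or removing one
# person changes the count by at most 1, so the L1 sensitivity is 1.
ages = [34, 29, 41, 56, 38]
true_count = sum(1 for a in ages if a > 35)  # exact answer: 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```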
Audience
- Faculty/Staff
- Students
- Graduate Students
Contact
Master of Science in Machine Learning and Data Science Program