
Privacy Attacks and Protection in Machine Learning as a Service

Funding: Fonds National de la Recherche > Aide à la Formation Recherche PhD
Start Date: Dec. 1, 2019
End Date: Nov. 30, 2023


Machine learning (ML) techniques have gained widespread adoption in a large number of real-world applications. Following this trend, leading Internet companies now offer machine learning as a service (MLaaS) to broaden and simplify ML model deployment. Although MLaaS only provides black-box access to its customers, recent research has identified several attacks that reveal confidential information about both the model itself and its training data. Along this line, this project’s goal is to further investigate new attacks targeting ML models and training data, and to develop a systematic, practical, and general defense mechanism that enhances the security of ML models. The project team, comprising SaToSS and CISPA, will also make the source code publicly available and use it in their own courses. This project will provide a deeper understanding of machine learning privacy, thereby increasing the safety of ML-based systems such as authentication and malware detection, and helping protect the nation and its citizens from cyber harm.

This project, PriML, combines multiple novel ideas synergistically, organized into three inter-related research thrusts. The first thrust explores potential attacks from the perspective of ML models via black-box explainable machine learning techniques. The second thrust investigates new attacks from the perspective of training datasets through the DeepSets technique, which can mitigate the complexity of deep neural networks and facilitate these attacks. Both thrusts consider different types of neural networks and identify the inherently distinct properties of each type of attack. The third thrust involves understanding and identifying a set of invariant properties underlying these attacks, and developing defense mechanisms that exploit these properties to better protect ML privacy.
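One well-known example of a black-box attack on training data is membership inference: an adversary queries the deployed model and uses its confidence scores to decide whether a given sample was part of the training set, exploiting the fact that overfit models tend to be more confident on training members. The following is a minimal, self-contained sketch of the idea; the toy model, threshold, and sample names are illustrative assumptions, not part of the project's codebase.

```python
# Sketch of a confidence-based membership inference attack against a
# black-box model. The "model" here is a stand-in that is deliberately
# more confident on training members, mimicking an overfit classifier.
import random

random.seed(0)

def toy_model(x, training_set):
    """Black-box classifier stand-in: returns a confidence score in [0, 1].
    Members of the training set receive a confidence boost (overfitting)."""
    base = 0.55 + 0.1 * random.random()          # confidence on unseen data
    boost = 0.3 if x in training_set else 0.0    # extra confidence on members
    return min(base + boost, 1.0)

def membership_attack(x, query, threshold=0.8):
    """Predict membership from the model's confidence alone:
    above the threshold -> guess 'member'."""
    return query(x) >= threshold

training_set = {f"sample_{i}" for i in range(50)}
outside = [f"other_{i}" for i in range(50)]

query = lambda x: toy_model(x, training_set)
tp = sum(membership_attack(x, query) for x in training_set)  # true positives
fp = sum(membership_attack(x, query) for x in outside)       # false positives
print(f"true positives: {tp}/50, false positives: {fp}/50")
```

With this exaggerated confidence gap the attack separates members from non-members perfectly; against real MLaaS models the gap is smaller, which is why practical attacks often train shadow models to calibrate the decision rule.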