Motivation
Data privacy has recently dominated the discussion around the introduction of new AI-driven technologies. AI systems process vast amounts of information, making the proper handling of personal data critical. In this context, industry leaders in the AI domain bear significant responsibility for safeguarding user privacy. Beyond this social responsibility, privacy protection is also a legal necessity, given the threat of heavy fines under European regulations such as the GDPR and the upcoming AI Act. An important question, therefore, is how to approach privacy in AI in a responsible and compliant manner. Motivated by this question, we aim to create a structured overview of privacy risks in General-Purpose AI (GPAI) and to explore how different stakeholders perceive these risks. As Europe stands at the forefront of the privacy discussion, leading the way for other countries, we focus our investigation on the European perspective.
Research Objectives
In light of this motivation, our main research objective is to provide a structured overview of the privacy risks posed by General-Purpose AI systems, supplemented by an understanding of how different stakeholders perceive these risks. To accomplish this goal, we define two research questions:
This project is supported by and conducted in cooperation with Google.