Privacy Risks of General-Purpose AI Systems - Stakeholder Perspectives in Europe (PRAISE)

Motivation

The issue of data privacy has recently dominated the discussion around the introduction of new AI-driven technologies. AI systems process vast amounts of information, which makes proper handling of personal data critical. In this context, industry leaders in the AI domain bear significant responsibility for safeguarding user privacy. Beyond this social responsibility, privacy protection is made necessary by the threat of heavy fines under regulations such as the European GDPR and the upcoming AI Act. An important question is therefore how to approach privacy in AI in a responsible and compliant manner. Motivated by this question, we aim to create a structured overview of privacy risks in General-Purpose AI (GPAI) and to explore how different stakeholders perceive these risks. As Europe stands at the forefront of the privacy discussion, leading the way for other countries, we focus our investigation on the European perspective.


Research Objectives

In light of this motivation, our main research objective is to provide a structured overview of the privacy risks posed by General-Purpose AI systems, supplemented by an understanding of how perceptions of these risks differ. To accomplish this goal, we define two research questions:

  1. How can privacy risks of General-Purpose AI systems be systematized and cataloged?
  2. What is the perception of the identified risks in the European context, from the perspective of selected stakeholder groups?


This project is supported by and conducted in cooperation with Google.