
GDPR and AI: making sense of a complex relationship

Written by Mihalis Kritikos


The development and deployment of artificial intelligence (AI) tools should take place in a socio-technical framework that preserves both individual interests and the social good, and that enhances opportunities for social knowledge and better governance, without sliding into the extremes of ‘surveillance capitalism’ or the ‘surveillance state’. This was one of the main conclusions of the study ‘The impact of the General Data Protection Regulation on Artificial Intelligence’, which was carried out by Professor Giovanni Sartor and Dr Francesca Lagioia of the European University Institute of Florence at the request of the STOA Panel, following a proposal from Eva Kaili (S&D, Greece), STOA Chair.

Data protection is at the forefront of the relationship between AI and the law, as many AI applications involve the massive processing of personal data, including the targeting and personalised treatment of individuals on the basis of such data. This explains why data protection has been the area of the law that has most engaged with AI. Although AI is not explicitly mentioned in the General Data Protection Regulation (GDPR), many provisions of the GDPR are not only relevant to AI, but are also challenged by the new ways of processing personal data that AI enables. This new STOA study addresses the relation between the GDPR and AI and analyses how EU data protection rules will apply in this technological domain and thus impact both its development and deployment.

After introducing some basic concepts of AI, the study reviews the state of the art in AI technologies, with a focus on the application of AI to personal data. It then provides an in-depth analysis of how AI is regulated in the context of the GDPR and examines the extent to which AI is captured by the GDPR's conceptual framework. It discusses the tensions and proximities between AI and data protection principles, such as purpose limitation and data minimisation, examines the main legal bases for AI applications to personal data, and reviews data subjects' rights, such as the rights of access, erasure and portability, and the right to object. Researchers and policy-makers will find great theoretical and practical value in the study's meticulous analysis of the provisions of the GDPR, which determines both the extent to which their application is challenged by AI and the extent to which they may influence the development of AI applications.

The study carries out a thorough analysis of automated decision-making, considering the extent to which it is admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then considers the extent to which the GDPR provides for a preventive risk-based approach, focused on data protection by design and by default. In adopting an interdisciplinary perspective, the study identifies all major tensions between the traditional data protection principles — purpose limitation, data minimisation, special treatment of ‘sensitive data’, limitations on automated decisions — and the full deployment of the power of AI and big data. It analyses in detail how the GDPR's vague and open-ended prescriptions bear on the development of AI and big data applications. The analysis sheds light on the limited guidance offered by the GDPR on how to balance competing interests, which aggravates the uncertainties associated with the novel and complex character of new and emerging AI applications. As a result of this limited guidance, controllers are expected to manage risks amidst significant uncertainties about the requirements for compliance and under the threat of heavy sanctions.

It should be noted that one of the main study findings is that, despite several legal uncertainties, the GDPR generally provides meaningful indications for data protection in the context of AI applications, that it can be interpreted and applied in such a way that it does not substantially hinder the application of AI to personal data, and that it does not place EU companies at a disadvantage by comparison with non-European competitors.

The study then proposes a wide range of concrete and applicable policy options on how to reconcile AI-based innovation with individual rights and social values and ensure compliance with data protection rules and principles. Some of the proposed options relate to the need for a responsible and risk-oriented approach, enabled by detailed guidance on how AI can be applied to personal data in a way that is consistent with the main principles and general provisions of the GDPR. This guidance can be provided by national data protection authorities, and the European Data Protection Board in particular, and should also involve civil society, representative bodies and specialised agencies.

The study emphasises the need to distinguish between the use of personal data in a training set, for the purpose of learning general correlations, and its use for individual profiling, as well as the need to introduce an obligation of reasonableness for controllers engaged in profiling. The authors' proposal to facilitate the exercise of the right to opt out of profiling and data transfers, along with a right of collective enforcement in the data protection domain, is of practical importance.
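The distinction the study draws can be made concrete in code. The following minimal Python sketch (a hypothetical illustration, not taken from the study) separates the two processing stages: a training stage that uses personal records only to derive an aggregate statistic, and a profiling stage that applies the learned result back to an identified individual — the stage to which the stricter GDPR safeguards on profiling attach. All names and figures are invented for illustration.

```python
from statistics import mean

# Stage 1 — training: personal records are used only to learn a general,
# population-level correlation. The output is an aggregate number that
# retains no individual identities.
training_records = [
    {"name": "A", "age": 34, "monthly_spend": 420.0},
    {"name": "B", "age": 51, "monthly_spend": 610.0},
    {"name": "C", "age": 29, "monthly_spend": 380.0},
]

def learn_threshold(records):
    """Derive an aggregate statistic from the training set."""
    return mean(r["monthly_spend"] for r in records)

# Stage 2 — profiling: the learned result is applied to one specific,
# identified data subject to produce an individual-level inference.
def profile_individual(person, threshold):
    """Classify a single data subject against the learned correlation."""
    return "high-spender" if person["monthly_spend"] > threshold else "typical"

threshold = learn_threshold(training_records)
print(profile_individual({"name": "D", "monthly_spend": 700.0}, threshold))
```

The point of the sketch is that the same dataset feeds both stages, yet only the second stage generates an inference about a particular person — which is why the study argues the two uses deserve distinct legal treatment.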

The study’s added value lies not only in the detailed legal analysis and realistic policy options it puts forward but also in its engagement with the general discussion about the values of the GDPR and the need to embed trust in AI applications via societal debates and dialogue with all stakeholders, including controllers, processors and civil society. This societal engagement would be necessary to develop appropriate responses, based on shared values and effective technologies. The arguments and findings of the study offer both theoretical insight and practical suggestions for action that policy-makers will find stimulating and worth pursuing.

Read the full report and accompanying STOA Options Brief to find out more. You can also watch the video of the presentation of interim findings to the STOA Panel.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 27 MEPs nominated by 11 EP Committees. The STOA Panel forms an integral part of the structure of the EP.



Disclaimer and Copyright statement

The content of all documents (and articles) contained in this blog is the sole responsibility of the author and any opinions expressed therein do not necessarily represent the official position of the European Parliament. It is addressed to the Members and staff of the EP for their parliamentary work. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.


Copyright © European Union, 2014-2019. All rights reserved.
