Written by Mihalis Kritikos.
The development and deployment of artificial intelligence (AI) tools should take place in a socio-technical framework in which individual interests and the social good are preserved, and opportunities for social knowledge and better governance are enhanced, without sliding into the extremes of ‘surveillance capitalism’ or the ‘surveillance state’. This was one of the main conclusions of the study ‘The impact of the General Data Protection Regulation on Artificial Intelligence’, which was carried out by Professor Giovanni Sartor and Dr Francesca Lagioia of the European University Institute of Florence at the request of the STOA Panel, following a proposal from Eva Kaili (S&D, Greece), STOA Chair.
Data protection is at the forefront of the relationship between AI and the law, as many AI applications involve the massive processing of personal data, including the targeting and personalised treatment of individuals on the basis of such data. This explains why data protection is the area of the law that has most engaged with AI. Although AI is not explicitly mentioned in the General Data Protection Regulation (GDPR), many of the GDPR's provisions are not only relevant to AI, but are also challenged by the new ways of processing personal data that AI enables. This new STOA study addresses the relationship between the GDPR and AI and analyses how EU data protection rules will apply in this technological domain, and thus how they will shape both its development and deployment.
After introducing some basic concepts of AI, the study reviews the state of the art in AI technologies, focusing on the application of AI to personal data. It then provides an in-depth analysis of how AI is regulated in the context of the GDPR and examines the extent to which AI is captured by the GDPR's conceptual framework. It discusses the tensions and proximities between AI and data protection principles, such as purpose limitation and data minimisation, examines the main legal bases for applying AI to personal data, and reviews data subjects’ rights, such as the rights of access, erasure and portability, and the right to object. Researchers and policy-makers will find great theoretical and practical value in the study's meticulous analysis of the GDPR's provisions, which determines the extent to which their application is challenged by AI, as well as the extent to which they may influence the development of AI applications.
The study carries out a thorough analysis of automated decision-making, considering the extent to which it is admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then considers the extent to which the GDPR provides for a preventive, risk-based approach, focused on data protection by design and by default. Adopting an interdisciplinary perspective, the study identifies the major tensions between traditional data protection principles (purpose limitation, data minimisation, the special treatment of ‘sensitive data’, limitations on automated decisions) and the full deployment of the power of AI and big data. It analyses in detail the GDPR's vague and open-ended prescriptions as they bear on the development of AI and big data applications, and sheds light on the limited guidance the Regulation offers on how to balance competing interests, which aggravates the uncertainties surrounding novel and complex AI applications. As a result, controllers must manage risks amid significant uncertainty about the requirements for compliance, and under the threat of heavy sanctions.
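The tension between data minimisation and data-hungry AI systems can be made concrete with a small sketch. All field names, purposes and records below are invented for illustration and do not come from the study; the point is simply that a controller retains, for each declared processing purpose, only the fields that purpose requires:

```python
# Hypothetical sketch of data minimisation with purpose limitation.
# Field names and purposes are illustrative, not taken from the study or the GDPR.

# Fields actually needed for each declared processing purpose.
PURPOSE_TO_FIELDS = {
    "churn_prediction": ["age", "purchase_total"],
    "fraud_detection": ["postcode", "purchase_total"],
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields required for the declared purpose,
    discarding direct identifiers and everything else."""
    allowed = PURPOSE_TO_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Ada", "email": "ada@example.com", "age": 41,
          "postcode": "1050", "browsing_history": ["..."], "purchase_total": 99.0}

print(minimise(record, "churn_prediction"))  # {'age': 41, 'purchase_total': 99.0}
```

The direct identifiers (`name`, `email`) and the unneeded behavioural data never reach the training pipeline, which is one way the tension the study describes can be managed in practice.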
One of the study's main findings is that, despite several legal uncertainties, the GDPR generally provides meaningful indications for data protection in the context of AI applications, that it can be interpreted and applied in a way that does not substantially hinder the application of AI to personal data, and that it does not place EU companies at a disadvantage compared with their non-European competitors.
The study then proposes a wide range of concrete and applicable policy options on how to reconcile AI-based innovation with individual rights and social values, and how to ensure the effective application of data protection rules and principles. Some of the proposed options relate to the need for a responsible, risk-oriented approach, enabled by detailed guidance on how AI can be applied to personal data in a way that is consistent with the main principles and general provisions of the GDPR. This guidance can be provided by national data protection authorities and, in particular, by the European Data Protection Board, and its development should also involve civil society, representative bodies and specialised agencies.
The study emphasises the need to distinguish between the use of personal data in a training set, for the purpose of learning general correlations, and its use for individual profiling, as well as the need to introduce an obligation of reasonableness for controllers engaged in profiling. Of practical importance is the authors' proposal to facilitate the exercise of the right to opt out of profiling and data transfers, along with a right of collective enforcement in the data protection domain.
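The distinction the study draws can be illustrated with a minimal, hypothetical sketch (all names, fields and thresholds below are invented for illustration): first the aggregate use of a training set to learn a general correlation, then individual profiling that applies the learned pattern to one person and must honour that person's opt-out:

```python
# Hypothetical sketch: aggregate learning vs. individual profiling.
# Data, fields and the decision rule are illustrative, not from the study.

from statistics import mean
from typing import Optional

training_set = [
    {"age": 25, "clicked": 1},
    {"age": 35, "clicked": 0},
    {"age": 45, "clicked": 0},
]

# Step 1: learn a general correlation from the training set (aggregate use).
avg_age_clicked = mean(r["age"] for r in training_set if r["clicked"])

# Step 2: individual profiling, i.e. applying the learned pattern to one person.
opted_out = {"user42"}  # users who exercised their right to opt out of profiling

def profile(user_id: str, age: int) -> Optional[str]:
    if user_id in opted_out:   # the opt-out must be honoured before profiling
        return None
    return "likely_clicker" if abs(age - avg_age_clicked) <= 5 else "unlikely_clicker"

print(profile("user42", 27))  # None: profiling skipped for an opted-out user
print(profile("user7", 27))   # likely_clicker
```

Step 1 touches every record but produces only an aggregate statistic; step 2 attaches an inference to a named individual, which is where the study locates the stronger safeguards, including the opt-out.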
The study’s added value lies not only in the detailed legal analysis and realistic policy options it puts forward but also in its engagement with the general discussion about the values of the GDPR and the need to embed trust in AI applications via societal debates and dialogue with all stakeholders, including controllers, processors and civil society. This societal engagement would be necessary to develop appropriate responses, based on shared values and effective technologies. The arguments and findings of the study offer both theoretical insight and practical suggestions for action that policy-makers will find stimulating and worth pursuing.