Scientific Foresight (STOA), July 13, 2021

STOA meets its International Advisory Board to discuss the Artificial Intelligence Act

Written by Philip Boucher and Carl Pierer.

On 21 April 2021, the European Commission published the much-anticipated Artificial Intelligence Act (AIA), an ambitious cross-sectoral attempt to regulate artificial intelligence (AI) applications. Its aim is to ensure that all European citizens can trust AI by providing proportionate and flexible rules – harmonised across the single market – to address the specific risks posed by AI systems and set the highest standards worldwide.

The proposal sets out a risk-based approach to regulating AI applications: those presenting an ‘unacceptable risk’ would be banned; those presenting a ‘high risk’ would be subject to additional requirements before entering the market; and others, such as chatbots and ‘deep fakes’, would be subject to new transparency requirements. Applications presenting ‘low or minimal risk’ – the vast majority of AI applications – could enter the market without restrictions, although voluntary codes of conduct may be developed. Other proposed measures include a European AI Board to monitor implementation and regulatory sandboxes to facilitate innovation.

In the context of the legislative proposal, STOA convened a meeting with its International Advisory Board (INAB) on 24 June 2021, inviting Board members to present their views on the AIA and discuss them with the STOA Members. Participants welcomed the proposal as a timely and necessary step in the right direction. The Board members broadly agreed with the aims and approach of the AIA, but raised several points that they felt might require further reflection and discussion during the negotiation period.

Continuity from the AI White Paper’s focus on ‘AI Excellence’

Several key elements of the AIA were first set out in the AI White Paper. Some participants noted that the White Paper’s ambitious support for ‘AI excellence’ figures less prominently in the legislative proposal. Others highlighted what they saw as a missed opportunity to align AI development with the realisation of the Sustainable Development Goals. Other participants responded that the AIA focuses on product safety, and that support for innovation would be provided through other mechanisms. While many participants were pleased that the AIA proposes sandboxes for controlled experimentation with novel AI applications, they suggested that the explanation of how the sandboxes would function in practice was insufficient. The Commission has indicated that further details will be set out in implementing acts.

List-based approach

The AIA takes a list-based approach to two of its key elements. First, the definition of AI technologies relies on a list of techniques set out in Annex I. Second, the definition of high-risk applications covers safety components of products that are already subject to conformity assessment under sectoral rules, as well as a list of additional applications categorised as high-risk, provided in Annex III. The Commission can update both annexes. Several participants argued that such an approach could see the AIA quickly become outdated, forcing the institutions into a constant ‘catch-up’ stance as new technologies, applications and risks emerge. It was suggested that, instead of lists, it may be better to rely upon criteria or principles to determine whether a specific piece of software would be defined as AI, and to which risk category its applications may belong.

Some participants highlighted that the risk-based approach might not be appropriate for managing applications that are low-risk but high-frequency, such as targeted content delivery. It was suggested that low-risk applications could accumulate and interact to generate high-impact outcomes, some of which may be as serious as those posed by high-risk applications. Furthermore, the high-risk applications listed in the AIA are concentrated in ‘critical’ domains of activity, such as essential services, infrastructure, employment and law enforcement. However, it was highlighted that AI systems that reinforce gender and racial discrimination remain an extremely serious risk, even in non-critical domains of activity. In this sense, the AIA may underestimate the pervasive character of the serious risks posed by AI across all domains of activity. It was also suggested that the focus on specific applications could represent a missed opportunity to address general-purpose AI, which could become increasingly important in the years to come.

Governance and enforcement

A substantial part of the discussion focused on enforcement and oversight, with several participants agreeing that reflection on the rules should go hand in hand with reflection on their enforcement. Under the proposal, national authorities would be responsible for enforcing the AIA, and an AI Board would be set up to monitor implementation and provide guidance. This Board would be composed of Member State representatives and the European Data Protection Supervisor. Some participants were concerned that this approach could lead to uneven enforcement across the EU. In addition, some participants proposed that a group of experts could assist and advise the Board in its decisions on technically complex matters. Indeed, while no such group features in the AIA itself, the Commission has indicated its intention to establish one during the implementation process. Moreover, several participants highlighted the need for innovative approaches to regulation and implementation to keep up with the rapidly developing technology, welcoming the sandbox approach and calling for the elaboration of a wider range of innovative regulatory tools.

Conformity assessment

Several participants raised issues relating to conformity assessment, in particular the question of who would be responsible for conducting it. For example, since the developers and deployers of AI systems are not always the same actors, it was suggested that the AIA should make a clearer distinction between these roles and their respective responsibilities when it comes to compliance and conformity assessment. Participants were concerned that failure to do so could have a chilling effect upon open-source innovation.

Many participants commented on the need for access to data and algorithms in order to assess their conformity with the new rules. While AI providers may have the most profound understanding of their own data and algorithms, their role in assessing their own conformity may give rise to conflicts of interest and enforcement issues. On the other hand, it was suggested that some firms might be unwilling to provide authorities with access to their data and algorithms without guarantees about their use and safeguarding. Some participants stated that the key to conformity assessment would be to foster a healthy market for high-quality, independent third-party auditors, capable of assessing compliance while protecting firms’ data and algorithms. There was broad agreement on the importance of reflecting upon the role of data spaces and wider data governance issues, which are the subject of other complementary initiatives.

Consumer protection

Some participants regretted the limited reference to consumer protection, which they felt could weaken the AIA’s effectiveness. They mentioned in particular the lack of mechanisms to address economic harms, to notify consumers of breaches, and to enable complaints and legal remedies. Furthermore, some participants felt that, by prohibiting only applications that intentionally manipulate behaviour in a way that gives rise to harm – rather than restricting any application that has this effect, regardless of intent – the regulation might fail to protect consumers effectively. The AIA also includes specific protections for people with age-related, physical or mental vulnerabilities. While some participants suggested that the AIA should go further to protect vulnerable citizens, in particular children, others reasoned that the complexity of AI systems makes all consumers vulnerable, in the sense that they struggle to make informed choices, and that the protections set out for vulnerable people in the draft AIA should therefore apply to all citizens.

Next steps

The AIA is now subject to negotiation between the European Parliament and the Council. Meanwhile, other relevant legislative files are also under discussion, including the Data Governance Act, Digital Services Act, and a forthcoming Data Act.

There will be further opportunities for STOA and INAB members to discuss these proposals in the coming months. In the meantime, stay informed about STOA’s Centre for AI and the International Advisory Board on the STOA website, or by following us on Twitter at @EP_ScienceTech.

Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu.

