Written by Luisa Antunes with Laia Delgado Callico.
Artificial intelligence (AI) was at the core of the response to the Covid‑19 pandemic, which in turn contributed to accelerating the development of AI. These technological advances carry benefits, risks, ethical issues and societal implications, which must be addressed when discussing policy options in AI governance.
On 12 February 2022, the European Parliament’s Panel for the Future of Science and Technology (STOA) organised an event entitled ‘Ethical issues in the Covid‑19 pandemic: The case of digital health applications’. This online workshop included a presentation of an STOA study on AI in healthcare, followed by a discussion panel on the related ethical, regulatory and policy challenges.
European Parliament Vice-President and STOA Chair Eva Kaili (S&D, Greece) opened the event, highlighting STOA’s commitment to technology as a fundamental means for ensuring EU citizens’ wellbeing.
Presentation of the STOA study ‘Artificial intelligence in healthcare: Applications, risks, ethical and societal impacts’
Karim Lekadir, Director of the AI in Medicine Lab at the University of Barcelona (Spain) and principal author of the study, presented the main healthcare benefits of AI, including improved diagnosis and triage prediction, enhanced risk prediction and disease prevention, personalised treatment, optimisation of surgical intervention outcomes and self-care management.
Professor Lekadir provided guiding principles and policy options for the trustworthy application of AI in healthcare. The use of AI requires a holistic, multi-criteria approach based on the principles of accuracy, robustness, fairness, usability, explainability, durability and traceability. Risks and biases can only be identified and minimised systematically through a combined stakeholder approach involving more research and education, one that keeps clinicians at the centre of decision-making, uses AI as a support rather than a replacement tool, and ensures a feedback loop so that AI can learn from its mistakes.
Panel discussion on digital health tools: Governance and practice
STOA Panel member Anna Michelle Asimakopoulou (EPP, Greece) introduced the panellists and moderated the discussion.
Effy Vayena, Professor of Bioethics at the Swiss Federal Institute of Technology Zürich and Chair of the Hellenic Commission for Bioethics and Technoethics, addressed the use of digital health applications in Covid‑19 contact tracing and their low adoption rate among citizens. In her view, the lack of transparency and public accountability of the companies behind these apps is an issue, making governance essential to ensure their acceptance and success with the public.
Alessandro Blasimme, Senior Scientist, Health Ethics and Policy Lab, Swiss Federal Institute of Technology Zürich, discussed the development of AI during the Covid‑19 pandemic and associated ethical issues, including prioritisation of healthcare access. Although there are already EU regulatory instruments and operational standards in place, they still do not operate coherently. He argued that regulation alone is insufficient to safeguard and promote responsible innovation in healthcare, which would benefit from an ‘ethics-by-design’ and principle-based approach.
Timo Minssen, Professor of Law and Founding Director of the Centre for Advanced Studies in Biomedical Innovation Law at the University of Copenhagen, stressed the importance of incorporating lessons learnt from the current pandemic into preparedness guidelines for preventing and managing future pandemics. This will require healthcare applications that are efficient, globally accessible and deployable. Nevertheless, the geopolitical context and competitive interests complicate the regulation of these applications.
Elettra Ronchi, Senior Policy Consultant in Digital Health and Data Governance with the World Health Organization (Europe office), highlighted the key role that timely and reliable data collection played in the Covid‑19 response. An effective public health response depends on data governance that addresses privacy and data protection measures, as well as health information system gaps. To inspire public trust, the collection and sharing of personal data should be evidence-based, proportionate to the risks, limited in time and implemented with full transparency.
During the Q&A session, Professor Lekadir stressed that validation and evaluation of health applications should run in parallel with their development to ensure the accuracy of AI decisions. Patient engagement is essential in this process in order to gain public trust. Dr Ronchi remarked that legislation is still not fully adapted to the digital era and raised the importance of 'regulatory sandboxes' in keeping up with AI development. She added that public trust is linked to disinformation, and that the current 'infodemic' has undermined the global response to the pandemic. Professor Minssen stressed the importance of multidisciplinary discussions between regulatory, social and technical experts, while also highlighting the importance of technology sustainability. Dr Blasimme reflected on the importance of open and pluralistic discussion of ethical risks to accompany the regulatory framework. Professor Vayena called for transparency in the use of technology, and for a procedure to translate consensus into processes and mechanisms for acting upon these values and ethical principles.
Anna Michelle Asimakopoulou concluded the event, reiterating that public trust in these technologies is key. The full recording of the event is available here.
Your opinion counts for us. To let us know what you think, get in touch via email@example.com.