New STOA study on artificial intelligence: How does it work, why does it matter and what can we do about it?

Written by Philip Boucher


Artificial intelligence (AI) is probably the defining technology of the last decade, and perhaps also the next.

The European Commission recently closed the consultation period on its white paper on AI and the European Parliament has voted in favour of launching a special committee on AI in the digital age. In this context, STOA has published a timely study on AI, which provides accessible information about the full range of current and speculative AI techniques and their associated impacts, and sets out several regulatory, technological and societal measures that could be mobilised in response.

How does artificial intelligence work?

The study sets out accessible introductions to some of the key techniques that come under the AI banner, organised into three waves.

The first wave of early AI techniques is known as ‘symbolic AI’ or expert systems. Here, human experts create precise rule-based procedures – known as ‘algorithms’ – that a computer can follow, step-by-step, to decide how to respond intelligently to a given situation. Symbolic AI is at its best in constrained environments that do not change much over time, where the rules are strict and the variables are unambiguous and quantifiable. While these methods can appear dated, they remain very relevant and are still successfully applied in several domains.
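As a loose illustration (not taken from the study), a first-wave symbolic system is simply a set of expert-authored rules applied step by step. The triage scenario and thresholds below are hypothetical:

```python
# A minimal, hypothetical expert system: hand-written rules map
# unambiguous, quantifiable inputs to a decision, step by step.
def triage(temperature_c: float, heart_rate: int) -> str:
    """Classify a patient reading using fixed, expert-authored rules."""
    if temperature_c >= 39.0 or heart_rate >= 130:
        return "urgent"
    if temperature_c >= 38.0 or heart_rate >= 100:
        return "priority"
    return "routine"

print(triage(39.5, 80))   # urgent
print(triage(38.2, 90))   # priority
print(triage(36.8, 70))   # routine
```

The intelligence here lives entirely in the rules the human expert wrote, which is why such systems work best in stable, well-specified environments and struggle when the rules or variables change.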

The second wave of AI comprises more recent ‘data-driven’ approaches, which have developed rapidly over the last two decades and are largely responsible for the current AI resurgence. Instead of human experts encoding the rules by hand, these approaches automate the learning process: the algorithms derive their own rules and patterns from data, bypassing the human experts of first-wave AI.
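To make the contrast with the first wave concrete, here is a toy, hypothetical learner that infers a decision threshold from labelled examples rather than having an expert write it down. Real second-wave systems use far richer models (for example, neural networks), but the principle is the same:

```python
# A toy second-wave contrast: instead of an expert hand-coding a rule,
# a simple learner recovers a decision threshold from labelled data.
def learn_threshold(samples):
    """samples: list of (value, label) pairs, where label is True for
    values above some unknown cutoff. Returns the candidate threshold
    that misclassifies the fewest training samples."""
    candidates = sorted(v for v, _ in samples)
    best_t, best_errors = candidates[0], len(samples)
    for t in candidates:
        errors = sum((v >= t) != label for v, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Labelled examples stand in for the human expert's rule.
data = [(36.5, False), (37.0, False), (38.5, True), (39.2, True)]
print(learn_threshold(data))  # 38.5 — recovered from data, not hand-coded
```

The rule is now a by-product of the data, which is also why data-driven systems inherit whatever patterns, good or bad, that data contains.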

The third wave of AI refers to speculative possible future waves of AI. While first and second wave techniques are described as ‘weak’ or ‘narrow’ AI, in the sense that they can behave intelligently in specific tasks, ‘strong’ or ‘general’ AI refers to algorithms that can exhibit intelligence in a wide range of contexts and problem spaces. Such artificial general intelligence (AGI) is not possible with current technology and would require paradigm-shifting advances.

Why does artificial intelligence matter?

The study builds upon the understanding of how AI works to examine several opportunities and challenges presented by AI applications in various contexts.

Several challenges are associated with today’s AI. Broadly, they can be understood as a balancing act between avoiding underuse – whereby we miss out on potential opportunities, and avoiding overuse – whereby AI is applied for tasks for which it is not well suited or results in problematic outcomes. Specific challenges include bias, employment impacts, liability issues, military use and effects on human autonomy and decision-making.

There are also several longer-term opportunities and challenges that are contingent upon future developments that might never happen. For example, it has been suggested that AI could escape human control and take control of its own development, or develop artificial emotions or consciousness, presenting interesting – yet speculative – philosophical questions.

What can we do about artificial intelligence?

The study sets out several options that could be mobilised in response to the opportunities and challenges presented by AI.

Most AI policy debates concern how to shape the regulatory and economic context in which AI is developed and applied in order to respond to specific opportunities and challenges. These could include creating a supportive economic and policy context, promoting more competitive ecosystems, improving the distribution of benefits and risks, building resilience against a range of problematic outcomes, enhancing transparency and accountability, ensuring mechanisms for liability and developing governance capacity. There are also more abstract policy debates about the broad regulatory approach, such as whether policies and institutions should be specific to AI or tech-neutral.

It is also possible to shape the development and application of AI through technological measures. They could include activities related to technology values, the accessibility and quality of data and algorithms, how applications are chosen and implemented, the use and further development of ‘tech fixes’, and encouraging more constructive reflection and critique.

Finally, societal and ethics measures could be taken, targeting the relationship between AI and social values, structures and processes. These could include measures related to skills, education and employment; the application of ethics frameworks; workplace diversity, social inclusivity and equality; reflection and dialogue; the language used to discuss AI; and the selection of applications and development paths.

Five key messages

Language matters. In many ways, the term ‘AI’ has become an obstacle to meaningful reflection and productive debate about the diverse range of technologies to which it refers. It could help to address the way we talk about AI, including how we identify, understand and discuss specific technologies, as well as how we articulate visions of what we really want from it.

Algorithms are subjective. Since human societies have structural biases and inequalities, machine learning tools inevitably learn these too. The only definitive solution is to remove bias and inequality from society itself, and AI can offer only limited support for that mission. It is nonetheless important to ensure that AI counteracts, rather than reinforces, existing inequalities.

AI is not an end in itself. The ultimate aim of supporting AI is not to maximise AI development per se, but to unlock some of the benefits that it promises to deliver. Instead of perfecting new technologies then searching for problems to which they could be a profitable solution, we could start by examining the problems we have and explore how AI could help us to find appropriate solutions.

AI might fall short of its promises. Many AI applications could offer profound social value. However, employment impacts and privacy intrusions are increasingly tangible for citizens, while the promised benefits to their health, wealth and environment remain intangible. The response could include targeting more ambitious outcomes while making more modest promises.

Europe needs to run its own AI race. AI is at a pivotal moment for both regulation and technology development and the choices we make now could shape European life for decades to come. In running its own race, European AI can ensure a meaningful role for citizens to articulate what they expect from AI development and what they are ready to offer in return, to foster a competitive market that includes European small and medium-sized enterprises (SMEs), and to put adequate safeguards in place to align AI with European values and EU law.

Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 27 MEPs nominated by 11 EP Committees. The STOA Panel forms an integral part of the structure of the EP.




Disclaimer and Copyright statement

The content of all documents (and articles) contained in this blog is the sole responsibility of the author and any opinions expressed therein do not necessarily represent the official position of the European Parliament. It is addressed to the Members and staff of the EP for their parliamentary work. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.


Copyright © European Union, 2014-2019. All rights reserved.
