Written by Lieve Van Woensel with Victoria Joseph.
Since early 2015, the Science and Technology Options Assessment (STOA) Panel at the European Parliament has been experimenting with science and technology foresight to provide Members with insights that will help them anticipate possible future developments.
Foresight is a fundamental part of policy-making, as policy-making is often about preparing for the future. The scenario-based foresight approach used by STOA has the potential to inspire anticipatory policy-making and support the EP’s preparedness for future challenges and opportunities related to long-term trends in science and technology.
The first scientific foresight pilot project helped Parliament’s Legal Affairs Committee (JURI) and the working group on Robotics and Artificial Intelligence to prepare a legislative initiative resolution, adopted in the EP plenary in Strasbourg on 16 February 2017, through which MEPs called on the European Commission to propose rules on robotics and artificial intelligence.
Besides foresight-based policy, other ideas inspired by the STOA study on robotics fuel public debate, for instance a tax on the added value of robots as compensation for possible job losses, and calls for ethical guidelines for robot designers and users.
Foresight essentially has the power to teach us about the possible opportunities and challenges that may be the consequences of ongoing techno-scientific developments. It helps us feel more comfortable with uncertainty, because we know more about the types of consequences we have to prepare for, and how we can work towards desirable futures and avoid undesirable ones.
This briefing explains how STOA uses the originally designed scientific foresight approach in practice.
The first key element of the approach is choosing topics which might cause disruptive societal changes in the future. We make sure that we have the most accurate and up-to-date information on the topic.

Analysis of possible impacts follows – in a facilitated brainstorming environment – from multiple perspectives, involving experts from multiple disciplines, especially social scientists, who discuss the state of the art and possible impacts with technical experts, together with relevant stakeholders. Multiple perspectives are guaranteed by ‘STEEPED’ (Social, Technological, Economic, Environmental, Political, Ethical and Demographic), a scheme used as a checklist to provide seven different lenses on a topic and ensure that all possible outcomes are explored.

The outcomes of the facilitated brainstorming sessions are usually long lists of hopes and fears: potential intended and unintended impacts of possible future developments of these techno-scientific trends, including soft impacts (those that are not easy to measure – for example, effects on health, the environment and safety – and for which it is not easy to assign responsibility).

The identified possible consequences of future technology developments are incorporated into a set of diverse imagined scenarios, constructed with the help of professional scenario developers. Finally, exploring these imagined possible future scenarios results in a list of opportunities and challenges.
It is these opportunities and challenges that provide guidance for MEPs in anticipating possible future developments through the work they do today.
Read this briefing on ‘Foresight: a policy tool for anticipating technology trends’ on the Think Tank pages of the European Parliament.
Having worked with STOA in the early 1990s on projects which started with a technology assessment of security policies and progressed to concrete new policy recommendations that have subsequently been implemented, I really welcome this report and approach.
The potential unforeseen consequences of certain developments in robotics and AI were addressed last week by some of the founders of top AI and robotics companies (such as the founders of Google’s DeepMind and Tesla). They called on the UN to ban the future production of autonomous robots that can decide for themselves whom to kill, saying it would create a fourth revolution in warfare. The focus of the experts is the UN Convention on Certain Conventional Weapons (CCW) committee, which is discussing an outright ban. However, groups such as ICRAC (the International Committee for Robot Arms Control), who are participating in the CCW, have highlighted the coming internal security role of armed robots at borders etc. – and this aspect will not be covered by the CCW process. I suggest that STOA work with ICRAC to convene an expert meeting on this dimension.
Dr Steve Wright
Reader in Applied Global Ethics
Leeds Beckett University