Written by Mihalis Kritikos.
Exploring the relationship between ethics and technological innovation has always been a challenging task for policy-makers. Ethical considerations concerning the impact of research and innovation (R&I) are increasingly important owing to the quickening pace of technological innovation and the transformative potential and complexity of contemporary advances in science and technology. The multiplication of legal references to ethical principles and the mushrooming of ad hoc ethics committees indicate the institutional embedding of ethics not only into the scientific research process itself, but also into an increasing array of technological trajectories. Yet disruptive technologies now advance so rapidly that social and ethical norms often struggle to keep pace. But what if disruptive technologies were to challenge traditional ethical norms and structures?
In a traditional technological setting, ethics is mostly seen as a constraining procedural requirement of a legal nature that needs to be met at the outset of the scientific research endeavour. This is frequently the case in technological domains where human participation, clinical trials or animal experimentation are planned. Obtaining prior ethics approval remains a primary challenge for scientists and technology developers. However, as ethical requirements become stricter and new technology-related ethical challenges arise, concepts such as ‘ethics by design’ and ‘responsible innovation’ are gradually being mainstreamed in several policy contexts, such as nanotechnologies, gene editing and emerging information technologies. In these domains, ethics has gradually become part of the design process, building on methodologies such as the value sensitive design approach, ethically aligned design, the recently adopted blockchain ethical design framework, or the ongoing work on the IEEE P7000 model process for addressing ethical concerns during system design. These methodologies provide a way to ensure that social and moral values and ethical principles are protected, and that human values are accounted for in a comprehensive manner.
Can technology challenge established ethical norms and structures?
Most ethicists, regulators and policy-makers tend to treat moral beliefs as independent variables, as if these were immune to technological influences, exhibiting either moral futurism or moral presentism and neglecting the interaction, or even co-production, of technology and ethical norms. Besides being a long-standing object of ethical scrutiny, new technologies, which are inducing profound social and cultural changes, have started affecting the relative importance of various moral principles, human values and normative orientations. Their advancement may also reshape our assumptions and practices in relation to ethics, and may alter social norms around what is ‘acceptable’, ‘normal’ and ‘ethical’. New and emerging technologies, such as genome testing and profiling technologies, strain our traditional moral theories and may blur the lines of what ‘ethical’ means by influencing, for example, the distribution of social roles and responsibilities, moral norms and values, or identities.
Scientific notions and technological concepts such as gene editing and autonomous machines are penetrating existing ethical categories, and triggering the reconsideration of traditional ethical norms, such as autonomy and human responsibility. Wearable cameras pose challenges to traditional ethical guidelines around informed consent, anonymity and confidentiality, data protection and privacy, mostly because of the growing accessibility of personal information. Robotic technologies also affect the central categories of ethics: our concepts of agency and responsibility, and our value frameworks. Some scholars have even argued that ‘people’s moral judgements depend on the digital context in which a dilemma is presented’ and that, when faced with high-conflict moral dilemmas, people are more likely to opt for a utilitarian solution if they are responding on a smartphone rather than on a personal computer. Big data transforms traditional concepts of ethical research, and moves ethical analysis to less concrete notions, such as data discrimination and privacy-conscious engineering. With their ever-increasing power, breadth and multi-functional integration, emerging technologies are becoming increasingly intrusive: they interfere with private life and call into question the authority of institutional ethics-governance procedures to cope with technological novelties that invalidate traditional ethics-governance instruments.
What do the disruptive effects of technology upon ethics mean for European policy-making?
Achieving compliance with ethical standards has become a legal requirement in many areas of EU law including the rules for the commercial authorisation of medicinal and biotechnology products, the essential requirements for receiving EU research funds and the processing of personal data. At the EU level, all ex-ante ethical assessments of technological and scientific proposals are performed on the basis of ethics checklists that refer to a variety of ethical values, rights and principles. Such ethics checklists and compliance requirements, which are part of the legal framework for the evaluation and selection of EU-funded research proposals, are becoming increasingly incomplete as they do not recognise the dynamic character of morality and its interaction with technology. The same applies to the opinions of expert groups on ethics that often remain external to the dynamic and disruptive nature of technological development. In order to connect the ethics of technology more closely with the day-to-day work of R&I practitioners as well as with rapid technological advancements, reflexive procedures without an a priori, fixed ranking of principles are needed for the resolution of contextual value conflicts.
The recently adopted European Parliament resolution on civil law rules on robotics – comprising a ‘code of ethical conduct for robotics engineers’, a ‘code for research ethics committees’, a ‘licence for designers’, and a ‘licence for users’ – is a step in the right direction, as it introduces a detailed process-based architecture for technology ethics in a rapidly evolving technological domain. The charter on robotics contained in the resolution combines an ex-ante ethics-by-design approach with a reflexive framing and a meta-ethical analysis of the governance process employed for the embedding of ethics into the structures for the development of this disruptive technology. This legislative initiative resolution should be part of a wider paradigm shift that could include the introduction of new ethical principles (such as the right not to be measured, related to possible misuses of artificial intelligence and the internet of things, and the right to meaningful human contact, relating to possible misuses of care robots). It could also trigger the development of novel models of ethical assessment that will enrich evaluation procedures and initiate public debates on the need to depart from the prevailing ‘ethics as a constraint’ approach. Such a paradigm shift may need to introduce procedural steps that would allow technology design choices and socio-technical trajectories to be deliberated upon from an ethical viewpoint, as an essential part of the wider technology assessment and policy-framing practice.
All these considerations could constitute elements of a new EU-wide social contract on responsible innovation that might place ethics by design at the epicentre of the technology development cycle. Such a contract could render anticipatory technology ethics tools fully operational and highlight both the role and the limitations of ethical expertise as a source of epistemic authority that claims to represent the entirety of societal concerns. At the same time, the introduction of research integrity legal standards, ethical impact assessments, ethics audits or follow-ups, and harmonised accreditation procedures for research ethics committees may need to be considered as an immediate response to the ambiguity of claims, and to the challenges associated with value pluralism and moral uncertainty concerning emerging technologies.