Challenges to the design of ethical algorithms

The embedding of ethical principles in algorithmic decision-making presents challenges of its own, given the proprietary nature of algorithms and the need to safeguard privacy. Certain questions need to be asked before implementing an algorithmic decision-making system. Should the ethical assumptions in the algorithm be transparent and easy for users to identify? Are the development teams that create them sufficiently diverse? Will people affected by these decisions have any influence over the system? Even if algorithms were made transparent, how could they be understood by stakeholders with varying levels of technical literacy? Could ethical principles such as fairness and the right to privacy be encoded in the system and, if so, what should those principles be, and who should decide on their choice and weighting? Do our societies have universal moral standards that can be codified? In response to a mounting number of news articles about the ethics of algorithms, various market solutions offering certification of 'algorithmic accuracy, bias and fairness' are starting to emerge, including Pymetrics' AI fairness toolkit Audit-AI, Facebook's Fairness Flow and ORCAA. Recognition of the need to operationalise moral judgement for the development of autonomous vehicles, and to integrate artificial moral agents capable of managing complexity into new technologies, has led to a series of algorithm design initiatives based on various ethical theories, such as one by the National Science Foundation. The emergence of artificial intelligence and advanced machine learning may lead to self-driving cars being equipped with an 'ethical knob' that could set key patterns of behaviour. IBM has meanwhile developed a set of open-source software tools to help developers deal with black-box algorithms and understand how the artificial intelligence they use makes decisions.
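The kind of check such bias and fairness audits perform can be illustrated with a minimal sketch. One widely used test of disparate impact is the 'four-fifths' (80 %) rule from US employment-selection guidelines: the selection rate of a protected group should be at least 80 % of that of the reference group. The data and group labels below are entirely hypothetical, and this is only one of many possible fairness metrics, not the method of any particular toolkit.

```python
# Minimal sketch of a disparate-impact audit of an automated decision
# system. All decisions and group labels below are illustrative.

def selection_rate(outcomes):
    """Fraction of favourable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 fail the common 'four-fifths' rule."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical screening decisions (1 = favourable outcome)
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # reference group: 7/10 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # protected group: 3/10 selected

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
print("Passes four-fifths rule" if ratio >= 0.8 else "Fails four-fifths rule")
```

An external audit would run such checks across all protected groups and decision contexts, which is precisely the kind of systematic scrutiny certification schemes aim to standardise.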
What does developing ethical algorithms mean for European policy-making?

When it comes to the ethical impacts of algorithmic decision-making systems, there are as yet no established certification models and procedures that expressly address ethical considerations, including bias and transparency, in the domain of algorithms. This is partially due to the lack of existing standards on these issues to certify against. Developing ethical principles and codes for algorithms means identifying the decision-making principles and norms, and allocating the roles and responsibilities within the decision system. The IEEE global initiative for ethical considerations in artificial intelligence and autonomous systems, the Toronto Declaration and Facebook's Fairness Flow all indicate the need to set socially oriented goals and benchmarks for the development of algorithms. Given, however, that algorithms are unstable objects of ethical scrutiny, their ethics could still be investigated through algorithmic impact assessments, and 'algorithmic audits' may need to become a legal requirement when implementing any system of this kind. Such audits would address ethical questions, such as the legitimacy of using an algorithmic decision-making system in certain contexts (e.g. evidence-based sentencing or lethal weapons), and would be performed by ethics committees, accreditation bodies and certification agencies. Their aim should be to evaluate the proposed uses of algorithmic decision-making in highly sensitive and/or safety-critical application domains, and to investigate suspected cases of rights violations in the same technological context. Moreover, these instruments could help system developers and decision-makers revisit some of their own assumptions about what an algorithm actually is, and explain decisions in areas such as credit, for instance.
Interestingly, the requirement for data controllers to provide data subjects with 'meaningful information about the logic involved' in an automated decision-making process – introduced by the General Data Protection Regulation (GDPR) – may pave the way for the development of practical algorithmic ethics that address virtues, consequences and norms. Shedding light on the assumptions built into an algorithm, or disclosing the system's code or information about its logic, demands a careful examination of the relevant rules on intellectual property rights, which may set limits on accessibility. To conclude, EU policy-makers have a unique opportunity to lead the world in the ethical regulation of the digital revolution, by promoting the development of a general ethical framework governing the design, implementation and deployment of algorithms. Algorithms should remain under human oversight and control, and be responsive to bias complaints and to the findings of reports on other undesired effects.
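For simple models, 'meaningful information about the logic involved' can be made quite concrete: the system can report each input's contribution to the final score. The sketch below assumes a hypothetical linear credit-scoring model; all feature names, weights and the approval threshold are invented for illustration, and real systems (and GDPR compliance) are considerably more involved.

```python
# Hypothetical linear credit-scoring model. For such models, a GDPR-style
# explanation can list each input's contribution to the decision.
# All features, weights and the threshold are illustrative assumptions.

WEIGHTS = {"income_keur": 0.8, "years_employed": 0.5, "missed_payments": -2.0}
THRESHOLD = 30.0

def explain_decision(applicant):
    """Return the score, the decision and per-feature contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    return score, decision, contributions

applicant = {"income_keur": 42, "years_employed": 3, "missed_payments": 2}
score, decision, contributions = explain_decision(applicant)
print(f"Decision: {decision} (score {score:.1f}, threshold {THRESHOLD})")
# List contributions from most to least influential
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.1f}")
```

For black-box models the same question requires dedicated explanation techniques rather than a direct read-out of weights, which is exactly where the tension with intellectual property and trade secrecy noted above arises.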
Read the full 'At a glance' note, 'What if algorithms could abide by ethical principles?', on the Think Tank pages of the European Parliament.