Written by Philip Boucher.
The word ‘innovation’ is often used as shorthand for improved technical, economic and social processes. However, any specific innovation involves the redistribution of costs and benefits, creating winners and losers. For some, regulation of technology should be avoided in case it hinders innovation, while others see regulation as essential, to mitigate risks on the path to innovation. However, regulation and innovation are not a zero-sum game. Debates about regulatory (in)action and its impact on innovation would benefit from greater specificity about which innovation paths are considered desirable, for whom, and how policy choices would help to achieve them. This paper explores the relationship between regulation and innovation in the context of artificial intelligence (AI).
AI is a collection of technologies that have the capacity to analyse their environment and respond ‘intelligently’ with some degree of autonomy. Notoriously difficult to define, AI straddles the boundary between current and future technology. Today’s AI plays a substantial and increasing role in our personal and professional lives. In some cases, algorithms are almost visible, as they personalise news feeds, recommend products and give directions. More often, they inform (and sometimes implement) ‘upstream’ decisions in industrial, commercial and public-sector processes. Those affected can rarely understand or even examine these algorithms, although they do have profound impacts, both positive and negative. Tomorrow’s AI is often projected in wild scenarios ranging from the obsolescence of employment with health and wealth for all, to mass surveillance, disempowerment and unseen depths of inequality. More likely, we will see moderate elements of both extremes, with impacts distributed unevenly across populations, although this depends to some extent upon decisions about regulatory (in)action and the resulting innovation pathways.
Like AI, innovation is difficult to define and evaluate. While instinctively considered a good thing, any specific innovation involves the redistribution of costs and benefits in ways that are not always welcomed by everyone and may only be revealed years later. Innovation in AI is no exception. We often hear that AI regulation should be avoided in case it hinders innovation, although proponents of this approach rarely specify how this would promote an innovation path while at the same time ensuring optimal distribution of the costs and benefits involved. Following the lead of the better regulation guidelines, it is good practice to first set out desirable criteria for innovation paths and outcomes – including the distribution of costs and benefits – before examining how a range of possible regulatory approaches, including a baseline approach of no regulatory action, could help to achieve them.
In many ways, regulation has earned its bad reputation when it comes to innovation. Some heavily regulated sectors have been slow to respond to appetites and opportunities for innovation, perhaps because of inexperience or protectionism. Some regulations encouraging the adoption of ‘best available techniques’ and ‘end-of-pipe solutions’ have promoted short-term innovation at the expense of more ambitious transformative innovation. However, regulation provides the necessary preconditions to enable market access for innovations, provides firms considering major investment with certainty, and can be used to articulate ambitious visions for development. Regulation is also important in establishing the conditions and context of innovation, including as regards labour, capital, certainty and competition. There are also cases where innovation leads to disruptive social, economic or security impacts that demand regulatory responses.
In some cases, the received wisdom that ‘regulation hinders innovation’ may hold true. However, it is more likely that a combination of carefully designed and implemented measures for at least some aspects of technology development would provide the optimal conditions for a desirable innovation path.
In the context of AI, four broad approaches to the regulation-innovation relationship can be identified. Some are characterised by the ‘carrot’ of incentivising specific AI applications to reap their benefits and seize the opportunities they offer, others by the ‘stick’ of restraining specific AI applications in order to mitigate their risks. These approaches are not mutually exclusive; they can go hand in hand, alongside moments of regulatory inaction, to optimise the conditions for the preferred innovation path.
The first broad approach is to directly regulate AI innovation in order to shape how algorithms are developed and applied. ‘Carrot’ policies could include mission-oriented innovation programmes to promote ‘moonshots’ that deliver benefits far beyond what can be achieved incrementally, for example, to bypass automation of private vehicles in favour of an ambitious shared-ownership model. ‘Stick’ policies can promote innovation by responding to concerns that might inhibit potential adoption, for example, with moratoria on controversial applications such as biometric identification and lethal autonomous weapons.
The second broad approach is to shape the context in which AI is developed and adopted in order to influence the pace and direction of innovation. ‘Carrot’ measures could include boosting capital, skills, data and SME support, as well as completing the digital single market to reduce friction in terms of legal compliance, administrative burden and consumer choice. ‘Stick’ measures could include digital taxes and penalties for uncompetitive practices. Again, these stick policies can promote innovation by enhancing competition, which has a demonstrably positive link to innovation.
The third broad approach is to respond indirectly to specific outcomes and impacts as they emerge. While such measures may have a weaker influence on the pace and direction of innovation itself, they play an important role in ensuring that the innovation path remains desirable. Examples include providing a safety net for workers at risk of displacement and ensuring the continued effectiveness of measures to defend fundamental rights with regard to democratic processes, non-discrimination and consumer protection. Equitable distribution of costs and benefits, alongside protection measures for citizens and consumers, could be key conditions for the acceptability of innovation paths.
The fourth broad approach involves innovation in regulation itself, changing how policies are designed and implemented to better fit the specificities of AI. Novel approaches, such as ‘regulatory markets’, would see firms compete to meet demands set by regulators. Temporary spaces or ‘sandboxes’ can liberate regulators and innovators to perform controlled experiments with policies and technologies and observe the results before deciding whether to scale them up. Anticipatory innovation governance also recommends early-stage experiments to establish constant feedback loops between innovation and regulation.
Well-crafted regulation is not only compatible with AI innovation, but is its essential precondition. Poorly designed policy choices – including both regulation and inaction – can damage both AI development and public confidence. There are no simple solutions to complex socio-technical challenges, but there certainly are some emerging lessons for policy-makers:
– Promote synergies, leaving behind the ‘zero-sum game’ assumption that regulation is in direct competition with innovation.
– Take a long-term view, as restricting some developments in the short term can deliver innovation payoffs in the long term by ensuring competition, inspiring public trust or leapfrogging incremental steps.
– Level the playing field, as a more even distribution of costs, benefits and opportunities is conducive to innovation.
– Focus on objectives and outcomes, as AI develops more quickly than policy, so detailed prescriptions could quickly become outdated.
– Regulate innovatively, first making use of novel approaches such as sandboxes, experiments and co-regulation, and then harmonising, to benefit from best practices, economies of scale and interoperability.
– Recognise diversity, taking account of local and regional conditions to ensure a fairer distribution of costs, benefits and opportunities.
– Deploy innovative procedures for public administration and procurement to improve performance, accumulate in-house expertise and promote a culture of innovation in the public sector.
– Promote confidence that citizens’ and consumers’ rights will be respected, and that firms’ regulatory environment will remain stable and supportive.
Indeed, policy-makers are increasingly embracing a range of regulatory options as a means – and not a barrier – to achieve the right kind of AI innovation.
Read this ‘at a glance’ on ‘What if AI regulation promoted innovation?’ in the Think Tank pages of the European Parliament.
Listen to the policy podcast ‘What if AI regulation promoted innovation?’ on YouTube.