Members' Research Service / January 18, 2024

United States approach to artificial intelligence

While efforts to regulate artificial intelligence (AI) both globally and in the United States intensify, the prospects for Congress passing broad AI legislation remain doubtful.


Written by Marcin Szczepański.

While efforts to regulate artificial intelligence (AI) both globally and in the United States intensify, the prospects for Congress passing broad AI legislation remain doubtful. In October 2023, President Biden issued a wide-reaching executive order on safe, secure and trustworthy AI. It is a positive step, but implementation will be challenging.

Current global artificial intelligence policy landscape

According to the September 2023 Global AI Legislation Tracker, ‘countries worldwide are designing and implementing AI governance legislation commensurate to the velocity and variety of proliferating AI‑powered technologies. Legislative efforts include the development of comprehensive legislation, focused legislation for specific use cases, and voluntary guidelines and standards’. Stanford University has reported a massive increase in the number of countries with laws containing the term ‘AI’ – growing from 25 countries in 2022 to 127 in 2023. While individual jurisdictions, including the EU, advance their own frameworks and approaches, multilateral efforts to coordinate them are also intensifying, be it through uptake of the AI principles of the Organisation for Economic Co-operation and Development or discussions in the United Nations and G7. The Center for Strategic and International Studies explains that, in essence, these unilateral and multilateral laws and guidelines focus on the need to ‘balance the potential risk of AI systems against the risk of losing the economic and social benefits the new technology can bring’.

Developments in the United States

Against this background, the United States (US) has also been taking steps to regulate AI. The first federal laws on AI were enacted over the past few Congresses, either as standalone legislation or as AI-related provisions and clauses placed in broader acts. Worth particular mention is the National Artificial Intelligence Initiative Act of 2020 (H.R. 6216), which established an American AI Initiative and guidance on AI research, development and evaluation activities at federal science agencies. Other acts have obliged certain agencies to drive AI programmes and policies across the federal government – such as the AI in Government Act (H.R. 2575) and the Advancing American AI Act (S. 1353). Altogether, in the 117th Congress, at least 75 bills were introduced that focused on AI and machine learning or contained related provisions; six of those were enacted. As of June 2023, the 118th Congress had introduced at least 40 AI-relevant bills, none of which has been enacted. Altogether, since 2015, nine bills have been passed; in November 2023, as many as 33 legislative pieces were still pending consideration by US lawmakers.

In October 2022, the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights, and in January 2023 the National Institute of Standards and Technology released an AI Risk Management Framework. In the summer of 2023, two further broad policy frameworks seeking bipartisan support – the SAFE Innovation Framework for AI Policy and the Blumenthal & Hawley Comprehensive AI Framework – were announced to guide Congress in developing future AI legislation. Furthermore, in April 2023 – in a joint statement – four federal agencies underlined that their enforcement powers applied to AI and that advanced technology should not be an excuse for breaking the law.
At state level, Stanford University reports that, between 2016 and 2022, 14 states passed legislation, the leader being Maryland with seven AI-related bills, followed by California with six, and Massachusetts and Washington with five.

President Biden’s executive order on AI

In an increasingly dense legislative environment, US President Joe Biden issued an executive order (EO) on the ‘Safe, Secure, and Trustworthy Development and Use of AI’ on 30 October 2023. It builds on earlier work, such as an EO directing agencies to combat algorithmic discrimination and the voluntary commitments secured from major US companies (such as Amazon, Google, Meta, Microsoft and OpenAI) to drive safe, secure and trustworthy AI development. The order covers eight policy fields.

First, it focuses on new standards for AI safety and security. For instance, it requires developers of the most powerful AI systems to share their safety test results and other critical information with the US government. Government agencies are tasked with developing standards, tools and tests to help ensure AI systems are safe, secure, and trustworthy. New standards will also be set to protect against the risks of using AI to engineer dangerous biological materials and protect US citizens from AI-enabled fraud and deception. The administration is due to establish an advanced cybersecurity programme to develop AI tools that find and fix vulnerabilities in critical software. The National Security Council and White House Chief of Staff have been tasked with developing a national security memorandum to direct further action on AI and security.

To protect citizens’ privacy from AI-related risks, the order prioritises federal support to accelerate the development and use of privacy-preserving techniques; it strengthens relevant research and technologies, requires evaluation of how agencies collect and use commercially available data, and calls for guidelines to help federal agencies evaluate the effectiveness of privacy-preserving techniques. To advance equity and civil rights, the EO calls for clear guidance for landlords, federal benefits programmes and federal contractors, and for measures to address algorithmic discrimination and ensure fairness throughout the criminal justice system by developing best practice on AI use. To protect consumers, patients and students, the order calls for measures to advance the responsible use of AI in healthcare and to shape its potential to transform education.

To support workers, the EO calls for the development of principles and best practice to mitigate harm and maximise the benefits of AI, and for a report on AI’s potential labour-market impacts and on ways to mitigate them through stronger federal support. To promote innovation and competition, the order seeks to catalyse research across the US and calls for new measures to support fair, open, and competitive AI ecosystems, and for efforts to attract highly skilled immigrants and non-immigrants with expertise in critical areas to study, stay and work in the US. To advance US leadership abroad, the EO requires the expansion of bilateral and multilateral AI engagements, accelerated development and implementation of vital AI standards, and promotion of the safe, responsible, and rights-affirming development and deployment of AI abroad to address global challenges.

Finally, to ensure responsible and effective government use of AI, the EO seeks new guidance for agencies using AI, more efficient and rapid contracting to acquire AI products and services, and government-wide acceleration in hiring AI experts. The EO was followed by detailed implementation guidance from the Office of Management and Budget.

Reactions

Most observers have characterised the EO using terms such as ‘landmark’ and ‘sweeping’. Polling of both Democratic and Republican supporters revealed strong bipartisan support for its main provisions. While some commentators have hailed a ‘rare example of bipartisan consensus on Capitol Hill’, the EO did not entirely escape criticism from Republicans, who faulted it for lacking a ‘light touch and market-driven approach’, for placing ‘regulatory burdens that could hinder the development of … technology’, and for resting on an unrealistic timetable. Conversely, Democrats hailed the EO’s ambitious scope. The Center for Strategic and International Studies (CSIS) sees the EO as confirmation that the US will not follow the EU risk classification embedded in the AI Act. The CSIS also sees a limit to what an EO can achieve unless Congress passes broad legislation, which it deems unlikely. It assesses that, unlike in the EU, the most likely outcome over the next few years ‘is a bottom-up patchwork quilt of executive branch actions’. The Brookings Institution considers that the EO sends a strong signal to the international community that the US is finally taking a stance on AI. It argues, however, that data privacy legislation is necessary to create effective and resilient AI. Industry has largely welcomed the EO, with some reservations concerning its heavy-handedness. Many experts consider it a good first step that entails major implementation challenges.

The EU and the US cooperate on AI under the Trade and Technology Council (TTC). They are implementing a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management. The TTC expert groups have listed 65 key AI terms essential to understanding risk-based approaches to AI, accompanied by their EU and US interpretations and shared EU-US definitions. They have also mapped the respective involvement of both the EU and the US in standardisation activities, aiming to identify relevant AI-related standards of mutual interest. They are compiling a catalogue of existing and emerging risks, including an understanding of the challenges posed by generative AI. In January 2023, the EU and the US signed an administrative arrangement on ‘AI for the Public Good’, aimed at boosting AI research collaboration.


Read this ‘at a glance’ note on ‘United States approach to artificial intelligence’ in the Think Tank pages of the European Parliament.

