Written by Marcin Grajewski.
On 14 June 2023, the European Parliament adopted its negotiating position on the draft Artificial Intelligence Act, strengthening some provisions to protect people better from potential violations of their fundamental rights. Following this vote, Parliament will start negotiations with the national governments and the European Commission on the final shape of the legislation, a decisive step towards making the European Union the world's first jurisdiction to regulate artificial intelligence (AI).
The recent launches of AI tools capable of generating direct textual answers to questions, notably ChatGPT, and the development of general-purpose AI technologies are expected to revolutionise the application of AI in society. The US and China are also working on AI regulation. In addition, the US and EU hope to negotiate a voluntary code of conduct for AI firms.
The Artificial Intelligence Act would regulate AI according to the level of risk: the higher the risk to individuals' fundamental rights or safety, the stricter the obligations on the system. The EU's proposed high-risk list includes AI in critical infrastructure, education, human resources, public order and migration management. Parliament's position on the proposal seeks to ban real-time remote biometric identification systems in publicly accessible spaces and most 'post' remote biometric identification systems, as well as AI predictive policing systems based on gender, race, ethnicity, citizenship status, religion or political orientation.
This note offers links to recent reports and commentaries from some major international think tanks and research institutes on artificial intelligence. More publications on the topic can be found in a previous edition of What think tanks are thinking.
ChatGPT and health care: implications for interoperability and fairness
Brookings Institution, June 2023
Metaverse economics part 1: Creating value in the metaverse
Brookings Institution, June 2023
Around the halls: What should the regulation of generative AI look like?
Brookings Institution, June 2023
Why the EU must now tackle the risks posed by military AI
Centre for European Policy Studies, June 2023
AI governance must balance creativity with sensitivity
Chatham House, June 2023
Future-proofing AI: regulation for innovation, human rights and societal progress
Foundation for European Progressive Studies, June 2023
Regulating AI: workers’ intellect versus Big Tech oligarchs
Foundation for European Progressive Studies, June 2023
Artificial Intelligence in the Covid-19 Response
Rand Corporation, June 2023
The regulators are coming for your AI
Atlantic Council, May 2023
The US government should regulate AI if it wants to lead on international AI governance
Brookings Institution, May 2023
Senate hearing highlights AI harms and need for tougher regulation
Brookings Institution, May 2023
Are the FTC’s tools strong enough for digital challenges?
Brookings Institution, May 2023
Machines of mind: The case for an AI-powered productivity boom
Brookings Institution, May 2023
The politics of AI: ChatGPT and political bias
Brookings Institution, May 2023
The age of competition in generative artificial intelligence has begun
Bruegel, May 2023
The UK’s competition authority is ready to regulate big tech
Centre for European Reform, May 2023
Strict ban on China will cost us dearly in science
Clingendael, May 2023
Artificial Intelligence enters the political arena
Council on Foreign Relations, May 2023
Here’s what to expect on China, AI, green energy, and more when EU and US officials meet in Sweden
European Policy Centre, May 2023
The US-EU Trade and Technology Council: Assessing the record on data and technology issues
European Policy Centre, May 2023
ChatGPT’s work lacks transparency and that is a problem
Rand Corporation, May 2023
Why US technology multinationals are looking to Africa for AI and other emerging technologies: Scaling tropical-tolerant R&D innovations
Atlantic Council, April 2023
The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment
Brookings Institution, April 2023
How artificial intelligence is transforming the world
Brookings Institution, April 2023
Artificial intelligence is another reason for a new digital agency
Brookings Institution, April 2023
Workforce ecosystems and AI
Brookings Institution, April 2023
The problems with a moratorium on training large AI systems
Brookings Institution, April 2023
With the AI Act, we need to mind the standards gap
Centre for European Policy Studies, April 2023
Recalibrating assumptions on AI towards an evidence-based and inclusive AI policy discourse
Chatham House, April 2023
AI has escaped the ‘sandbox’: Can it still be regulated?
European Policy Centre, April 2023
The world needs a time out on AI development
Friends of Europe, April 2023
Large language models: Fast proliferation and budding international competition
International Institute for Strategic Studies, April 2023
Comparing Google Bard with OpenAI’s ChatGPT on political bias, facts, and morality
Brookings Institution, March 2023
How generative AI impacts democratic engagement
Brookings Institution, March 2023
A high-level view of the impact of AI on the workforce
Bruegel, March 2023
Artificial intelligence adoption in the public sector: A case study
Bruegel, March 2023
Like it or not, the EU needs American cloud services
Centre for European Reform, March 2023
Access to data and algorithms: For an effective DMA and DSA implementation
Centre on Regulation in Europe, March 2023
ChatGPT has opened a new front in the fake news wars
Chatham House, March 2023
Artificial intelligence, diplomacy and democracy: from divergence to convergence
Friends of Europe, March 2023
Read this briefing on ‘Artificial intelligence’ in the Think Tank pages of the European Parliament.