Written by Philip Boucher, Mihalis Kritikos, and Carys Lawrie
This year’s STOA Annual Lecture focused on how media and other information are managed and distributed in the age of artificial intelligence (AI) – including how AI can be used to disseminate both information and misinformation – and how new measures can be implemented to counteract fake news. The lecture marked STOA’s 30th anniversary, and was dedicated to the memory of Rolf Linkohr (1941-2017), the inaugural STOA Chair and author of the European Parliament report that led to STOA’s creation.
STOA Chair Eva Kaili (S&D, Greece) opened the lecture by asking how far we can trust algorithms to make decisions for us, before introducing Carlos Moedas, European Commissioner for Research, Science & Innovation, who congratulated STOA on 30 years of championing evidence-based policy-making for EU citizens. He described AI as a political as well as a technical challenge and stated that, while AI in itself is not a threat, we need to act responsibly. Using literary quotes to highlight the difference between fact and fiction, he emphasised the need to explain the process of science, including its limitations and biases, to establish places of trust, and to promote research integrity through tougher enforcement of the rules. Echoing discussions at last month’s STOA event on rational optimism, he concluded that we should not fear intelligent machines, but machines that are incompetent. Rather than turn our back on the new media age, he argued, we should embrace its potential, while also raising awareness of its pitfalls. He also referred to the launch of a new High-Level Expert Group and public consultation on fake news and online disinformation, which aim to get a grasp of this phenomenon and formulate recommendations to combat it.
The keynote lecture was given by Nello Cristianini, Professor of Artificial Intelligence at the University of Bristol, who took the audience through a potted history of AI, the field that provides the algorithms now indispensable to the World Wide Web infrastructure. He went on to explain how the widespread use of ‘data in the wild’ as a natural resource to train algorithms has led to several forms of discrimination, and to wider concerns about the way our data is used: ‘We cannot function without being part of the system, but the system can monitor our activities’. Cristianini recognised the potential of forthcoming legislation, including the General Data Protection Regulation (which comes into force in 2018 and includes articles on the right to explanation and to opt out of algorithmic decision-making). He called for imaginative implementation (‘new ideas, laws and technologies’) that goes beyond the box-ticking disclaimers that followed previous legislation on cookies, to ensure that we can really benefit from the measures. He also explained that, when we agree to put our data online, we open the way for data brokerage and targeted election campaigns, and that mistakes happen all the time when machines use ‘data in the wild’.
You can access the presentations and watch the webstream of the event here, or continue the discussion using the hashtag #MediaInAi.
This set the scene for a panel discussion, moderated by David Wheeldon (Sky), which opened with Michail Bletsas (MIT Media Lab) suggesting that, while we know how to build new technologies, we do not always fully understand how they work. Nonetheless, he argued that, although attempts to replace journalists with AI have not worked out so well, we should fear not the technology itself, but naive leaders who do not understand its consequences. Michiel Kolman (Elsevier and International Publishers Association) noted that the sheer volume of content available makes it difficult to discern good-quality information. He argued that ‘tech giants’ have a responsibility to help us cut through low-quality information, and could learn some lessons from traditional publishers, who combine artificial and human intelligence in their peer-review and fact-checking processes. Andreas Vlachos (University of Sheffield) agreed on the need to combine minds and machines, discussing how human fact-checkers must rise to the challenge of scalability to deal with the huge volumes of content produced by modern media. Vlachos also warned that, while there is substantial demand for fact-checking services, people do not automatically prefer to believe fact-checked material, and suggested that we distinguish between facts, which should be checked, and estimates and opinions, which perhaps should not.
The audience then heard from two of those ‘tech giants’. First, Jon Steinberg described Google’s quest for improvements in their search algorithms, comparing it to the battle against spam. He explained how Google aims to help people to find quality news content, based on criteria such as accountability and transparency, while providing funding to quality newsrooms and banning publishers that gain money by disseminating misleading information. Richard Allan from Facebook picked up on the comparison with anti-spam measures and made a point about different responses to censorship, asking why we accept the filtering of spam emails but are not ready to do the same with other internet content. Allan described the relative ease of building new social media networks as an opportunity to foster technology competition, and asked whether it is up to industry or government to lead the struggle against fake news.
The panel concluded with a case study on ‘algorithms in action’, presented by Yannis Kliafas (Athens Technology Center) and Wilfried Runde (Deutsche Welle), who introduced the Truly Media platform, which supports journalists in presenting ‘the best obtainable version of the truth’, while raising the question of what the truth actually is. They also announced a new partnership with Amnesty International, which will begin using the platform next year.
The presentations inspired many interesting questions and comments from the audience, notably about how technical action and media literacy might be mobilised to counteract fake news, and how funds can be raised to cover the costs of producing ‘true news’.
Following the discussion, Eva Kaili and Paul Rübig (EPP, Austria) announced the launch of a new European Science-Media Hub (ESMH) as an authoritative centre for networking and education in the field of science journalism and a powerful tool for the dissemination of knowledge created at STOA, the European Parliamentary Research Service (EPRS), and more widely. The objective of creating this new platform, to be based in Strasbourg and operational from 1 January 2018, is to break down the existing silos and support communication between scientists, the media, and wider society, by promoting greater scientific literacy and enriching the work of media professionals through a central point of contact across Europe.
In his concluding remarks, Ramón Luis Valcárcel Siso (EPP, Spain), Vice-President of the European Parliament responsible for STOA, stated that he was hugely impressed by the speeches and the debate, the questions asked by the audience, and by the wide interest this event had attracted in general. He referred especially to the worrying influence exerted by algorithms today, and the need to establish a framework that will ensure their transparent, accountable and responsible use.