Who should get to decide the impact of Artificial Intelligence (AI) on health, wellbeing, and society? The Civic Data Cooperative put that question to four thought leaders at “The Big AI Debate: Who gets to decide?” on February 23rd, 2024. Our speakers spent an evening in Liverpool debating key questions on AI, data ethics, and public participation.
The Debate
Professor Iain Buchan, W.H. Duncan Chair in Public Health Systems, Associate Pro Vice Chancellor for Innovation, and Director of the Civic Health Innovation Labs at the University of Liverpool, opened the debate. Reflecting on Liverpool's history of public health data innovation, Professor Buchan challenged the speakers to hold a mirror up to the needs of the public, asking which public health problems AI could actually help solve.
Dr Emily Rempel from the Civic Data Cooperative invited the speakers to debate four core questions around AI and public participation. Professor David Leslie, Dr Arne Hintz, Dr Ed Pyzer-Knapp, and Reema Patel shared their expertise. To kick off the discussion, Dr Rempel shared the perspectives of eight Liverpool City Region residents on the impact of AI in Liverpool; you can watch their responses on YouTube.
Question #1: Public understanding of the impact of AI
When the media and publics imagine the future of AI, what comes to mind is often a science fiction dystopia or utopia in which AI either destroys or saves the world, respectively. These sociotechnical imaginaries, although important to critique and debate, often mask both the potential and the impact of AI now and in the future.
A 2023 report on developing better images of AI found that “abstract, futuristic or science-fiction-inspired images of AI hinder the understanding of the technology’s already significant societal and environmental impacts.”
Similarly, the UK’s newly established AI Safety Institute focuses on the most advanced current AI capabilities, aiming to ensure that the UK and the world are not caught off guard by progress at the frontier of a highly uncertain field.
How do you imagine the future of AI? How do cultural depictions and existential fears around AI mask the societal impact and potential of AI?
Oftentimes these narratives are generated by those who are socioeconomically empowered in the ecosystem. They are the ones who are in so-called positions of decision-making whilst being at the helm of big tech. It’s in their interests to have distractive narratives because it allows an outrunning of any sort of meaningful regulation. So we have to be very conscious that the stakes of the discourse are very high when it comes to public good and how communities would be able to handle this technology.
– Professor David Leslie, Director of Ethics and Responsible Innovation Research, The Alan Turing Institute
Question #2: Addressing social inequities and harms from AI
Artificial Intelligence, at its most basic definition, is the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence. Building systems capable of doing everything from writing a poem to discovering a new antibiotic requires billions of data points.
The data used to train AI can reinforce inequalities and racism, making it harder for some communities to access good healthcare or fair justice. A recent example is a study that found a hospital algorithm systematically assigning Black patients lower risk scores than equally sick White patients. Correcting the inequity in the risk score would increase the percentage of Black patients flagged for additional help in hospital from around 1 in 5 (17.7%) to just under half (46.5%).
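To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a biased proxy label can produce exactly this pattern. It assumes, as in the study above, that past healthcare spending is used as a stand-in for health need; all data, numbers, and variable names here are invented for illustration, not taken from the study itself.

```python
# Hypothetical illustration: when past spending is used as a proxy for
# health need, a group that spends less for the same level of need is
# systematically under-flagged for extra care.
import random

random.seed(0)

def simulate_patient(group):
    need = random.uniform(0, 10)             # true health need; same distribution for both groups
    access = 1.0 if group == "A" else 0.6    # group B faces access barriers, so spends less per unit of need
    cost = need * access * 1000              # observed spending: the proxy the "algorithm" sees
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# The "algorithm" flags the top 20% of patients by cost for an extra-help programme.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]

for group in ("A", "B"):
    flagged = sum(1 for p in patients if p["group"] == group and p["cost"] >= threshold)
    high_need = sum(1 for p in patients if p["group"] == group and p["need"] >= 8)
    print(f"group {group}: flagged={flagged}, truly high-need={high_need}")
```

Under these assumptions the two groups have identical distributions of need, yet almost no one in group B crosses the cost threshold, because the proxy understates their need. The bias lives in the label, not in any explicit use of group membership.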
What kind of inequities and social justice harms are you most concerned about being reinforced or created by AI technologies?
Discrimination is of course a key issue. Facial recognition and automated hiring systems have been shown to discriminate based on gender and ethnic background. There are cases of predatory pricing where car insurance becomes more expensive for people living in neighbourhoods with a large percentage of minorities… Underlying that is also the issue of data analysis being inevitably about discrimination. It’s what it does in many ways. To categorise. To sort. To understand better. To, in a way, differentiate service provision according to needs and other criteria, and we need to be careful about this. There are some possible outcomes in terms of discrimination. It is not a simple area, and not a yes or no answer, but there are problematic areas that need to be looked into.
– Dr Arne Hintz, Co-Director, Data Justice Lab, Cardiff University
Question #3: Aligning public preferences to industry use of AI
In a 2022/2023 survey of the UK population by the Ada Lovelace Institute, respondents ranked assessing the risk of cancer as the most beneficial use of AI, while driverless cars and targeted advertising were seen as the most concerning. Yet a 2022 report by the Department for Digital, Culture, Media and Sport found that health services ranked as the lowest adopters of AI technology.
What makes one industry more likely than another to adopt AI? And how can the mismatch between how industry uses AI and how the public wants it used be addressed?
One of the reasons we don’t see AI ‘curing cancer’ is because it’s quite hard to do, and just throwing huge amounts of data at it on its own is not going to fix it… For certain types of AI, the infrastructure required is unimaginable. Facebook just bought 350,000 GPUs. The UK government put a target for their national AI at 2,000. If there’s going to be a mismatch in the underlying capability, there’s going to be a mismatch in what gets done… Addressing that mismatch, if we are serious about wanting to be a superpower, we have to be serious about the way we enable that to happen. So, not only understanding that we need proper infrastructure that is accessible to anyone who has a good idea, but also understanding what that means in terms of how we physically construct these things.
– Dr Ed Pyzer-Knapp, Head of Research Innovation, IBM Research, UK & Ireland
Question #4: Leveraging public participation and deliberation in AI
Public consultation and deliberation is one way to widen who gets a say in how AI impacts society. The 2023 People’s Panel on AI developed several recommendations on where public groups should play an increased role in AI decision-making. These included a global governing body for AI that brings experts together with members of the public, as well as a continued national conversation on AI modelled on jury service, where UK residents are already trusted to make significant, life-affecting decisions.
Considering the existing state of democracy in the UK, how can public deliberation be used to better represent the public voice in AI?
Why do this? Because it is important and valuable that people come first… There is a sense that we have lost control over the ability to shape these technologies, partly because they don’t really evolve on a four-to-five-year time frame. So, our democratic systems and processes aren’t actually keeping up with the rate at which these technologies are developing and changing our societies and our economy. So that invites a rethink of how we actually do democracy.
– Reema Patel, Policy Lead, ESRC Digital Good Network
Next in the debate on AI in Liverpool
We ended the debate by opening the room to questions. Audience members wanted to learn more about everything from AI literacy to alternative forms of AI governance to accountability when things go wrong. We will use these core questions to plan future seminars and debates. For now, thank you to our speakers, audience members, and staff. This is a first step in the conversation on the future of AI in Liverpool.