News emerged this week of bias in the AI system used to detect benefit fraud in the UK. Documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP) revealed that an AI system showed bias according to people’s age, disability, marital status, and nationality when recommending whom to investigate for welfare fraud. An internal assessment found that the system incorrectly selected people from some groups more often than others. In the summer, the DWP had said the AI system posed no risk of discrimination or unfair treatment to customers, in part because the final decision on whether somebody receives a welfare payment is made by a human. However, the DWP documents reveal that no fairness analysis has been undertaken in respect of these potential biases.
A glitch appeared on ChatGPT this week that prevented the chatbot from saying the name ‘David Mayer’. Users tried a variety of prompts to trick the tool into uttering the name, but to no avail; requests were eventually met with the answer ‘I’m unable to produce a response’. This sparked concern about tech companies censoring information on their platforms, as speculation grew about who this chap was and why his name couldn’t be revealed. OpenAI finally fessed up that there was a glitch in their bot, and that one of their tools had mistakenly flagged the name and prevented it from being said. They declined to comment on whether the glitch was related to a ‘right to be forgotten’ procedure.
We published a report this week looking at the role of the UK government as a data provider for AI. Governments typically collect and steward vast amounts of high-quality data on their citizens and institutions, from official statistics releases to national archives. Our report outlines a set of actionable recommendations to help the government ensure equitable access to data by acting as a data provider for AI. We were also pleased to be involved in another piece of research, published by the Global Partnership on AI (GPAI) as part of the research project ‘From co-generated data to generative AI’. We’ve got more data-centric AI research coming before Christmas, so keep your eyes peeled over the next couple of weeks.
We’re thrilled to announce a new Data as Culture commission. ‘Constant Washing Machine’ was created by award-winning artist group Blast Theory, artists in residence for the University of Sheffield’s national research project FRAIM: Framing Responsible AI Implementation & Management. The work responds to and reflects the complex web of ideas, perspectives, and people at the heart of ‘Responsible AI’ practice. Join us for an online event marking the launch of the artwork on Tuesday 10 December, 16:30-18:45 GMT. Tickets are available now.
And finally, in news that is a) totally unsurprising and b) probably something that cropped up this time last year, AI has come to the rescue of people seeking inspiration when buying Christmas presents. A recent Accenture survey found that 95% of consumers agree that generative AI could help them find a better gift, while 90% value the recommendations AI provides. However…and let’s face it, unsurprisingly…some people find the generated suggestions rather generic and uninspiring. Humbug.
Until next time…
David and the Comms team
PS:
Join us for the first in our Data Ethics Professional webinar series, ‘Operationalising Data Ethics at PwC’, with Jessica Dervishi, Chief Data Officer at PwC and recipient of a TechWomen100 award, on Monday 13 January 2025 at 12:00. Book here.