Hello ODI Supporter,
Last Friday, the US President announced that he would direct all federal agencies to immediately stop using technology developed by Anthropic, with US Defence Secretary Pete Hegseth labelling the AI developer a “supply chain risk”. Anthropic, quite possibly with one eye on events in Venezuela and the enormous military build-up in the Middle East, had voiced concern that the US government might use its tools, such as Claude, for “mass surveillance” and “fully autonomous weapons”. The move saw a spike in popularity for Claude over the weekend, knocking ChatGPT off the top spot in Apple’s chart of free apps. But other events over the weekend took centre stage as US and Israeli forces launched coordinated attacks on Iran, killing the supreme leader, many senior officials and more than a thousand civilians. Retaliatory strikes by Iran have killed dozens across the Middle East.
Anthropic’s AI model, Claude, has reportedly been used by the US military, incorporated into systems developed by Palantir to rapidly analyse vast amounts of data on potential targets, including drone footage, telecommunications intercepts and human intelligence. Such tools can reduce combat planning time enormously, but many fear human decision-making could be sidelined in the process. And while this use seems to have been a red line for Anthropic, OpenAI took a rather different view. OpenAI signed a contract with the Pentagon, and then amended it within days, with CEO Sam Altman admitting it “looked opportunistic and sloppy”. The company faced a backlash from users, with thousands cancelling subscriptions. Meanwhile, more than 1,000 Google and OpenAI employees signed an open letter calling for clear limits on the military uses of AI and pushing back against government officials attempting to pressure AI companies into abandoning ethical boundaries. By Wednesday, Altman admitted his company cannot control the Pentagon’s use of AI…just as the Anthropic chief was reported to be back in talks with the US defence department. We’ve seen the results this week of AI churning through huge amounts of data to identify threats and targets, but even with humans still in the loop, many experts worry human oversight could end up just being a form of “rubber stamping”.
Elsewhere, news emerged this week that a senior NHS official urged colleagues to add more patient data into a Palantir-built platform, whilst being paid to advise…Palantir! Matthew Swindells, joint chair of four major north-west London hospital trusts, told fellow senior NHS executives in 2024 that patient data from GPs in north-west London should be added to a platform Palantir had developed for the NHS. The platform was intended for operational data such as waiting lists, staffing and operating theatre schedules. Medical trade unions and NHS staff have voiced concerns about Palantir working in the health sector, given its ties to security and defence (see above). Board papers for one trust chaired by Swindells stated that he was “to be excluded from any decision-making in relation to Palantir”.
“Something strange is happening in the dark corners of data centres, smart phones and game engines…” Alan Warburton released his new film this week, Image Empire, an animated fairytale exploring how “large world models” fuse the real and the virtual together in potent new forms, bringing game logics to our working lives. Alan conceived the project as an experiment to see what might happen when art is launched on LinkedIn, so if you have an account, join in the conversation. If not, you can take a look over on the ODI website.
We’re hosting a hackathon with the Financial Reporting Council (FRC) to accelerate the use of structured data in the UK. We’re looking for data users, FinTechs, preparers, and subject-matter experts to join us in prototyping solutions and testing the limits of current data availability. It takes place on Friday 27th March, 9:30 am to 5 pm, at the FRC offices in London (13th Floor, 1 Harbour Exchange Square, London, E14 9GE). If you’re interested, sign up now.
And finally… A farmer in Cornwall is using AI to map where his bees collect pollen, as well as to help with planting, weeding and understanding the soil. Cameras and sensors around Ian Sexton’s beehives capture flight activity, while pollen gathered by the bees is identified and traced back to the specific fields it came from. Researchers at the University of Plymouth analyse the data using AI to monitor the health of the bee population. Sexton is also using a robot in his lavender fields to take on time-saving jobs such as planting, weeding, spraying and collecting soil information, showing how technology can be deployed on smaller farms.
Until next time.
David and Jo
PS: Ever wondered why measuring impact is hard, and why design matters? Check out this blog by Hannah Foulds from Applied Works to find out more.
PPS: Projects by IF and The National Archives are looking for people with experience publishing or re-using public sector data to take part in a one-hour, incentivised remote interview to help improve awareness and understanding of the UK Government Licensing Framework. Contact Amy Huckfield at amy@projectsbyif.com for more details.