
Your round-up of the latest, greatest data stories

The Week in Data

Hello ODI Supporter,

 

Last Friday, the US President announced that he would direct all federal agencies to immediately stop using technology developed by Anthropic, with US Defence Secretary Pete Hegseth labelling the AI developer a “supply chain risk”. Anthropic, quite possibly with one eye on events in Venezuela and the enormous military build-up in the Middle East, had voiced concern that the US government might potentially use its tools, such as Claude, for "mass surveillance" and "fully autonomous weapons". The move saw a spike in popularity for Claude over the weekend, knocking ChatGPT off the top spot in Apple’s chart of free apps. But other events over the weekend took centre stage as US and Israeli forces launched coordinated attacks on Iran, which took out the supreme leader and many senior officials, and killed more than a thousand civilians. Retaliatory strikes by Iran have killed dozens across the Middle East.  

 

Anthropic’s AI model, Claude, has reportedly been used by the US military, incorporated into systems developed by Palantir to rapidly analyse vast amounts of data on potential targets, including drone footage, telecommunications intercepts and human intelligence. Such tools can reduce combat planning time enormously, but many fear human decision-making could be sidelined in the process. And while this use seems to have been a red line for Anthropic, OpenAI took a rather different view. OpenAI signed a contract with the Pentagon, and then amended it within days, with CEO Sam Altman admitting it “looked opportunistic and sloppy”. The company faced a backlash from users, with thousands cancelling subscriptions. Meanwhile, more than 1,000 Google and OpenAI employees signed an open letter calling for clear limits on the military uses of AI and pushing back against government officials attempting to pressure AI companies into abandoning ethical boundaries. By Wednesday, Altman admitted his company cannot control the Pentagon’s use of AI…just as the Anthropic chief was reported to be back in talks with the US defence department. We’ve seen the results this week of AI churning through huge amounts of data to identify threats and targets, but even with humans still in the loop, many experts worry human oversight could end up just being a form of “rubber stamping”.

 

Elsewhere, news emerged this week that a senior NHS official urged colleagues to add more patient data into a Palantir-built platform, whilst being paid to advise…Palantir! Matthew Swindells, the joint chair of four major north-west London hospital trusts, told fellow senior NHS executives in 2024 that patient data from GPs in north-west London should be added to a platform that Palantir had developed for the NHS, which was intended for operational data such as waiting lists, staffing and operating theatre schedules. Medical trade unions and NHS staff have voiced concerns about Palantir working in the health sector, given its ties to security and defence (see above). Board papers for one trust chaired by Swindells stated that he was “to be excluded from any decision-making in relation to Palantir”.

 

“Something strange is happening in the dark corners of data centres, smart phones and game engines…” Alan Warburton released his new film this week, Image Empire, an animated fairytale that helps describe how “large world models” fuse the real and the virtual together in potent new forms, bringing game logics to our working lives. The project was conceived by Alan as an experiment to see what might happen when art is launched on LinkedIn, so if you have an account, join in the conversation. If not, you can take a look over on the ODI website.

 

We’re hosting a hackathon with the Financial Reporting Council (FRC) to accelerate the use of UK structured data. We’re looking for data users, FinTechs, preparers, and subject-matter experts to join us in prototyping solutions and testing the limits of current data availability. It takes place on Friday 27th March, 9:30 am to 5 pm at the FRC offices in London (13th Floor, 1 Harbour Exchange Square, London, E14 9GE). If you’re interested, sign up now. 

 

And finally… A farmer in Cornwall is using AI, which can map where bees collect pollen, as well as help with planting, weeding and understanding the soil. Cameras and sensors around Ian Sexton’s beehives capture flight activity, while pollen gathered by the bees has been identified and traced to the specific fields where it was collected. Researchers at the University of Plymouth analyse the data using AI to monitor bee population health. Sexton is also using a robot in his lavender fields, which can help him with lots of time-saving jobs, such as planting, weeding, spraying, and collecting soil information, showing how technology can be deployed on smaller farms.

 

Until next time. 

 

David and Jo

 

PS: Ever wondered why measuring impact is hard, and why design matters? Check out this blog by Hannah Foulds from Applied Works and find out more.

 

PPS: Projects by IF and The National Archives are looking for people with experience publishing or re-using public sector data to take part in a one-hour, incentivised remote interview to help improve awareness and understanding of the UK Government Licensing Framework. Contact Amy Huckfield at amy@projectsbyif.com for more details.

Follow us on Bluesky

From the outside world

Trump orders government to stop using Anthropic in battle over AI use

BBC

US President Donald Trump has said he would direct every federal agency to immediately stop using technology from AI developer Anthropic.

 

Anthropic’s AI model Claude gets popularity boost after US military feud

The Guardian

Claude climbs to top of app store charts in US and UK after being blacklisted by Pentagon over ethics concerns.

 

Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’

The Guardian

Speed and scale of US military’s AI war planning raises fears human decision-making may be sidelined.

 

OpenAI makes changes to ‘opportunistic and sloppy’ Pentagon deal

Financial Times

Sam Altman says company is working with defence department on provisions covering mass surveillance.

 

OpenAI changes deal with US military after backlash

BBC

OpenAI says it is making changes to the "opportunistic and sloppy" deal it struck with the US government over the use of its technology in classified military operations.

 

Hundreds of Google and OpenAI employees sign open letter urging limits on military AI

TechRadar

AI workers call for limits on surveillance and autonomous weapons.

 

Sam Altman admits OpenAI can’t control Pentagon’s use of AI

The Guardian

CEO’s claims come amid increased scrutiny of US military’s use of the technology and ethics concerns from AI workers.

 

Anthropic chief back in talks with Pentagon about AI deal

Financial Times

Dario Amodei holding discussions with deputy to Pete Hegseth to reach a compromise on military use of the technology.

 

AI could be giving US lethal edge in Iran war - but there are dangers

Sky News

Artificial intelligence can parse vast amounts of data and use it to flag targets, rank threats and suggest priorities. But experts are worried human oversight could be eroded to a dangerous form of "rubber stamping".

 

NHS official pushed to add patient data to Palantir platform while also advising company 

Financial Times

Matthew Swindells has been joint chair of four major hospital trusts in north west London since April 2022.

 

AI helping farmer with data about bees and crops

BBC

Mapping where bees collect pollen, planting, weeding and understanding the soil are all at the forefront of artificial intelligence (AI) being used by a farmer in Cornwall.

 

From the ODI

Head of Research

Reporting to the Director of Research, the Head of Research is responsible for scoping, selling and delivering ODI’s research to support the creation of an open, trustworthy data ecosystem. 

 

Image Empire

AI technology is quickly establishing itself in every corner of videogame production, from world-building to storytelling and beyond, but is AI a powerful tool for creativity or a threat?

 

Solid World Feb 2026

Mapping the future: the next era of the decentralised web.

 

Data Ethics Professional #11: ethical AI in action

Facing the challenge of embedding ethical considerations into your operations, with Global Fishing Watch. 

 

Introduction to Data Ethics and Responsible AI 

Course, Tuesday 17 March, 1–4 pm, Book now

Master the tools to identify and mitigate ethical risks in AI and data projects.

Strategic Data Skills 

Course, Tuesdays from 31 March, 6 weeks, Book now

Empower your decision-making with practical data skills and AI-assisted learning - no coding required.

 

The Week in Data

The Week in Data is our weekly round-up of the latest news in data. If you haven't already, you can subscribe here.

Subscribe

Want to change how you receive these emails?

You can Manage preferences or unsubscribe from all emails from the ODI.

LinkedIn
Bluesky Social

The Open Data Institute, 4th Floor, Kings Place, 90 York Way, London, N1 9AG
