The UK’s communications watchdog has announced new rules requiring social media and other platforms to keep children safe online. Tech firms have until 25 July to comply with more than 40 measures or risk fines of up to £18 million or 10% of their global revenue - and, in extreme cases, being shut down. The measures apply to apps, video platforms, search engines, and gaming sites. Social media algorithms that serve up content must filter harmful material out of children’s feeds, and platforms must have ‘effective’ age checks so that users under 18 can be identified and kept away from harmful content. While the Online Safety Act now provides a clear mandate to protect children, campaigners including the Children’s Commissioner for England, the Molly Rose Foundation, and the NSPCC say the measures do not go far enough. Just this week, Meta announced it was using AI in the US to find accounts it suspected of belonging to teenagers and move them to Teen Accounts. However, online child safety advocates said that young users of Instagram could still be exposed to ‘serious risks’ even with the new controls.
Elsewhere, many people have been vociferously venting their spleens over the arrival of Meta AI on WhatsApp, with one journalist recalling U2’s hubristic album launch with Apple in 2014. The chatbot is designed to “answer your questions, teach you something, or help come up with new ideas” and is “entirely optional”. But far from finding it helpful, many users are pretty livid about the intrusion. Especially as “entirely optional” doesn’t give you the opportunity to actually remove it. Or does it? Forbes ran an article saying WhatsApp had quietly given details of a way to turn it off, which was then trumpeted by The Standard. Now, it would be pretty remiss of me to just leave it at that. I’ll admit, I followed the instructions in those articles and… nada. So if you manage to remove it from chats and groups as described, let me know, yeah?
UK licensing bodies this week announced plans for a collective licence agreement that could allow authors to be paid when their works are used to train AI models. The government is currently reviewing responses to its consultation on copyright exceptions for data and text scraping by AI companies, and has suggested an opt-out model for copyright holders, which didn’t go down well, to say the least. AI companies themselves have said it is not economically viable to license all of the books needed to train their models. It is hoped the new scheme could be available this summer, allowing copyright holders to be paid for the use of their works.
We published a new report this week outlining the work behind AI data, which is often invisible to decision-makers and the public. The report sets out data’s role in AI supply chains and how its use is evolving within the changing AI landscape. And next week, Data as Culture curator Hannah Redler-Hawes will be speaking at an event: AI is shaping culture - who’s shaping AI? It takes place on Monday 28 April at 17:00 BST at 3Space in Brixton, London. Tickets are free, so get yours now!
And finally… web users have discovered that something rather amusing happens if you type a totally made-up idiom into Google and tag the word ‘meaning’ on the end. In a bid to be incredibly helpful, Google’s AI tool fills in the blanks and makes up a meaning in its overview, with some pretty funny results - all totally hallucinated. Who knows how long this apparent flaw will stay up there? Because, as we all know, you can’t lick a badger twice.
Until next time,
David and Jo