Instagram's tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study has claimed. Researchers also said the social media platform, owned by Meta, encouraged children 'to post content that received highly sexualised comments from adults'. The testing, by child safety groups and cyber researchers, found 30 out of 47 safety tools for teens on Instagram were 'substantially ineffective or no longer exist'.

Meta has disputed the research and its findings, saying its protections have led to teens seeing less harmful content on Instagram. 'This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today,' a Meta spokesperson told the BBC.

In addition to finding that 30 of the tools were ineffective or no longer existed, the researchers said nine tools 'reduced harm but came with limitations'. Only eight of the 47 safety tools they analysed were found to be working effectively, meaning teens were being shown content that broke Instagram's own rules about what should be shown to young people. This included posts describing 'demeaning sexual acts', as well as autocomplete suggestions for search terms promoting suicide, self-harm or eating disorders.

'These failings point to a corporate culture at Meta that puts engagement and profit before safety,' said Andy Burrows, chief executive of the Molly Rose Foundation - which campaigns for stronger online safety laws in the UK. It was set up after the death of Molly Russell, who took her own life at the age of 14 in 2017.

The study was carried out by the US research centre Cybersecurity for Democracy and experts including whistleblower Arturo Béjar.

The researchers shared screen recordings of their findings with BBC News, some of which included young children who appeared to be under the age of 13 posting videos of themselves. In one video, a young girl asks users to rate her attractiveness. The researchers claimed Instagram's algorithm 'incentivises children under-13 to perform risky sexualised behaviours for likes and views'.