Whistleblowers Reveal Meta and TikTok's Dangerous Engagement Strategies

A shocking report, based on whistleblower accounts and internal documents, reveals that social media giants Meta and TikTok knowingly amplified harmful content in pursuit of user engagement.

The social media giants made decisions that allowed more harmful content onto people's feeds, even as internal research showed how outrage fueled engagement, whistleblowers told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more borderline harmful content - which includes misogyny and conspiracy theories - into user feeds to compete with TikTok.

"They sort of told us that it's because the stock price is down," the engineer said.

A TikTok employee gave the BBC access to internal dashboards of user complaints, which showed how staff had been instructed to prioritize cases involving politicians over reports of harmful posts featuring children. The company made these decisions to maintain relationships with political figures and avoid threats of regulation or bans.

The whistleblowers provided insights into how both companies responded to the explosive growth of TikTok, with Meta launching Instagram Reels without sufficient safeguards to protect users, according to senior researcher Matt Motyl. Internal research indicated that comments on Reels showed significantly higher rates of bullying, harassment, and violence than other areas of Instagram.

Despite the alarming findings, Meta stated that it does not deliberately amplify harmful content for financial gain, while TikTok dismissed the claims as fabricated, asserting that it invests heavily in technology to prevent harmful content from being promoted.