Social media giants made decisions that allowed more harmful content onto people's feeds after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users' attention.

An engineer at Meta, which owns Facebook and Instagram, described how senior management told him to allow more borderline harmful content - including misogyny and conspiracy theories - into users' feeds to compete with TikTok.

In response to the whistleblowers' claims, Meta said: "Any suggestion that we deliberately amplify harmful content for financial gain is wrong." TikTok said the claims were fabricated and that it invested in technology to prevent harmful content from ever being viewed.

The whistleblowers who spoke to the BBC documentary, Inside the Rage Machine, offer a close-up view of how the industry responded following the explosive growth of TikTok, whose highly engaging algorithm for recommending short videos upended social media, leaving rivals scrambling to catch up.

Calum, a young man who said he was radicalised by recommendation algorithms from the age of 14, described how the videos he was shown stoked anger and resentment, drawing him towards extreme viewpoints. His account underlines concerns about how algorithms can shape the views of young and impressionable users.

A whistleblower from TikTok voiced deep concerns about the platform's safety priorities, claiming that political content was given greater attention than urgent cases involving children - a prioritisation, they argued, that put young users at risk.

As the whistleblowers continue to push for change, their testimonies point to mounting pressure on social media companies to handle harmful content more responsibly.
