Social media giants made decisions that allowed more harmful content onto people's feeds after internal research into their algorithms showed how outrage fueled engagement, whistleblowers have told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more "borderline" harmful content - which includes misogyny and conspiracy theories - into users' feeds to compete with TikTok.

"They sort of told us that it's because the stock price is down," the engineer said.

A TikTok employee gave the BBC rare access to the company's internal dashboards of user complaints, as well as other evidence showing how staff had been instructed to prioritize several cases involving politicians over a series of reports of harmful posts featuring children.

"Decisions were made to maintain a strong relationship with political figures to avoid threats of regulation or bans, not because of the risks to users," the TikTok staffer said.

The whistleblowers who spoke to the BBC documentary, Inside the Rage Machine, offer a close-up view of how the industry responded following the explosive growth of TikTok, whose highly engaging algorithm for recommending short videos upended social media, leaving rivals scrambling to catch up.

A senior Meta researcher, Matt Motyl, said the company's competitor to TikTok, Instagram Reels, was launched in 2020 without sufficient safeguards. Internal research shared with the BBC showed comments on Reels had a significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram.

The company assigned 700 staff to grow Reels, while safety teams were refused two specialist staff to work on protecting children and 10 more to help with the integrity of elections, another former senior Meta employee said.

Motyl gave the BBC dozens of what he described as high-level research documents showing "all sorts of harms to users on these platforms." Among them was evidence that Facebook was aware of problems caused by its algorithm.

The algorithm offered content creators "a path that maximizes profits at the expense of their audience's wellbeing," according to one internal study, which added that "the current set of financial incentives our algorithms create does not appear to be aligned with our mission to bring the world closer together."

It said Facebook "can choose to be idle and keep feeding users fast-food, but that only works for so long."

In response to the whistleblowers' claims, Meta said: "Any suggestion that we deliberately amplify harmful content for financial gain is wrong." TikTok said these were "fabricated claims" and that the company invested in technology that prevented harmful content from ever being viewed.