In a BBC investigation, more than a dozen insiders claimed that Meta and TikTok allowed toxic material to spread because it drives user activity and advertising revenue.
A former Meta engineer alleged that management relaxed restrictions on “borderline” harmful content to compete with TikTok, with internal research linking such posts to higher engagement.
Another former staff member said safety teams were often overruled by product teams focused on growth, with features launched despite known risks of abuse, hate speech and misinformation.
At TikTok, a whistleblower claimed moderation decisions were sometimes influenced by political considerations rather than user safety.
He said cases involving politicians were prioritised over reports of cyberbullying and the exploitation of minors.
Experts warn the platforms’ algorithms tend to promote anger-driven and divisive content, contributing to online radicalisation and the spread of harmful material.
Both companies have denied the allegations, insisting they have strict safety policies and continue to invest in protecting users, especially teenagers.