Social media companies have defended their content moderation practices to a Parliamentary committee investigating online misinformation, arguing that they already have effective processes and systems in place to deal with the spread of false information on their platforms.
On 25 February 2025, the Commons Science, Innovation and Technology Committee (SITC) grilled tech giants X, TikTok and Meta as part of its inquiry into online misinformation and harmful algorithms.
Opening the session, committee chair Chi Onwurah said the topic is of unusually “strong public interest”, particularly given that false information can now be “disseminated at industrial scale”. She added that this particular session would primarily focus on the spread of disinformation during the Southport riots in 2024.
In the wake of the fatal stabbing of three girls in Southport on 29 July 2024, social media became awash with unsubstantiated rumours that the perpetrator was an asylum seeker of Muslim faith.
While this was later confirmed to be completely false, Islamophobic far-right rioting broke out in more than a dozen English towns and cities over the next five days, specifically targeting mosques, hotels housing asylum seekers, immigration centres and random people of colour.
Responding to MPs’ questions about social media firms’ response to the unrest, Chris Yiu, director of public policy for Northern Europe at Meta, said the organisation’s trust and safety teams took down around 24,000 posts for breaking its policies on violence and incitement, and a further 2,700 for breaking its rules on dangerous organisations.
“One thing that I think is challenging in these fast-moving incidents is that it can be difficult to establish the facts on the ground in real time,” he said. “I think that is something which we will need to have a reflection on and understand how we could do that.”
Community Guidelines
Alistair Law, director of public policy and government affairs for the UK and Ireland at TikTok, added that while the vast majority of content on the platform during the unrest did not break its rules, tens of thousands of posts containing “violent comments” were removed by the company for violating its community guidelines.
Echoing Yiu, Law also said it can be difficult to “establish the veracity of potential claims” in fast-moving events, adding that there needs to be collaboration throughout the “wider value chain” of information sharing, which includes broadcast media, as news coverage and social media content can create negative feedback loops around misinformation.
Wilfredo Fernández, senior director for government affairs at X, said the company has “very clear protocols in place on how to deal with this content and the challenges that arise in the aftermath of an event like this”.
He added that X’s “Community Notes” model was able to provide useful context to users, and that prominent far-right figures like Tommy Robinson and Andrew Tate received such notes in relation to their posts about the Southport attacks. “X has no power to place or remove a note,” said Fernández. “It’s completely done by people.”
In response to MPs highlighting instances of blue-tick X accounts making posts about the location of immigrants and encouraging rioters to go there (and which did not receive community notes), Fernández said the company did take various actions on tens of thousands of posts. “To sit here and say we get the right call every time, that would be wrong,” he said.
Labour MP Emily Darlington also challenged Fernández over extreme messages she received on the platform, in which she was threatened with hanging after sharing a petition to save her local post office.
Darlington said she reported the post as harmful and violent speech, in violation of X’s rules that state expressing a desire for violence is not allowed, and listed other violent or racist comments made by the same account.
Asked by Darlington whether this was acceptable, Fernández said the comments were “abhorrent”, but that while he would have content moderation teams review the account for terms of service violations, he could not make any assurances that it would be removed.
Meta was also criticised for its removal of third-party fact-checkers, with MPs citing examples of Meta users posting racist, antisemitic or transphobic comments on the platform.
“We have received feedback that… some areas of debate were suppressed too much on our platform and that some conversations, whilst challenging, should have a space to be had,” said Yiu.
Both Onwurah and Darlington pushed back, arguing that some things – such as statements denying the existence of trans people or deriding immigrants – should not be characterised as debate.
The representatives from Meta and TikTok said that while the scale of social media use presents clear content moderation problems, each firm takes down upwards of 98% of violent content.
Processes and Systems
Responding to questions about whether the Online Safety Act (OSA) being in force at the time of the riots would have changed their approach at all, every company said they already have processes and systems in place to deal with misinformation crises.
In the wake of the riots, Ofcom warned that social media firms will be obliged by the OSA to deal with disinformation and content that is hateful or provokes violence, noting that it “will put new duties on tech firms to protect their users from illegal content, which can include content involving hatred, disorder, provoking violence or certain instances of disinformation”.
The online harms regulator added that when the act comes into force in late 2024, tech firms will then have three months to assess the risk of illegal content on their platforms. They will then be required to take appropriate steps to stop it appearing, and act quickly to remove it when they become aware of it.
“The largest tech firms will in due course need to go even further – by consistently applying their terms of service, which often include banning things like hate speech, inciting violence and harmful disinformation,” said Ofcom, adding that it will have a broad range of enforcement powers at its disposal to deal with non-compliant firms.
“These include the power to impose significant financial penalties for breaches of the safety duties,” it said. “The regime focuses on platforms’ systems and processes rather than the content itself that is on their platforms.”
However, while a number of the Online Safety Act’s criminal offences were already in force at the time of the unrest – including those related to threatening communications, false communications and tech companies’ non-compliance with information notices – some said at the time it was unlikely any of these would be applicable to those using social media to organise racist riots.
“Instead, the police are likely to have to rely on offences under the Public Order Act 1986, which is the main piece of legislation which penalises the use of violence and/or intimidation by individuals or groups,” said Mark Jones, a partner at Payne Hicks Beach. “Whilst the home secretary may have said ‘if it’s a crime offline, it’s a crime online’, and whilst that may be correct, the Online Safety Act provides no additional supporting criminal law covering incidents of incitement of violence.”