The UK's Online Safety Act is about to take effect, placing new duties on online platforms to identify and remove illegal content, with the regulator Ofcom gaining enforcement powers from March 17, 2025.
The Key Details:
The Online Safety Act mandates that companies providing online services—including social media platforms, search engines, messaging apps, gaming sites, and more—must remove illegal content from their platforms. The law requires these service providers to act proactively in identifying and removing harmful content, including content related to terrorism, hate speech, child abuse, fraud, and suicide encouragement.
Upon its passing, companies were given until March 16, 2025, to complete a risk assessment of their platforms and determine how they plan to address illegal content. The UK’s communications regulator, Ofcom, provided further guidelines on these risk assessments in December 2024. Starting March 17, 2025, Ofcom will be empowered to enforce these regulations, imposing penalties on non-compliant companies. The penalties can be as severe as £18 million or 10% of the company’s global revenue, whichever is higher. In extreme cases, Ofcom may take legal action to block access to non-compliant sites in the UK.
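To make the "whichever is higher" penalty ceiling concrete, here is a minimal illustrative sketch; the revenue figures are hypothetical examples, and this is a simplified reading of the rule described above, not Ofcom's actual calculation method.

```python
# Illustrative only: the penalty ceiling described above, computed for
# hypothetical revenue figures (not Ofcom's actual methodology).

FIXED_CAP_GBP = 18_000_000  # fixed ceiling of £18 million

def max_penalty(global_revenue_gbp: float) -> float:
    """Return the higher of £18m or 10% of global revenue."""
    return max(FIXED_CAP_GBP, 0.10 * global_revenue_gbp)

print(f"£{max_penalty(50_000_000):,.0f}")     # smaller firm: the £18m floor applies
print(f"£{max_penalty(2_000_000_000):,.0f}")  # large firm: 10% of revenue (£200m) applies
```

The takeaway is that the fixed £18 million figure only binds for smaller companies; for the largest platforms, the 10% revenue-based ceiling dominates.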
Industry experts offer varied perspectives on the law. Mark Jones, a partner at Payne Hicks Beach law firm, argues that compliance with the Online Safety Act should go beyond a simple checklist: companies need to be proactive in identifying illegal content and demonstrating accountability. On the other hand, Jason Soroko, a senior fellow at Sectigo, warns that the Act could harm smaller platforms by creating a financial burden and pushing explicit content to unregulated spaces. Additionally, automated systems used for content moderation might struggle with context, leading to over-blocking of legitimate content and fueling concerns about censorship.
Iona Silverman, a partner at Freeths law firm, views the Act as a potential tool to combat harmful online content but stresses that Ofcom must take a robust approach to ensure compliance from the largest platforms. She also highlights concerns over recent actions by major platforms like Meta, which have shown signs of potential non-compliance, such as discontinuing its third-party fact-checking program.
What Undercode Says:
The Online Safety Act represents a pivotal shift in the way online platforms are governed in the UK. It introduces significant responsibilities for service providers to ensure that harmful content is swiftly identified and removed, with strict penalties for those who fail to comply.
One of the most pressing challenges of this law is its broad scope. The Online Safety Act targets a wide range of online content, from terrorist propaganda to hate speech, fraud, and child sexual abuse. This comprehensive approach is crucial to ensuring that harmful material is not only flagged but eradicated across platforms. However, the task of monitoring, detecting, and removing such content presents considerable logistical and financial challenges, especially for smaller or independent platforms. These companies may not have the resources to implement robust content moderation systems, which could put them at risk of penalties or, in the worst case, force them to exit the market.
Additionally, automated content detection tools present another hurdle. While AI systems can identify specific patterns in content, they are still far from perfect. Context is often crucial in determining whether content is harmful, and automated systems are not yet sophisticated enough to capture nuance. As a result, platforms may face backlash for over-removing legitimate content, which could harm free speech and innovation online.
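To illustrate why context-blind automation over-blocks, here is a minimal, hypothetical sketch of a naive keyword filter; the block list and example posts are invented for illustration and do not reflect any real platform's moderation system.

```python
# Minimal, hypothetical sketch of context-blind moderation over-blocking.
# Keyword list and example texts are illustrative assumptions only.

FLAGGED_TERMS = {"terrorist", "attack", "bomb"}

def naive_filter(text: str) -> bool:
    """Flag text if it contains any blocked term, ignoring context entirely."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "Join the attack tonight",                        # genuinely harmful intent
    "Survivors recall the terrorist attack of 2017",  # legitimate news reporting
    "My code review was a bomb, total disaster",      # harmless slang
]

for post in posts:
    print(f"{'BLOCKED' if naive_filter(post) else 'allowed '} | {post}")
# All three posts are blocked: the filter matches patterns, not meaning,
# which is exactly the over-removal problem described above.
```

A pattern matcher treats the news report and the slang the same as the genuinely harmful post, which is why nuance-aware review, human or otherwise, remains necessary.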
Another concern is the vagueness surrounding what constitutes “harmful content.” With such a broad definition, platforms may be forced to over-block content, leading to potential censorship. This may be especially challenging for platforms that operate in multiple jurisdictions, each with its own interpretation of harmful content.
Despite these challenges, there is significant potential for the Online Safety Act to reduce illegal content on major platforms, particularly if Ofcom enforces the law with clarity and consistency. Iona Silverman’s point about Ofcom’s approach being critical for success cannot be overstated. Without robust enforcement, large platforms may sidestep compliance, as evidenced by Meta’s recent decisions to alter its content filtering system. The law’s ability to tackle harmful content hinges on its adaptability and the willingness of regulators to fine-tune it to reflect the complexities of the online ecosystem.
What the UK government and regulators need to recognize is that the success of the Online Safety Act requires collaboration with tech platforms, not just enforcement. These platforms should be seen as partners in ensuring online safety rather than mere subjects of regulation. Moreover, proactive measures, such as the development of better detection technologies and clear guidelines, will be vital in ensuring that content moderation is both fair and effective.
Fact Checker Results:
- The Online Safety Act is a well-intended but challenging piece of legislation aimed at curbing illegal online content.
- While its intentions are clear, the law's broad scope and compliance costs could weigh heavily on smaller platforms and strain automated moderation tools.
- The vagueness around “harmful content” leaves room for potential over-blocking, affecting free speech and innovation.
References:
Reported By: https://www.infosecurity-magazine.com/news/uk-online-safety-act-ofcom/