The UK's Landmark Online Safety Bill Becomes Law Amidst Controversy
After years of debate, revisions, and intense scrutiny, the United Kingdom's ambitious and highly controversial Online Safety Bill has finally received royal assent, officially becoming law. Now known as the Online Safety Act, this legislation represents a significant attempt by the UK government to introduce comprehensive regulation for online platforms, with the stated goal of making the country “the safest place in the world to be online.” The journey to enactment was fraught with challenges, as lawmakers sought to balance the imperative to protect users, particularly children, from online harms against concerns regarding freedom of expression, privacy, and the technical feasibility of some of the legislation's requirements.
The Act introduces a wide range of new obligations for tech companies that operate services accessible in the UK. These obligations are designed to tackle various online harms, from the most egregious illegal content to material deemed harmful to children or even adults. Specific issues the legislation aims to address include:
- Preventing underage access to online pornography
- Combating the activities of “anonymous trolls”
- Preventing scam advertisements
- Addressing the nonconsensual sharing of intimate deepfakes
- Eradicating the spread of child sexual abuse material (CSAM)
- Disrupting the dissemination of terrorism-related content
The breadth of these targets highlights the government's intention to create a safer digital environment across multiple vectors of potential harm. However, the path to achieving these goals is complex, and the implementation of the Act will not be instantaneous.
A Phased Approach to Implementation by Ofcom
With the bill now law, the responsibility for its enforcement falls primarily to the UK's communications regulator, Ofcom. Recognizing the complexity and scale of the task, Ofcom plans to roll out the new rules in phases. This phased approach is intended to give online platforms time to understand and implement the necessary changes to comply with the Act.
The implementation roadmap outlined by Ofcom consists of three distinct phases:
Phase One: Tackling Illegal Content
The initial phase focuses on the most serious forms of illegal content, including obligations for platforms to respond effectively to terrorism-related content and CSAM. A consultation detailing proposals on how platforms should handle these duties is expected to be published imminently, signaling the immediate priority placed on these critical areas of online harm.
Phase Two: Child Safety and Pornography Access
The second phase delves deeper into child safety measures and preventing underage access to pornography. This involves platforms implementing robust systems to protect children from inappropriate content and ensuring that age verification or age assurance measures are in place for sites hosting pornography. An initial consultation specifically addressing pornography sites is anticipated in the coming months, with broader consultations on other child safety duties following later.
Phase Three: Transparency, Scam Ads, and Empowerment Tools
The final phase covers a broader set of obligations, including the requirement for certain platforms to produce transparency reports detailing their content moderation practices and compliance efforts. This phase also includes duties related to preventing scam advertisements and providing users with “empowerment tools” that give them greater control over the content they encounter on platforms. Ofcom expects to publish a register of “categorised services” – essentially identifying the large or high-risk platforms that will be subject to the most stringent obligations, such as transparency reporting – by the end of the following year.

This phased approach underscores the complexity of regulating the vast and dynamic online landscape. It acknowledges that platforms will need time and guidance to develop and implement the necessary systems and processes to meet the Act's requirements. Ofcom's role will be crucial in translating the legal text into practical codes of practice that platforms must follow.
Enforcement and Penalties
The Online Safety Act is backed by significant enforcement powers. Companies found to be in breach of their obligations under the Act could face substantial penalties: fines of up to £18 million (approximately $22 million at the time of the Act's passage) or 10 percent of their global annual turnover, whichever amount is higher. For the largest global tech companies, this 10 percent figure could translate into billions of dollars, providing a powerful financial incentive for compliance.
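To make the scale concrete, the maximum penalty is simply the greater of those two figures. A minimal Python sketch of the calculation follows; the turnover figures are invented for illustration and are not drawn from any real company's accounts.

```python
# Hypothetical illustration of the Act's maximum-fine rule: the greater of
# £18 million or 10% of global annual turnover. The turnover figures below
# are invented examples, not real company accounts.
def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
    """Return the statutory maximum fine for a given global turnover."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# A smaller firm: the £18 million floor applies.
print(f"£{max_fine_gbp(50_000_000):,.0f}")       # £18,000,000

# A hypothetical giant with £100 billion turnover: the 10% figure dominates.
print(f"£{max_fine_gbp(100_000_000_000):,.0f}")  # £10,000,000,000
```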
Furthermore, the Act includes provisions for holding senior managers accountable. In cases of serious non-compliance, company bosses could potentially face prison sentences. This personal liability clause is intended to ensure that responsibility for online safety is taken seriously at the highest levels within tech organizations.
Controversies and Criticisms
Despite the government's framing of the Act as world-leading legislation essential for protecting vulnerable users, it has been the subject of intense criticism from various quarters throughout its passage through Parliament. Two of the most significant areas of concern have revolved around the potential impact on encrypted messaging services and the implications for platforms that host user-generated content without collecting extensive user data.
The Encryption Debate
Perhaps the most vocal opposition has come from providers of end-to-end encrypted messaging services, such as WhatsApp and Signal. These platforms rely on encryption to ensure that only the sender and intended recipient can read messages, making it impossible for third parties, including the service provider itself, to access the content of communications. This strong privacy protection is a core feature for many users and is seen as vital for secure communication.
The controversy stems from a clause in the Act that empowers Ofcom to require tech companies to identify child sexual abuse material “whether communicated publicly or privately.” While the intent is to prevent the spread of horrific illegal material, encrypted messaging services argue that complying with this requirement would necessitate building “backdoors” or implementing client-side scanning technologies that would fundamentally undermine the security and privacy provided by end-to-end encryption. They contend that if they are forced to scan messages before they are encrypted or after they are decrypted on the user's device, the communication is no longer truly end-to-end encrypted. This, they argue, would compromise the privacy of all users, not just those engaged in illegal activity, and could set a dangerous precedent globally.
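To see what is at stake, consider what end-to-end encryption actually guarantees. The sketch below uses the PyNaCl library and is purely illustrative; real messengers such as WhatsApp and Signal use the far more elaborate Signal protocol (X3DH key agreement plus the Double Ratchet), but the core property is the same: a relaying server holds only ciphertext.

```python
# Minimal, illustrative sketch of end-to-end encryption using PyNaCl
# (pip install pynacl). Not the Signal protocol that real messengers use,
# but it demonstrates the same core guarantee.
from nacl.public import PrivateKey, Box

# Each user generates a keypair; the private half never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and his public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only opaque bytes; it cannot read the
# message. Only Bob's private key can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Client-side scanning would inspect the plaintext on the device before the encrypt step above (or after decryption), which is precisely the point at which critics say the end-to-end guarantee is broken.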
Representatives from these companies have stated that they would rather cease operating in the UK than compromise their core security principles. This potential withdrawal of widely used communication services highlights the significant tension between national security/child protection objectives and fundamental digital privacy rights inherent in strong encryption.
Implications for Platforms with Minimal Data Collection
Another area of concern has been raised by organizations like the Wikimedia Foundation, which operates Wikipedia. Wikipedia is a collaborative platform that hosts a vast amount of user-generated content and is deliberately designed to collect minimal data on its users; it does not, for example, collect users' ages. The Online Safety Act's strict obligations for protecting children from inappropriate content could pose significant challenges for such platforms.
Without collecting age data, it becomes difficult for Wikipedia to implement age-specific protections or content filtering measures that the Act might require. The Wikimedia Foundation has argued that applying the same stringent requirements designed for social media platforms that collect extensive user data to a service like Wikipedia is inappropriate and could impact its operational model and the principles of open knowledge sharing it embodies. This raises questions about how the Act will be applied to diverse types of online services and whether a one-size-fits-all approach is appropriate.
Ofcom's Stance and Supporters' Views
Amidst the criticism, Ofcom's chief executive, Melanie Dawes, has sought to clarify the regulator's role and push back against the narrative that the Act will turn Ofcom into a censor. Dawes stated that the regulator's new powers are “not about taking content down.” Instead, she emphasized that Ofcom's job is to “tackle the root causes of harm” by setting new online standards and ensuring that “sites and apps are safer by design.” She also stressed that Ofcom will take “full account of people’s rights to privacy and freedom of expression” in its implementation of the Act. This suggests an intent to balance safety objectives with fundamental rights, though critics remain skeptical about the practical application, particularly concerning encryption.
On the other side of the debate, child safety advocates have largely welcomed the passage of the Act. Organizations like the National Society for the Prevention of Cruelty to Children (NSPCC) view it as a crucial step forward. Peter Wanless, the NSPCC chief executive, hailed the Act as a “watershed moment” that will make children “fundamentally safer in their everyday lives.” He highlighted that tech companies will now be “legally compelled to protect children from sexual abuse and avoidable harm,” a key objective that has driven much of the political will behind the legislation.
The Path Ahead
With the Online Safety Act now on the statute book, the focus shifts to the crucial phase of implementation. Ofcom's consultations and codes of practice will determine how the Act's broad provisions are translated into specific requirements for online platforms. The tech industry will be closely watching these developments, assessing the technical and operational changes required for compliance. The debates surrounding encryption, in particular, are far from over, and the potential for legal challenges or platforms withdrawing services remains a possibility.
The Act represents a significant regulatory intervention in the digital space, reflecting a global trend towards greater government oversight of online content and platform responsibility. Its success will ultimately depend on Ofcom's ability to implement the rules effectively, fairly, and in a manner that genuinely enhances online safety without unduly infringing on privacy, security, and freedom of expression. The coming months and years will be critical in shaping the future of the internet in the UK under this new legal framework.
The passage of the Online Safety Act marks a pivotal moment in the UK's approach to regulating the internet. It is a complex piece of legislation with far-reaching implications for tech companies, online users, and the fundamental architecture of online communication. Celebrated by those focused on combating online harms, it continues to be viewed with apprehension by those concerned about its potential impact on privacy, security, and the open nature of the internet.