
The Online Safety Act Becomes Law

The UK, in its pursuit of becoming “the safest place in the world to be online,” has recently transformed the Online Safety Bill into law. While the bill carries the aspirations of many who hope for a more secure online space, it has also drawn concerns from various stakeholders. Let’s take a closer look at this significant piece of legislation and its implications.

The Online Safety Bill had been in the works for several years, reflecting the government’s commitment to creating a safer online environment. The bill sought to address “legal but harmful” content, which includes healthcare disinformation, posts promoting suicide or eating disorders, and political misinformation. While some critics believed the bill granted excessive power to tech giants, many lauded it for its thoughtful approach to a rapidly evolving issue. The bill’s broad political support was notable. However, as it moved through Parliament, it expanded in scope and became more controversial. Its more than 200 clauses now require platforms to address a plethora of illegal content and to maintain a “duty of care” to their users.

The Online Safety Act is an Act of the Parliament of the United Kingdom that regulates online media and speech. The draft bill was introduced and published on May 12, 2021, and underwent its third reading on September 19, 2023. The BBC reports that, building on the 2019 Online Harms White Paper, the Act grants the relevant Secretary of State the authority to designate, suppress, or record a broad spectrum of “harmful” speech and media, subject to parliamentary approval. With royal assent, the Act was enacted into law in the United Kingdom on October 26, 2023. Its primary objective is to impose new obligations on tech companies regarding the design, operation, and moderation of their platforms. Among the most pressing issues it seeks to address are underage access to online pornography, the emergence of “anonymous trolls,” scam advertisements, and intimate deepfakes.

The Act establishes a new duty of care for online platforms, requiring them to address “harmful” content generated by their users, whether illegal or lawful. It also grants Ofcom the authority to restrict access to specific websites. Large social media platforms are required to preserve access to journalistic or “democratically important” content, including user comments on political parties and issues, and to refrain from removing it. The Act mandates Child Sexual Abuse Material (CSAM) scanning for all platforms, including end-to-end encrypted messengers, notwithstanding expert warnings that such a mechanism cannot be implemented without compromising users’ privacy. The Act confers substantial authority upon the Secretary of State to issue directives to Ofcom, the media regulator, concerning the execution of its duties, including the substance of its codes of practice. This has prompted concerns that the government’s unrestrained, emergency-like powers over the regulation of speech could erode Ofcom’s authority and independence.

Even though the bill is now law, online platforms have a grace period before full compliance is mandatory. Ofcom, responsible for enforcing the Act’s provisions, has outlined a phased approach. The first phase involves platforms responding to illegal content, notably terrorism-related and child sexual abuse material; the details of this phase are expected to be published on November 9th. The second phase covers child safety, pornography, and the protection of women and girls. Due in December 2023, Ofcom’s initial consultation will establish draft guidance for services that host pornographic content. Further consultations on child safety duties will follow in the spring of 2024, and a preliminary set of guidelines on safeguarding women and girls will be published by the spring of 2025.

Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher. In extreme cases, company executives could face imprisonment, underscoring the government’s seriousness about enforcing online safety. Phase three concerns additional duties for categorized services. These responsibilities, which include publishing transparency reports and implementing user empowerment measures, apply to services that meet specific criteria regarding their user base size or high-risk functionalities. In the spring of 2024, Ofcom intends to issue advice to the Secretary of State concerning categorization and to formulate guidance on its approach to transparency reporting. More information is available in Ofcom’s published roadmap.

Despite its noble intentions, the Online Safety Act has not been without its critics. Messaging platforms, such as WhatsApp and Signal, have voiced concerns over clauses that could potentially compromise end-to-end encryption. Some service providers even hinted at exiting the UK market rather than complying with these rules. Similarly, the Wikimedia Foundation expressed worries that the bill’s stringent child protection measures could pose challenges for platforms like Wikipedia, which minimizes data collection on users.

János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.
