States and Congress have been enacting or debating different approaches to online “content moderation” by social media and other internet platforms. California’s “Content Moderation Requirements for Internet Terms of Service” bill (“AB 587”) goes into effect on January 1, 2024. In short, AB 587 requires social media companies to disclose the processes they use to take down or manage content and users on their platforms. AB 587 takes a somewhat different approach to social media content regulation than previously enacted laws in Texas and Florida. Those laws also address the content management practices of social media companies, but go beyond requiring disclosures and also prohibit specific conduct in order to restrict putative viewpoint discrimination. The Eleventh Circuit struck down the Florida law in part, holding that its content moderation requirements violate social media companies’ First Amendment right to exercise editorial judgment on their platforms. The Fifth Circuit, by contrast, upheld the similar Texas law, reasoning that content moderation based on viewpoint constitutes censorship and that a platform’s content moderation activity is not speech protected by the First Amendment.
Two years after the UK Government first announced its intention to introduce a new regime to address illegal and harmful content online, it published the Online Safety Bill (the “Bill”) on 12 May 2021. The Bill imposes duties of care on providers of digital services, social media platforms and other online services, making them responsible for content generated and shared by their users and requiring them to mitigate the risk of harm arising from illegal content (e.g., by minimising the spread of such content). The Bill also aims to ensure that users can express themselves freely online, and it requires platforms to consider the importance of freedom of expression when fulfilling their duties.