How Donald Trump’s Assassination Attempt Highlighted a New Era in Online Discourse

In the wake of the attempted assassination of former President Donald Trump, the landscape of social networks and content moderation has shifted significantly. The incident not only highlighted how rapidly information spreads via social media but also underscored a growing sense of resignation among tech giants regarding their role in moderating content.

The shooting itself demonstrated how swiftly social media can disseminate information. Platforms like X (formerly Twitter) and Threads provided real-time updates, videos, and reactions faster than traditional news networks. This immediacy, coupled with the availability of primary source material, highlighted the unique role social media plays in modern news dissemination. Photographers captured historic images that were uploaded and shared almost instantaneously, giving the public a raw and unfiltered look at the event.

Despite this efficiency in reporting events as they happened, content moderation was conspicuously absent. Graphic images of the shooting circulated widely without any apparent attempt to moderate them, and conspiracy theories proliferated across platforms, especially on X, with little to no intervention to curb their spread. This unregulated flow of misinformation suggests a significant shift in how platforms manage content, away from their previously aggressive stance on moderation. In the immediate aftermath, social media became a battleground of news, commentary, analysis, conspiracy theories, fabrications, and jokes. Unlike previous incidents, in which platforms swiftly intervened to curb misinformation, this one saw a markedly reduced role for trust and safety teams.

Ironically, many of the conspiracy theories on X came from the left. Liberals, fearful that the shooting would give Trump an insurmountable advantage in the election, baselessly suggested that Trump’s campaign had orchestrated the incident, a departure from the right-wing conspiracy theories that have become typical of social media. Right-wing platforms were hardly quiet, however: the Trump-backed Truth Social and conservative forums such as Patriots.win were rife with their own theories, ranging from accusations of a Democratic plot to claims of an internal “deep state” conspiracy. Some even alleged that the Secret Service’s failure to prevent the attack was deliberate, attributing it to a weakening of the agency by diversity initiatives.

Historically, platforms like Facebook, under Mark Zuckerberg’s leadership, took an active stance on content moderation. Zuckerberg famously resisted becoming the “arbiter of truth”, instead establishing a pseudo-Supreme Court for content moderation across Facebook’s platforms. The intensity and volume of moderation efforts peaked during the COVID-19 pandemic, when platforms came under immense pressure to control misinformation; social networks were heavily scrutinized and often criticized for how they handled it, leading to significant controversies and even lawsuits. The Supreme Court recently ruled that the government could communicate with social networks about misinformation, a decision that reflects the high stakes of the content moderation debate. Yet, as the Trump shooting unfolded, it became apparent that the era of proactive content moderation might be fading.

As Casey Newton writes on Platformer, the role of social media in spreading misinformation illustrates a deeper societal issue: the public’s demand for conspiracy theories. Comedian Josh Gondelman’s viral post on X encapsulated this sentiment, acknowledging the public’s appetite for outlandish narratives. This phenomenon, described by researcher Renee DiResta and journalist David French, highlights how the rise of social media and the decline of mainstream journalism have produced “bespoke realities,” in which people seek out information that aligns with their pre-existing beliefs.

During the Trump Administration, platforms faced enormous pressure from lawmakers, regulators, and journalists to restrict the spread of misinformation. Under Elon Musk’s leadership, however, X has significantly reduced its trust and safety workforce and ceased labeling posts from elected officials, an ideological shift that has changed how misinformation is managed and disseminated on the platform. Unlike in 2020, when platforms actively labeled misleading posts, this time they largely allowed misinformation to spread unchecked. This hands-off approach can be problematic, but it also raises questions about the balance between free speech and misinformation control. Platforms should intervene to prevent violence against minority groups incited by false information, yet they must also navigate the challenge of moderating real-time events where the truth is still emerging.

The idea that social networks bear any responsibility for the spread of harmful information appears to be losing traction, marking a pivotal change in the information environment. The shift is further evidenced by the public turn of figures like Mark Zuckerberg, who once championed Facebook’s content moderation efforts but has since retreated from that stance, focusing instead on product development and innovation within Meta and leaving the contentious topic of content moderation to others, such as Nick Clegg.

In the summer of 2023, with the launch of Threads, its new Twitter alternative, Meta announced a significant change in its approach to content moderation, one intended to spare it the political controversies its moderation policies on Facebook had provoked. Rather than Meta deciding what content is appropriate, Threads lets users control the type of content they see, similar to features recently introduced on Facebook and to the model of platforms like Mastodon and Bluesky. Meta’s goal is to avoid the pitfalls of past moderation controversies by decentralizing content control while still enforcing guidelines against hate speech and harassment. This approach departs from traditional social media models, signals a move toward a more user-driven internet experience, and symbolizes a broader industry trend toward resignation, even nihilism, regarding the efficacy and value of content moderation.

The diminishing value proposition of content moderation is clear: social media companies no longer see significant value in maintaining extensive moderation operations. This shift can be read as a response to relentless criticism and to the inherent difficulty of defining and enforcing moderation policies, and platforms have gradually walked back their earlier commitments to curbing misinformation and harmful content. The resignation reflects a broader societal trend in which the consensus on the role and responsibilities of social networks has fractured. With no agreement on what content moderation should entail, many in the industry have thrown up their hands in defeat; the belief that moderation is no longer a worthwhile endeavor has taken hold, allowing tech giants to focus on other priorities.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.
