
New Czech Checks on Disinformation? How the Czech Constitutional Court Redefined the Boundaries of Alarmist Speech
The decision of the Czech Constitutional Court (further referred to as “the Court”) file no. I. ÚS 1927/24, concerning the spread of disinformation on social media, raised some eyebrows, including ours. The Court decided in favour of the complainant, who had been found guilty of the crime of spreading alarmist messages (§ 357 of Act No. 40/2009 Sb., Penal Code), holding that the lower courts had violated his right to freedom of expression. We claim that the argumentation on which the Court based its decision is flawed and may discourage relevant authorities from prosecuting similar cases of potentially dangerous disinformation.
The structure of our contribution is as follows. First, we introduce the complainant, a well-known figure of the (so-called) alternative media, and summarize the proceedings before the lower courts as well as the decision and argumentation of the Court. We then critique the Court's argumentation, specifically the lack of empirical basis for its claims about debate on social media, its refusal to qualify statements combining facts and opinions as false, and its broad emphasis on the protection of any political speech, which might harm rather than protect debate in the public interest.
The Complainant and the proceedings
Ladislav Vrabel, an activist linked to anti-establishment groups and disinformation networks, has been active since the immigration crisis of 2015. He gained prominence during the COVID-19 pandemic and after Russia’s invasion of Ukraine, spreading pro-Kremlin narratives and opposing Czech support for Ukraine, both online and at anti-government demonstrations he co-organized. Vrabel leads the political movement “Czech Republic First!”, which received 0.23% of the votes in the last EP elections. Vrabel benefits financially from his activities: he has raised one million CZK (around 40,000 Euros), mostly through contributions from his approximately 18,000 followers on Facebook, his main platform.
In this case, Vrabel was prosecuted for a speech in which he criticized the Czech government’s plans to purchase F-35 fighter jets and claimed that the government would use them to provoke a nuclear war with Russia. In a live broadcast on Facebook and YouTube, he alleged that Czechia aims to trigger nuclear retaliation from Russia and urged his audience to take action. The stream garnered over 20,000 views across both platforms.
Vrabel was prosecuted for the crime of spreading alarmist messages, which aims to tackle the spread of false messages that can arouse significant concern in their recipients. Those who intentionally spread such messages can be sentenced to up to two years in prison. The first-instance court found him guilty and imposed a suspended four-month prison sentence. The second-instance court returned the case, instructing the first-instance court to properly examine and evaluate all available evidence. The first-instance court reached a guilty verdict again (adding more thorough arguments), and the second-instance court approved the decision but replaced the suspended prison sentence with a less strict monetary penalty of 10,000 CZK (around 250 Euros). Vrabel appealed to the Supreme Court, which upheld the lower courts’ decision, stating that his intentional dissemination of false claims, given the geopolitical context, could be prosecuted as an alarmist message without infringing his right to freedom of expression.
The Court’s Argumentation
Article 17 of the Czech Charter of Fundamental Rights and Freedoms protects freedom of expression, allowing limitations only if they meet three conditions: they must (1) be prescribed by law, (2) pursue a legitimate interest, and (3) be necessary in a democratic society. In this case, the legal basis was the crime of spreading alarmist messages, with the legitimate interest being the prevention of public unrest. The lower courts considered the limitation necessary for that purpose; the Constitutional Court disagreed.
The Court emphasized the importance of freedom of expression in democratic societies, even in the Internet age, when technology allows all types of problematic content, including disinformation, to spread. On the other hand, according to the Court, the same technology allows for quick and effective mitigation of false information, as other users can provide factual corrections. As such, social media can serve as marketplaces of ideas, where the possibility of correcting any falsities reduces the need for state interventions, including criminalization. In Vrabel’s case, the Court also claimed that the debate about the falsity of his speech in the comment section of the posted video limited its potential to raise significant concerns.
Furthermore, the Court classified Vrabel’s speech as political speech, which requires stricter scrutiny of limitations and is often opinion-based rather than factual. As such, the speech qualifies as a hybrid statement, which combines a statement of facts with derived opinions and thus cannot be strictly classified as false. Additionally, the speech is, according to the Court, within the limits of exaggerated political speech, and the statement “enjoy the last few weeks of your life” is a “clear hyperbole”. The Court dismissed claims that such speech could cause significant public concern, noting that political speeches are often exaggerated and the audience is used to such exaggerations.
The critique
A Rather Naive Perception of Debates on Social Media
We find the Court’s assumption that social media can be likened to marketplaces of ideas, where falsities are efficiently tackled by correct information, rather naive and ungrounded in empirical evidence. The marketplace-of-ideas metaphor has already received criticism (see for example Marshall, W. P., The Truth Justification for Freedom of Speech) pointing to the shortcomings of this approach, including unequal access to the market (resulting in unequal opportunities to share one’s ideas) and the irrationality of the actors, amplified in the current post-truth era. Social media exacerbate these shortcomings, as they are curated by the principles of the attention economy, which treats human attention as a scarce resource and shapes the environment to maximize engagement – by showing users what they will most likely react to (whether supportively or not) and therefore spend more time on the platform. Sadly, the content users react to the most is of the extreme kind, such as hate speech or disinformation. Thus, the algorithms tend to amplify problematic content over less extreme content.
By showing users this kind of content, social media fuel cognitive biases such as selective exposure and confirmation bias, which drive users to seek information that is in line with their existing opinions and to more easily dismiss any conflicting information. Users can find themselves in so-called filter bubbles with like-minded users, and algorithms can lead them further into rabbit holes, showing them ever more extreme content to increase their engagement, while increasing polarization as well. These effects were well documented in the story of Carol and Karen, a Facebook internal study publicized by whistleblower Frances Haugen. Carol and Karen were fictional middle-aged women from North Carolina whose profiles were created for the experiment. In the beginning, Carol liked the official profiles of Donald Trump, Melania Trump, and Fox News, while Karen chose to follow Elizabeth Warren and Bernie Sanders. Both then followed the recommendations provided by the algorithms, which led Carol all the way to pages such as “Trump is Jesus” and groups promoting the QAnon conspiracy theory, and Karen to anti-Trump pages with content like a picture of Trump with an anus instead of his mouth.
Online fact-checking aims to tackle falsities by providing correct information, and current research shows that it can be an effective tool against online disinformation, albeit with lower effectiveness in cases of political misinformation, where the recipients support the person spreading it. Additionally, only a fraction of online content is subjected to fact-checking, and users are able to avoid fact-checking altogether. The empirical evidence for the claim that false online speech can be efficiently countered with more speech is therefore lacking.
In this case, the debate under Vrabel’s livestream could present differing opinions on the subject and potentially mitigate the concerns that his speech might have raised. On the other hand, even comments on social media are filtered by personalised algorithms, which present the most relevant comments to each user. That means users may, in some cases, never even encounter the comments containing factual corrections. The Court therefore overestimated the potential of the comment section to mitigate potential harm, while underestimating the ability of social media to drive divisions and proliferate problematic content. If this approach is followed by general courts in future cases, it might lead to underestimating the danger of similar cases of online disinformation and harmful speech.
The Court’s Evaluation of Hybrid Statements Effectively Prevents Their Future Prosecution as the Crime of Spreading Alarmist Messages
The Court classified the complainant’s speech as a hybrid statement, contrary to the lower courts’ classification of it as a mere statement of facts. Hybrid statements consist of statements of facts, which can be classified as either true or false, and derived opinions, which are assessed based on their adequacy (see the Constitutional Court’s decision file no. I. ÚS 2946/23). According to the Court, the statement of fact in the given case is the plan of the Czech government to buy F-35 fighter jets – a true piece of information – on which the complainant based his assumptions about a possible nuclear war between Russia and Czechia – a derived opinion, which can be evaluated as neither true nor false. The Court concludes that it is impossible to evaluate the veracity of hybrid statements in general, and thus that hybrid statements cannot fall under the scope of the crime of spreading alarmist messages.
However, Czech criminal-law doctrine has assumed that even hybrid statements can be evaluated as false information and prosecuted under the crime of spreading alarmist messages if the statements of facts are only insignificant parts of messages otherwise containing distorted statements. The Court therefore departed from previous academic views on the problem and set constitutional-law boundaries for the prosecution of the crime. That can significantly limit any future prosecution of similar cases of online disinformation. No matter how exaggerated a speech is or what excessive claims it contains – as long as it is a hybrid statement based on a correct factual claim – it cannot be evaluated as false and prosecuted, on freedom-of-expression grounds.
Different Shades of Political Speech
Additionally, the Court put strong emphasis on the protection of political speech. An exact definition of political speech does not exist, but, in general, any speech concerning issues of public interest (such as politics and public affairs) is considered necessary for democratic debate, and its limitation requires stricter scrutiny, regardless of whether it is actually capable of contributing to the public-interest debate (see Lingens v. Austria, 8 July 1986, no. 9815/82; Monnat v. Switzerland, 21 September 2006, no. 73604/01, § 58; I. ÚS 4022/17; I. ÚS 1933/24, para. 19; II. ÚS 577/13, para. 20).
In our opinion, in the current era of social media, which allows the proliferation of all types of problematic content, the strong protection of any political speech – no matter its actual content or form – can harm public-interest debate rather than foster it. Social media allow politicians and political activists to spread ‘cheap speech’, which Richard L. Hasen describes as speech, including political speech, that often lacks social value and can be used to facilitate disinformation campaigns or proliferate other content that might undermine democratic processes. Divisive statements and fearmongering (such as those used by Vrabel) aim to flood public debate with further divisions for the personal gain of the speaker rather than to contribute to any debate in the public interest. The proliferation of such content can also be used to raise money: Vrabel has repeatedly asked his supporters for financial contributions, including in the livestream containing the F-35 speech for which he was prosecuted (and although Vrabel claims to spend the money on activities related to the anti-COVID protests, he previously spent his supporters’ money on gold ingots).
Intentional falsehoods and ‘cheap speech’ harm public debate. Therefore, courts should evaluate the content and form of the given political speech. If the speech contains any of the features described above, such as fearmongering, intentional sowing of division, overexploitation of emotions, or a connection to the speaker’s financial interests, the scrutiny applied to its limitation should not be as strict as for regular political speech.
Conclusion
We have shown that the Constitutional Court’s justification is not based on adequate empirical evidence, does not follow any particular case law, and contradicts previous doctrine. As the Constitutional Court’s decisions can serve as models for similar cases, we are concerned that this decision may discourage relevant authorities from prosecuting similar cases of potentially dangerous disinformation and problematic online content. This approach may be harmful, as Czechia is constantly targeted by Russian propaganda and disinformation campaigns, and false narratives about the involvement of the Czech government in the war in Ukraine (such as Vrabel’s speech here) frequently feature among online disinformation narratives in Czechia. The potential chilling effect of this decision on the future prosecution of disinformation is concerning. Rather than upholding freedom of expression in Czechia and fostering democratic debate, the decision may make life easier for Russian propaganda: by departing from the criminal-law doctrine on the crime of spreading alarmist messages, it makes it almost impossible to prosecute any hybrid-statement disinformation as such a crime.
Tina Mizerová is a PhD researcher at the Institute of Law and Technology at the Faculty of Law of Masaryk University in Brno. Her research focuses on regulation aimed at tackling online disinformation. Besides law, she graduated in journalism and political science, has experience in the media, and serves as a lecturer and board member at Fakescape, an organization focused on media literacy education.
Jan Martínek is currently finishing his Master’s in Law at Masaryk University in Brno. In his thesis, he focuses on the regulation of internet platforms. He is a lecturer and board member at Fakescape (an organisation focused on media literacy education) and an intern at the Analytics Department of the Constitutional Court.