Breaking the platforms: why nations are going to war with social media
Brazil’s block on X and France’s arrest of Telegram’s founder signal a new, aggressive approach to reining in disinformation and extremism online.
A direct line can be drawn from the blocking of X (formerly Twitter) in Brazil this past weekend to the arrest of Telegram's founder and CEO in France last week. Both incidents, along with the actions likely to follow them, reflect a growing recognition among governments and authorities that the regulatory approaches used over the past decade to address the harms of social platforms have failed; they were never effective, and realistically never had a chance of working. Authorities are coming to the realization that to effectively curb the spread of disinformation, hate speech, incitement to violence, dangerous conspiracy theories that destabilize regimes, and foreign interference from countries like Russia, China, and Iran, they must adopt more extreme measures. In the current environment, actions such as arresting top executives and blocking platforms may be the only ways to compel companies to take seriously their responsibility to limit harmful content.
Modern social media, which can trace its origins to the launch of Facebook in February 2004, was founded on an optimistic vision: to connect the world through digital communication, creating a global village where family members, friends, and acquaintances, near and far, could exchange experiences, opinions, pictures, and memories, fostering understanding and closeness among people from different places.
It is difficult to pinpoint exactly when this vision began to falter, if it ever existed beyond entrepreneurs' enthusiastic declarations to investors and the public. However, 2016, when Russia used Facebook to disrupt the U.S. presidential election while Mark Zuckerberg appeared either uncomprehending or indifferent, stands out as a key moment. Today, the state of the various platforms stands in stark contrast to that original vision.
Facebook has become a battleground dominated by irrelevant content, X (formerly Twitter) has turned into a playground for trolls and bots spreading conspiracies and dangerous narratives, WhatsApp is a conduit for fake news and destructive rumors, Telegram is the platform of choice for terrorist organizations and extremist incitement, and Instagram and TikTok promote dubious influencers. All of these platforms, without exception, bombard users with distorted, false, inciting, and dangerous content. The stories are numerous and well-known: Bin Laden's inciting letter that became a viral hit on TikTok, Iranian bots spreading divisive narratives on Twitter, Hamas accounts using TikTok to disseminate horrific images from the October 7th attack, and the pervasive incitement, anti-Semitism, and racism on all platforms.
Over the years, governments and regulators have tried to stem the tide of harmful content spilling from social platforms into the offline world, contaminating everything it touches. CEOs and executives have been summoned to testify before legislative bodies, regulators have devised plans in cooperation with companies to remove harmful content quickly, laws have been passed to give governments broader tools to fight and penalize platforms, and the public and media have pressured companies to take action.
In response, the companies have insisted that they are fighting back, developing automated tools, and hiring staff to identify and remove harmful content swiftly. They publish periodic reports boasting of their success in removing harmful content, much of it, they claim, automatically before users even see it. It’s possible that their efforts have made some difference—without them, the situation might be far worse. However, for the average user, who struggles to scroll through their feed or read comments without encountering excessive incitement, it doesn’t feel that way.
Governments also feel that not enough has been done. Now, after years of trying conventional methods with unsatisfactory results, it seems that the time has come to take more drastic measures, hitting platforms where it hurts the most—in their revenue and the personal freedom of their leaders—in the hope that this will finally bring about change.
Brazil offers perhaps the clearest example of the need for new and radical thinking in dealing with the threats posed by social media platforms. In 2022, as the country was inundated with fake news and conspiracy theories during the presidential elections, the Supreme Court granted Justice Alexandre de Moraes sweeping powers to order platforms to remove content that threatened the democratic regime.
Moraes used this authority extensively, issuing thousands of orders for platforms to remove inflammatory and harmful posts, sometimes within hours. He also ordered X to block 140 accounts, mostly of right-wing figures, including prominent commentators and members of Congress, who were spreading information that questioned the legitimacy of the 2022 election results, in which then-incumbent President Jair Bolsonaro lost. Experts say this was one of the most effective efforts to purge fake news from online discourse, crippling right-wing attempts to illegally overturn the election results.
However, in recent weeks X stopped complying with Moraes' orders. After the justice threatened to arrest the company's legal representative in Brazil, X shut down its offices in the country. In response, Moraes ordered Brazil's communications regulator to block access to X within 24 hours. He also imposed daily fines of 50,000 reais (about $8,900) on citizens attempting to bypass the block, for example by using a VPN to disguise their location; it remains unclear how such users could be identified without highly intrusive surveillance.
While the decision may be extreme and has drawn criticism from experts (Prof. David Nemer of Harvard's Berkman Klein Center for Internet & Society told the New York Times, "It's too much. A warning to all of us."), what options remain when a foreign company refuses to comply with a lawful court order, shuts down its local offices to avoid sanctions, and defies efforts to enforce compliance? Moraes also froze the local funds of Musk's satellite internet service, Starlink, in an attempt to collect $3 million in fines imposed on X, but this measure alone may not be sufficient to compel the company to comply.
The same logic underlies the recent arrest and indictment of Telegram’s founder and CEO, Pavel Durov, in France. Durov built Telegram as an anti-censorship platform, a haven for all voices—even those of Nazis or Hamas terrorists—and he prided himself on defying demands from governments and regulators to remove harmful content. His arrest and indictment in France were among the few ways to pressure the company into action.
The cases of X and Telegram are extreme, involving platforms that have flagrantly and deliberately ignored orders to remove or block harmful content (in Telegram's case, this even includes child sexual abuse material, whose harm no one disputes). These platforms have responded with contempt, belligerence, and provocation, leaving Brazil and France with little choice but to act as they did.
However, the other platforms are not much better—just more sophisticated in their operations. They don’t openly declare themselves as free-for-alls, nor do they brazenly defy court orders, but in practice, they do not do enough to cleanse themselves of the harmful content that overwhelms them. And once a precedent is set—such as arresting a founder or blocking a platform—the threshold for similar actions against other companies becomes significantly lower.
Other countries are now closely watching what is happening in Brazil and France. The outcomes of these actions will likely set the course for future measures. If Telegram and X cave to pressure, it is only a matter of time before we see similar actions taken against other platforms and their executives in other parts of the world.