On Thursday 3rd October, Europe’s highest court ordered Facebook to take down content that was sufficiently similar to comments that had been ruled defamatory.
Specifically, the European Court of Justice ruled that “EU law does not preclude a host provider such as Facebook from being ordered to remove identical and, in certain circumstances, equivalent comments previously declared to be illegal.”
Furthermore, the judges said: “EU law does not preclude such an injunction from producing effects worldwide, within the framework of the relevant international law.” This would appear to be at odds with the court’s ruling last week that the so-called “Right to be Forgotten” – whereby search engines must delist outdated or irrelevant information from searches for individual names – does not extend worldwide in territorial scope.
In the most recent case, Austrian MP Eva Glawischnig-Piesczek sought an order that Facebook Ireland remove a comment published by a user, registered under a false name, that was harmful to her reputation, including certain allegations that the Austrian courts had already found to be insulting and defamatory. The post, although on the user’s own page, could be accessed by any Facebook user.
Under the eCommerce Directive, a host provider such as Facebook is not liable for stored information if it has no knowledge of its illegal nature, or if it acts expeditiously to remove or disable access to that information as soon as it becomes aware of it. At the same time, however, the same law bans general monitoring of information uploaded by users.
The Austrian Supreme Court asked the ECJ to strike a balance between the different interests at stake, to rule on the territorial reach of illegal information take-downs and whether similar content should be proactively removed.
The ECJ decided that national authorities could order a host provider “to remove information, the content of which is identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information.”
The decision adds that hosting providers should remove information covered by the injunction or block access to that information worldwide, as long as EU countries comply with international law.
The ruling has, surprisingly, united the tech industry and digital rights activists in condemning the decision – albeit for different reasons.
European digital rights group EDRi was appalled that the court explicitly did not rule out the possibility of Facebook et al using automated monitoring.
Hosts can be ordered to remove content “…provided that the monitoring of and search for the information concerned are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality … and provided that the differences in the wording of that equivalent content are not such as to require the host provider to carry out an independent assessment of that content (thus, the host provider may have recourse to automated search tools and technologies).”
“This ruling could open the door for exploitative upload filters for all online content,” said Diego Naranjo, EDRi’s Head of Policy. “Despite the positive intention to protect an individual from defamatory content, this decision could lead to severed freedom of expression for all internet users, with particular risks for political critics and human rights defenders by paving the road for automated content recognition technologies.”
“If the obligation to block future content applies to all users on a large platform like Facebook, the court has in effect considered it to be in line with the eCommerce Directive that courts demand automated upload filters and blurred the distinction between general and specific monitoring in its previous case law. EDRi is concerned that automated upload filters for identical content will not be able to distinguish between legal and illegal content, in particular when applied to individual words that could have very different meanings depending on the context and the intent of the user,” he continued.
According to EDRi the judgement seems to be departing from previous case law regarding a ban on general monitoring obligations (for example Scarlet v. Sabam). “Imposing filtering of all communications in order to look for one specific piece of content, using non-transparent algorithms, is likely to unduly restrict legal speech,” added Naranjo.
The Computer & Communications Industry Association (CCIA) is also concerned about threats to free speech, but primarily in terms of territorial reach. “The ruling essentially allows one country or region to decide what internet users around the world can say and what information they can access. What might be considered defamatory comments about someone in one country will likely be considered constitutional free speech in another,” said CCIA Europe Senior Manager Victoria de Posson.
She added that “few hosting platforms, especially startups, will have the resources to implement elaborate monitoring systems.”
Eline Chivot, Senior Policy Analyst at the Center for Data Innovation, agreed: “Today’s decision by the European Court of Justice sets a precedent that will have negative global implications for freedom of speech. European courts can now order internet companies to remove posts, photographs, and videos in other countries.”
“What is prohibited in one nation may not be in another, including within the European Union and between its member states. For example, laws on what is considered defamatory speech vary widely between countries. And this precedent will embolden other countries, including those with little respect for free speech, to make similar demands. This ruling opens a Pandora’s box that Europe must attempt to shut immediately,” she said.