Future Tense

U.K. Is Coming Around to Google’s Side on Right to Be Forgotten

Compared with the rest of Europe, the U.K. seems positively reasonable on the right to be forgotten.

Photo by CARL COURT/AFP/Getty Images

Back in May, the European Court of Justice issued a landmark ruling on the controversial “right to be forgotten,” allowing EU citizens to request that search providers remove certain personal information from search engine results. Google, seeing itself in the crosshairs, started complying with information removal requests immediately. Two months later, journalists working for the BBC, the Guardian, and the Daily Mail—three of the United Kingdom’s most prominent media outlets—suddenly found their own work censored in search results as those removals took effect. At the time, some journalists suggested Google displayed almost deliberate clumsiness in complying with information removal requests. And in the United Kingdom, that clumsiness may have done the trick.

Last week, the European Union Committee of the U.K. House of Lords issued a sharp rebuke of the ECJ ruling: “The ‘right to be forgotten’ … is misguided in principle and unworkable in practice.” While the report is simply opinion, it certainly creates an unforgiving environment for future proposals regarding the issue. The House of Lords inquiry on the matter took place in early July, around the same time the British media’s outrage over the new policy implementation was peaking. The timing couldn’t have been better.

Much of the debate surrounding the ECJ ruling focuses on the definitions of “data controllers” and “data processors.” The EU Directive on Data Privacy uses broad language to define both, designating a data controller as “people or bodies that collect or manage personal data.” (By this definition, aren’t we all data controllers?) A data processor, in turn, merely processes data on the controller’s behalf. Cloud storage is one such example, where a provider stores data for a client. And, until May, most experts considered search engines to be processors. Search engines like Bing and Google use complex, automated algorithms to scan the Web before providing access via hyperlink to information created and maintained by another party.

Back in June 2013, ECJ Advocate General Niilo Jääskinen stated that search providers “cannot be considered as ‘controller’ of the processing of such personal data.” The recently issued House of Lords report also says that Google should be considered a data processor. But evidently, the ECJ disagreed. May’s landmark ruling classified search engines as “controllers,” mandating that they handle all information removal requests. This creates a significant financial burden for search providers—not all are as large as Google—which are now responsible for establishing the internal frameworks needed to handle hundreds of thousands of information removal requests. The move also forces search providers to play judge and jury, deciding for themselves which information is “inaccurate, inadequate, irrelevant or excessive.”

So if the U.K. makes an effort, could Google be officially reclassified as a data processor and thus absolved of the responsibility of enforcing the right to be forgotten? It’s unclear. The ECJ ruling from May applies to a specific case, Google Spain v AEPD, and there is no appeals process. It’s now up to smaller courts to interpret the ruling in similar cases. And the European Parliament may soon make things even more complicated, as proposed amendments would grant data subjects a “right to erasure” from third parties as well. (“Erasure” has become an increasingly popular term in lieu of “forgotten,” referencing the removal of information at a data subject’s request.) In the United States, protection of third-party providers (as outlined in Section 230 of the Communications Decency Act) is a basis for free speech on the Internet. Classifying third-party providers as data controllers would mean that sites relying primarily on user content—like YouTube, Facebook, or WordPress—could be required to take down user-submitted blog posts, videos, or other content when confronted with an information removal request from a data subject.

Forcing companies to decide which information is removed—at best a lazy solution by authorities—creates a system whereby governments could censor speech without consequences. A powerful politician (with a powerful legal team) would seemingly be more likely to secure an information removal than an ordinary citizen would be. By mandating that search providers process all erasure requests, governments maintain a degree of separation that absolves them from accusations of censorship. And the politicians who initiate information removal requests would be acting within their newfound rights, immune from questioning.

Imagine French politician Marine Le Pen having the ability to ask search providers to restrict access to activist websites she considers “inaccurate or excessive.” After all, given her party’s efforts to mainstream its image, intense focus on xenophobic comments from the past could be deemed excessive, and perhaps no longer relevant to current elections. Should her racist past be expunged from the search results of new French voters? With such overbroad terminology, neither a government nor a search provider could be trusted to remain objective when presented with an erasure request like this.

In typical fashion, the U.K. has distinguished itself from its continental peers, questioning a well-intentioned plan with potentially disastrous consequences. The House of Lords committee urged their peers to “persevere in their stated intention of ensuring that the regulation no longer includes any provision on the lines of the Commission’s ‘right to be forgotten’ or the European Parliament’s ‘right to erasure’.” In other words, the U.K. is not on board with any form of the law. And for Google, the rebuke offers a brief respite in a storm of controversy.