Europe's Online Surveillance Laws Face New Headwinds
Online surveillance laws in the UK and EU face renewed resistance as they wind their way through the legislative process.
The UK government has all but capitulated on plans to oblige platforms such as WhatsApp and Signal to monitor their users' private chat messages. Popularly referred to as the 'spy clause', this provision is part of the Online Safety Bill, a 2021 legislative initiative intended to make the UK the "safest place in the world to be online".
The tacit acknowledgement that monitoring encrypted messages is technically untenable came from Lord Parkinson of Whitley Bay, one of the Online Safety Bill's co-sponsors:
Let me be clear: there is no intention by the Government to weaken the encryption technology used by platforms, and we have built strong safeguards into the Bill to ensure that users’ privacy is protected. While the safety duties apply regardless of design, the Bill is clear that Ofcom cannot require companies to use proactive technology on private communications in order to comply with these duties.
Some tech platforms (a number of which threatened to depart the UK market in protest of the spy clause) rushed to declare Lord Parkinson's admission a victory. However, the spy clause remains part of the bill and will likely be passed into law.
Monitoring end-to-end encrypted (E2EE) communication without compromising security is a formidable technical challenge. This is because E2EE messages are cryptographically secured so that they are visible only to the sender and the recipient. Scanning the content of these messages is only possible using invasive techniques such as client-side scanning - essentially a form of spyware. Late last year, Apple announced it was giving up on its efforts to scan encrypted content on iCloud and the iPhone using its sophisticated NeuralHash algorithm, citing user privacy concerns.
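To see why, consider how E2EE works in practice. The minimal sketch below uses the PyNaCl library purely for illustration (real messengers such as Signal layer a more elaborate double-ratchet protocol on top of similar primitives): the relaying platform only ever handles ciphertext and holds no key capable of opening it.

```python
# Minimal E2EE sketch using PyNaCl - illustrative only, not how any
# particular messenger is actually implemented.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; the private half never leaves the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays only this ciphertext - it has no key to open it.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Any scanning obligation therefore has to happen on the device itself, before encryption - which is precisely what makes client-side scanning so invasive.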
Yet some UK MPs seem to believe that where Apple, one of the world's foremost technology companies, has failed, others may yet succeed. Paul Scully, the minister responsible for tech and the digital economy, pointed out that Ofcom (the Office of Communications - the UK's communications regulator) will still be able “to direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content — which we know can be developed.” In other words, it appears that lawmakers are merely biding their time until the technology matures.
The Online Safety Bill attempts to broadly regulate how tech platforms combat "harmful content". The bill's two sponsors are both Conservative politicians, and earlier drafts betray its puritan origins. For example, provisions related to "legal but harmful" adult content (content deemed "offensive" but that does not constitute a legal infraction) were stripped from the bill late last year. Had they remained, tech platforms would have been tasked with implementing measures to prevent users from being exposed to content about "topics such as self-harm and eating disorders, as well as misogynistic posts." Critics decried the proposal's broad formulation (offensive to whom?) and suggested that it would incentivize platforms to "overblock" content, possibly having a chilling effect on freedom of speech online.
UK Culture Secretary Michelle Donelan, the Online Safety Bill's other co-sponsor alongside Lord Parkinson, claims that once the bill becomes law, "[t]he onus for keeping young people safe online will sit squarely on the tech companies' shoulders."
While UK politicians have been busy hammering out the 'how' of regulating harmful content online, they have neglected to explore the 'why'. Is protecting users from such content really a responsibility best situated with tech platforms? After all, these typically profit-driven companies will only ever do the bare minimum required to comply with the law. This is not to say that platforms should bear no responsibility for the content they purvey and monetize, but it is clearly the consumers of this content who have more at stake here.
The idea that government should intervene on the supply side to prevent people from coming to harm is vaguely reminiscent of the US War on Drugs. Now widely regarded as a failure, the War on Drugs attempted to stymie the illegal drug trade by taking the fight to drug dealers and manufacturers while doing comparatively little to address demand or otherwise tackle the root causes of drug abuse. This yielded predictable results. Supply shifted away from the regions under siege (South and Central America at first) and skyrocketing prices made the drug trade more lucrative than ever. The Mexican cartels' rise to power since the early aughts is largely a result of these interventions. Meanwhile, studies conducted as early as the 1990s asserted that demand-side methods such as drug treatment were 23 times more effective than the supply-side war on drugs.
As with the War on Drugs, the Online Safety Bill woefully neglects the other half of the equation when it comes to protecting people from harmful content online. Where are the proposals for demand-side intervention? For example, education programs to help children identify and extricate themselves from potentially harmful situations? Or public information campaigns to raise awareness of misinformation?
EU Responds to Criticism of Its Chat Control Proposal
In early July, Professors Carmela Troncoso and Bart Preneel published a scathing critique of the EU's proposed Child Sexual Abuse Regulation (CSAR) - the continental analogue to the Online Safety Bill. Like the Online Safety Bill, the CSAR proposal would require tech platforms to monitor encrypted conversations on their networks. European privacy campaigners refer to this element of the legislative proposal as 'chat control'.
Troncoso and Preneel's article highlighted the absence of viable technical options for monitoring encrypted channels:
The problem is that the underlying technologies are not mature and highly intrusive; moreover, we do not see a realistic path to substantially improve them in the next decades.
Ylva Johansson, the EU commissioner in charge of the CSA Regulation, responded to these criticisms by contrasting the article with another one penned by a sex abuse survivor. The author of the latter is the former Head of Crimes Against Children at Interpol, who was horrifically assaulted by his grandfather as a boy.
Without making light of this account, it appears to have been strategically included by Commissioner Johansson to serve as a distraction. It raises the stakes by highlighting the human dimension of the CSAR proposal without concretely addressing the criticisms leveled at it.
Most importantly, Commissioner Johansson's response fails to address the matter of inadequate technology raised by the academics. Instead, she resorts to absolutes, claiming that if the CSAR proposal is not taken forward, all detection of harmful content will stop:
Once the temporary legislation to allow companies to detect material lapses next year, detection will be rendered legally impossible. Remember, that is detection that has been practised effectively for more than ten years and has been instrumental in rescuing children from abuse.
Commissioner Johansson neglects to mention that the detection she's alluding to did not happen on encrypted networks, because scanning them isn't technically feasible.
Indeed, this is the crux of the problem. Legally mandated monitoring of encrypted content would force platforms to either deactivate encryption or introduce some form of client-side device scanning.
Encryption is the only way to guarantee truly private communication over the internet. Disabling or weakening encryption will make the internet less secure for everyone. Furthermore, this is also a markedly anti-consumer strategy - an unusual move for the EU, which typically prides itself on the "high levels of product safety and consumer protection" afforded by the Single Market.
Client-side scanning is prone to errors, false positives and function creep - that is, the distinct possibility that the software will be used for something beyond its intended purpose. With decades to go before these solutions might reach maturity, it seems imprudent for the EU to impose them on platforms now.
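A rough sketch of how such scanning would slot into a messaging client helps illustrate the concern. The hash list and function below are hypothetical, and plain SHA-256 stands in for the perceptual hashes (like NeuralHash) that real proposals envision:

```python
# Hypothetical client-side scanning flow, heavily simplified. Real systems
# use perceptual hashes that survive re-encoding; SHA-256 is a stand-in.
import hashlib

# An opaque hash list pushed to every device. Users cannot audit its
# contents - the root of the function-creep concern: nothing technical
# restricts the list to child sexual abuse material.
BLOCKED_HASHES = {
    # SHA-256 of b"test", standing in for a real entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_before_send(payload: bytes) -> bool:
    """Return True if the payload matches the list (and would be
    reported), False if it proceeds to encryption as normal."""
    return hashlib.sha256(payload).hexdigest() in BLOCKED_HASHES

print(scan_before_send(b"test"))   # True - flagged before encryption
print(scan_before_send(b"hello"))  # False - sent as normal
```

Note that the scan runs on the plaintext, on the user's own device, before any encryption takes place - which is why critics characterize it as spyware rather than a privacy-preserving compromise.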
In light of the recent developments in the UK, it's now time for the EU to follow suit and remove the provisions related to chat control from the CSAR. A principled stand based on facts would recognize that privacy empowers while surveillance weakens. It would seek to make encryption sacrosanct.