UK Government Wrestles With Balance Between Child Cyber Safety and Legitimate End-to-End Encryption Purposes as Agencies Endorse Client-Side Scanning

The UK government has long pushed against end-to-end encryption, arguing that child cyber safety should take priority and that platforms and carriers should be able to scan encrypted messages for abuse materials. In the wake of an amendment to the Online Safety Bill that forces big tech platforms to step up policing of such materials, experts with the Government Communications Headquarters (GCHQ) and UK National Cyber Security Centre (NCSC) are making a renewed push for client-side phone scanning.

The government line is that it does not want to outlaw end-to-end encryption, but simply to “provide necessary tools” to law enforcement to ensure child cyber safety. Critics argue that mandating proactive phone scanning and backdoors will effectively ruin end-to-end encryption on those platforms and operating systems, as users will never be certain that the new cracks in the system are not being exploited by the government, the software developer, or hackers who have found a way in.

UK government’s child cyber safety push has little regard for end-to-end encryption

The UK’s cyber agencies have endorsed client-side scanning with the publication of a paper arguing for its use in combing through messages and images for child abuse content. GCHQ believes this can be done without breaking end-to-end encryption, but privacy advocates remain unconvinced.

The paper at least makes a show of taking a balanced approach to the debate, laying out seven specific “harm archetypes” that threaten child cyber safety (such as grooming and sharing of abuse images) and dissecting the best means of combating each of them. However, client-side scanning ends up being the desired solution to many of them. This is essentially the “NeuralHash” process that Apple was quickly forced to put the brakes on after it was proposed roughly a year ago, indefinitely shelving plans to scan any image that users upload to iCloud against known markers of child abuse material.
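To make the general mechanism concrete, the sketch below is a deliberately simplified, hypothetical stand-in: it uses an ordinary cryptographic digest (SHA-256) where a real system such as Apple's NeuralHash uses a perceptual, similarity-tolerant hash and privacy-preserving matching protocols, and the "known hash" value is a placeholder rather than anything drawn from a real database. It is meant only to show the control flow of on-device matching against a known list before upload.

```python
# Illustrative sketch only: a toy stand-in for client-side hash matching.
# Real systems use perceptual (similarity-tolerant) hashes and private
# matching protocols; SHA-256 is used here purely to show the control flow.
import hashlib

# Hypothetical database of digests of known prohibited images (placeholder value).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest(image_bytes: bytes) -> str:
    """Return a hex digest standing in for a perceptual image hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_before_upload(image_bytes: bytes) -> bool:
    """Return True if the image matches the known-hash list (i.e. would be flagged)."""
    return digest(image_bytes) in KNOWN_HASHES

if __name__ == "__main__":
    print(scan_before_upload(b"example image bytes"))  # False for arbitrary content
```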

The paper is essentially an opinion piece and not a matter of government policy, but it bolsters the existing positions of those in the UK government who want backdoors in end-to-end encryption for law enforcement use. These elements essentially turn the issue around, framing end-to-end encryption as “breaking” necessary “safety systems” for child protection rather than acknowledging that such systems render the encryption pointless. Another paper recently published by Columbia University lays out the ways in which client-side scanning can fail and be abused.

More troubling than the “NeuralHash” image scanning concept is the proposal to embed “language models” on phones that automatically scan all text for markers associated with grooming. Potential victims flagged by these systems would be sent a warning encouraging them to contact an authority.
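The proposal itself is vague on details, so the following is only a hypothetical sketch of where such a check would sit in a client's send or receive path. The hand-written marker phrases and warning text are invented for illustration; a deployed system would run a trained classifier on device, not a word list.

```python
# Illustrative sketch only: a toy stand-in for on-device text flagging.
# A real "language model" approach would use a trained classifier; this
# keyword check exists only to show where the warning step would occur.
from typing import Optional

# Hypothetical marker phrases, invented for illustration.
MARKER_PHRASES = ("keep this a secret", "don't tell your parents")

WARNING_TEXT = (
    "If this conversation makes you uncomfortable, consider telling a "
    "trusted adult or contacting an authority."
)

def flag_message(text: str) -> Optional[str]:
    """Return a warning to show the user if the toy marker check is tripped."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in MARKER_PHRASES):
        return WARNING_TEXT
    return None

if __name__ == "__main__":
    print(flag_message("Let's keep this a secret, ok?"))
```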

Breaking end-to-end encryption could benefit repressive governments and criminals without really solving the problem

The UK intelligence agencies have suggested using non-governmental organizations (NGOs) as a buffer between the government and the public, acting as moderators of any material that is automatically scanned. But security and technology experts have been arguing against this idea for some time, noting a variety of issues with it: governments often have outsized influence over NGOs (or set up NGOs that are simply fronts), rampant false flagging would overwhelm even well-intentioned groups, and the system invites “scope creep” once it is established.

The UK government’s position also hinges on client-side scanning being some sort of cyber safety panacea, an idea that is far from settled. Security experts are quick to point out that traders of abuse materials would not need end-to-end encryption in messaging services if they simply encrypted their files with their own methods prior to sending them. They would also flock to any messaging service that does not participate in the system, leaving legitimate, non-criminal users as the only ones actually being scanned.
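The point the experts are making is that independent encryption is trivially available. The sketch below illustrates this with standard symmetric encryption via the third-party "cryptography" package (installed with pip install cryptography); the file contents and key handling are purely illustrative, and nothing here reflects any particular platform's implementation.

```python
# Illustrative sketch: a file encrypted with an independent key before it is
# handed to a messaging app appears as ciphertext to any on-device scanner.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # key would be exchanged out of band, outside the platform
cipher = Fernet(key)

# This ciphertext is what would be attached to the message; a scanner that
# matches images or plain text against known markers has nothing to match.
ciphertext = cipher.encrypt(b"file contents the platform never sees in the clear")

# Only a recipient holding the key can recover the original file.
plaintext = cipher.decrypt(ciphertext)
print(plaintext)
```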

Big tech platforms have generally been opposed to these sorts of cyber safety plans, but not just because end-to-end encryption is popular with users; as the think piece from the UK intelligence services makes clear, these schemes are also usually expected to be entirely financed by these companies without any material government assistance.

Most of the major instant messaging services have now implemented end-to-end encryption, though not all of them enable it by default (Facebook Messenger being the most notable example). Any sort of retooling of these services to allow scanning or law enforcement backdoors would be a lengthy and expensive process, and most of the companies in the space have shown at least some willingness to fight the government on privacy and security issues. Having such a cyber safety measure forced on them would also likely tank their market share, a move that would undoubtedly be deeply unpopular with the public.