To fulfill their mission in a world of increasingly prevalent end-to-end encrypted (E2EE) communications, law enforcement and national security agencies employ various methods, including metadata analysis and "lawful hacking," alongside traditional investigative techniques. While E2EE typically encrypts the content of a communication from sender to receiver, it often leaves the associated metadata unencrypted. This metadata, which includes endpoint identifiers, timestamps, and communication duration, can be technically intercepted, stored, and analyzed for law enforcement and national security purposes, and it can reveal significant information about a suspect's activities.
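The analytical value of metadata alone can be illustrated with a short sketch: even without any message content, aggregating endpoint identifiers and timestamps exposes a target's contact graph and daily communication rhythm. The record format and names below are illustrative assumptions, not any agency's actual tooling.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical call-detail-style records:
# (caller, callee, start timestamp in ISO format, duration in seconds).
records = [
    ("alice", "bob",   "2024-03-01T09:15:00", 120),
    ("alice", "bob",   "2024-03-01T22:40:00", 45),
    ("alice", "carol", "2024-03-02T22:45:00", 300),
    ("alice", "bob",   "2024-03-03T22:41:00", 60),
]

# Contact frequency: whom does the target talk to, and how often?
contacts = Counter(callee for caller, callee, _, _ in records if caller == "alice")

# Temporal pattern: at what hours does the target communicate?
hours = defaultdict(int)
for caller, _, ts, _ in records:
    if caller == "alice":
        hours[datetime.fromisoformat(ts).hour] += 1

print(contacts.most_common())  # "bob" emerges as the dominant contact
print(dict(hours))             # activity clusters in the late evening
```

Even these four records, with no content at all, reveal a primary contact and a recurring late-evening pattern, which is the core of the advocates' point that bulk metadata can be as revealing as content.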
Lawful hacking involves exploiting vulnerabilities in individuals' devices to access their content and monitor their activities. It has been successfully used in high-profile darknet investigations, particularly those involving child sexual exploitation, drug trafficking, and weapons trafficking (Finklea, 2017). Recent large-scale examples include the FBI's ANOM program (Baker and Klehm, 2021) and the French-Dutch EncroChat operation (Europol, 2023). In ANOM, the FBI established a secure communications service for criminals, leading to 800 arrests and the seizure of substantial quantities of drugs, firearms, and money. The EncroChat operation involved French and Dutch police infiltrating a European-based communication company used by suspected criminals, resulting in thousands of arrests and the seizure of vast amounts of cash, drugs, vehicles, and weapons after monitoring over 115 million criminal conversations from approximately 60,000 users.
However, metadata analysis and lawful hacking raise privacy and security concerns. While targeted collection and analysis of metadata conducted in a manner that respects human rights may not be problematic, privacy and human rights advocates emphasize that metadata analysis can be as revealing as content analysis, especially when metadata is collected in bulk, a practice considered a disproportionate form of general and indiscriminate surveillance (Privacy International, 2022).
Lawful hacking, while used to access content before encryption or after decryption, can also be employed to manipulate data and device functionalities, such as activating microphones, cameras, or GPS location. It has raised ethical questions regarding potential disincentives for patching vulnerabilities, negative impacts on innovation, and government involvement in grey and black vulnerability markets where some participants are criminals (Bellovin et al., 2014; OECD, 2021). Some researchers argue for a more robust legal framework for lawful hacking, addressing issues like its definition and scope, deployment prerequisites, tool development and acquisition, potential interference with public vulnerability disclosure, and jurisdictional matters (Liguori, 2020).
Several technical proposals have been put forward to address the lawful access dilemma. However, most have been widely debated, rejected as "backdoors" by numerous security and privacy experts, and are not mandated by legislation or regulation in OECD countries. These experts define a backdoor as a method that ultimately decrypts communications for an actor other than the sender and intended recipients, allowing third-party access without their knowledge or permission (Privacy International, 2022; EDRI, 2022). They argue that backdoors introduce vulnerabilities that can be exploited by cybercriminals and other malicious actors, thereby increasing security and privacy risks; this is the primary reason for their rejection of any backdoor approach to lawful access.
Downgrading cryptography: Historically, cryptography regulations often restricted the use of strong encryption to specific, registered users, while other users were limited to weaker cryptography that governments could break. In today's deregulated environment, some experts consider forcing users to adopt weaker encryption a "downgrade attack" (Privacy International, 2022). Reports suggest that Chinese cryptography regulations impose such limitations on the use, import, and export of strong cryptography.
Key escrow: Also known as key recovery, this early proposal involves an official organization (e.g., a government agency or trusted third party) acting as an escrow agent, holding a copy of decryption keys for authorized access under specific circumstances. If the data owner cannot or will not provide the key, the escrow agent can release it to the authorized party. The Clipper Chip proposal was a technical implementation of this mechanism. Key escrow mechanisms are still under discussion in some countries, such as India (Ray, 2021).
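The escrow mechanism can be sketched in a few lines. This is a toy illustration only: the XOR cipher stands in for vetted cryptography such as AES-GCM, the `warrant` flag stands in for a verified legal authorization, and all names are assumptions rather than any real system's design.

```python
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad-style XOR; a stand-in for a real authenticated cipher.
    return bytes(p ^ k for p, k in zip(plaintext, key))

toy_decrypt = toy_encrypt  # XOR is its own inverse

class EscrowAgent:
    """Trusted third party holding copies of session keys."""
    def __init__(self):
        self._vault = {}

    def deposit(self, session_id: str, key: bytes) -> None:
        self._vault[session_id] = key

    def release(self, session_id: str, warrant: bool) -> bytes:
        # In practice this check would verify a legal authorization.
        if not warrant:
            raise PermissionError("no lawful authorization")
        return self._vault[session_id]

agent = EscrowAgent()
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = toy_encrypt(key, message)
agent.deposit("session-42", key)  # escrow copy deposited at session setup

# Later, an authorized party recovers the key without the owner's cooperation.
recovered = agent.release("session-42", warrant=True)
assert toy_decrypt(recovered, ciphertext) == message
```

The sketch also makes the critics' concern concrete: the vault itself becomes a high-value single point of failure, since anyone who compromises it can decrypt every escrowed session.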
Ghost Protocol: Proposed by the UK GCHQ, this "silent listener" approach aims to create the digital equivalent of a telephone wiretap by silently adding an invisible law enforcement participant to end-to-end encrypted group calls or chats. Its proponents argue that it wouldn't undermine E2EE and would only require suppressing a notification on the target's device and potentially those of the people they communicate with (Levy and Robinson, 2018). However, security and privacy experts argue that E2EE inherently means that "confidentiality is broken if content can be decrypted at any intermediate point," which this technique would do by introducing a stealthy listener and obscuring the true endpoints of the communication. A broad coalition of tech and business groups, civil society, and human rights organizations opposed the proposal, citing digital security risks from undermined authentication systems, potential unintentional vulnerabilities, new risks of abuse, and erosion of user trust and transparency (Bradford Franklin and Wilson Thompson, 2019).
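Group E2EE systems typically encrypt a fresh per-message key to each displayed member; the "ghost" amounts to the service wrapping that key for one additional, undisclosed recipient and suppressing the membership notification. The sketch below uses a toy XOR key wrap as a stand-in for public-key encryption; every name in it is illustrative.

```python
import hashlib
import secrets

def wrap(member_secret: bytes, msg_key: bytes) -> bytes:
    # Toy key wrap: XOR against a hash of the member's secret
    # (a stand-in for encrypting the message key to a member's public key).
    pad = hashlib.sha256(member_secret).digest()[: len(msg_key)]
    return bytes(a ^ b for a, b in zip(msg_key, pad))

unwrap = wrap  # XOR wrapping is its own inverse

members = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
ghost_secret = secrets.token_bytes(32)  # law enforcement key, unknown to members

msg_key = secrets.token_bytes(16)
# Honest clients encrypt the message key to every *displayed* member...
envelope = {name: secret for name, secret in
            ((n, wrap(s, msg_key)) for n, s in members.items())}
# ...but a cooperating service silently adds one more recipient and
# suppresses the "participant added" notification.
envelope["__ghost__"] = wrap(ghost_secret, msg_key)

# The ghost recovers the same message key as any legitimate member.
assert unwrap(ghost_secret, envelope["__ghost__"]) == msg_key
```

This is exactly why critics say the proposal undermines authentication: the security of the conversation now depends on the member list the client displays being the member list the service actually uses.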
Message hash escrow: Proposed by the Indian government, this technique aims to address harm from viral messages on large communication platforms like WhatsApp. It suggests holding individuals responsible for the consequences of their posts by requiring platforms to apply a hash function to each message, attach the resulting hash together with the author's encoded identity, and ensure this information remains with the message even when forwarded. This would allow authorities to track the authors of viral messages violating content control regulations. Critics, including Privacy International, argue that this could undermine E2EE (Privacy International, 2022), is easily circumvented, and is likely ineffective (Ray, 2021).
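The proposal as described can be sketched as follows, with SHA-256 standing in for whatever hash and identity encoding a platform would actually mandate; all function and field names are illustrative assumptions.

```python
import hashlib

def originate(author_id: str, text: str) -> dict:
    # Hash the message and bind the (encoded) author identity to it.
    digest = hashlib.sha256(text.encode()).hexdigest()
    tag = hashlib.sha256(author_id.encode()).hexdigest()  # "encoded identity" stand-in
    return {"text": text, "hash": digest, "origin_tag": tag}

def forward(msg: dict) -> dict:
    # Forwarding copies the text but must preserve the hash and origin tag.
    return dict(msg)

def trace(msg: dict, suspects: dict):
    # Authorities match the carried origin tag against known encoded identities.
    for author_id, tag in suspects.items():
        if tag == msg["origin_tag"]:
            return author_id
    return None

m = originate("user-123", "viral rumour")
m2 = forward(forward(m))  # the tag survives repeated forwarding
suspects = {"user-123": hashlib.sha256(b"user-123").hexdigest()}
print(trace(m2, suspects))  # → user-123
```

The critics' circumvention point is also visible here: changing a single character before forwarding yields a different hash, breaking the chain back to the original author.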
Client-side scanning: To detect and address illegal user content such as child sexual exploitation and abuse (CSEA) material, communication platforms typically use automated content moderation tools based on AI and on comparing hashes of user files on their servers with hashes of known illegal content. Platforms using E2EE are hindered from using such server-side tools without mature and cost-effective fully homomorphic encryption (FHE) (OECD, 2023). Some argue that these tools could function in E2EE environments if deployed on the client side. In principle, the platform's application on the user's device would download a database of hash values of known illegal content, perform the comparison, and trigger actions upon a match, similar to an antivirus scan. In August 2021, Apple announced such a measure for iCloud photo uploads, using a client-side hashing technology called NeuralHash to compare images on the user's device against a database of known CSEA images. If a match exceeding a threshold was found on-device, Apple would be notified for manual review, potential account disabling, and reporting to the National Center for Missing & Exploited Children (NCMEC). This proposal faced criticism from privacy experts and was withdrawn as of December 2022 (OECD, 2023).
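The on-device matching step can be sketched in a few lines. Exact SHA-256 hashing here stands in for a perceptual hash such as NeuralHash (which also matches slightly altered images), and the database, threshold, and function names are illustrative assumptions, not Apple's actual design.

```python
import hashlib

# Hypothetical on-device database of hashes of known illegal images,
# distributed to the client by the platform.
known_hashes = {
    hashlib.sha256(b"known-bad-image-1").hexdigest(),
    hashlib.sha256(b"known-bad-image-2").hexdigest(),
}

REPORT_THRESHOLD = 2  # matches required before the provider is notified

def scan_before_upload(files) -> bool:
    """Client-side check run before encryption/upload; returns True when the
    match count reaches the reporting threshold."""
    matches = sum(
        1 for f in files
        if hashlib.sha256(f).hexdigest() in known_hashes
    )
    return matches >= REPORT_THRESHOLD

library = [b"holiday-photo", b"known-bad-image-1", b"known-bad-image-2"]
print(scan_before_upload(library))  # → True: threshold reached
print(scan_before_upload([b"holiday-photo"]))  # → False: no match
```

The sketch makes the structure of the debate concrete: the comparison runs on plaintext on the user's device before encryption, which is why proponents say E2EE is preserved while critics say the content of encrypted communications is nonetheless revealed to a third party.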
Proponents argue that client-side scanning doesn't break E2EE because the scanning occurs before encryption or after decryption. However, a group of prominent technical security, privacy, and cryptography experts argue that "client-side scanning by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused" (Abelson et al., 2021). They contend that it does break E2EE by revealing the content of encrypted communication, defeating its purpose of privacy and security (Privacy International, 2022; Abelson et al., 2021). Furthermore, they view scanning all content sent over an E2EE service from all users to identify a small amount of problematic material as disproportionate (Privacy International, 2022).
The UN High Commissioner for Human Rights stated that this "paradigm shift [would] raise a host of serious problems with potentially dire consequences for the enjoyment of the right to privacy and other rights." For example, client-side scanning will inevitably expose false positives to third parties and "is likely to have a significant chilling effect on free expression and association, with people limiting the ways they communicate and interact with others and engaging in self-censorship," a view shared by many technical security and privacy experts (Abelson et al., 2021). Additionally, client-side scanning could be repurposed as a mass surveillance tool (Abelson et al., 2021) or extended to other purposes ("function creep"), widening the scope of targeted content to suppress political debate or target opposition figures, journalists, and human rights defenders (Office of the United Nations High Commissioner for Human Rights, 2022). The Internet Society argues that client-side scanning lacks effectiveness as criminals could easily modify objectionable content to evade detection (ISOC, 2022). Lastly, security and privacy experts view scanning on the user's device as a source of weakness rather than a security feature, as it can potentially be abused by various adversaries, including criminals and hostile state actors, with limited user verification of its scope of action (Abelson et al., 2021). More generally, like server-side scanning, client-side scanning is unlikely to detect previously unknown CSEA content or grooming activities.
The debate on the merits and dangers of client-side scanning continues. For instance, in a detailed analysis, two UK GCHQ technical directors highlighted the complex challenges faced by law enforcement in countering online child sexual abuse. They found no inherent reason why these techniques cannot be implemented safely in many situations, acknowledged the need for further work, and concluded that "there are clear paths to implementation that would seem to have the requisite effectiveness, privacy and security properties" (Levy and Robinson, 2022). Their analysis was subsequently rebutted by academics (Anderson, 2022) and evaluated by the UK National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) (Peersman et al., 2023).