The European Commission’s 2022 proposal for a regulation to combat child sexual abuse material, commonly known as ChatControl, would oblige online services to detect, report, and remove prohibited content through client-side scanning. This talk examines ChatControl as a case study in how regulatory ambition can outpace technical reality. It argues that the proposal rests on an unverified assumption: that universal message inspection before encryption is both feasible and rights-compatible. By analyzing the policy’s technical architecture alongside its institutional motivations, the talk shows how cryptographic concepts are being reinterpreted as instruments of governance. End-to-end encryption, once a guarantee of private communication, is reframed as a conditional privilege contingent on compliance with state-defined scanning regimes. The talk situates this transformation within broader debates on proportionality, technological sovereignty, and the political use of “safety” as a justification for surveillance. Ultimately, it argues that the ChatControl controversy is not only about privacy or child protection but about the boundaries of legitimate scientific authority: what counts as secure, who decides, and under what political pressures.