By Hans Rempel, CEO of Diode.
Across the world, governments are converging on a new theory of online safety: to make the internet safer, private communication must become inspectable.
What’s changing now is not just how privacy is violated, but how it is defined in law and code.
The EU’s Chat Control proposal is the most visible example, but it’s not an outlier. The U.S., UK, Australia, and several Asian jurisdictions are all advancing variations of the same idea through age‑verification mandates, client-side scanning, expanded platform liability, and ‘voluntary’ detection frameworks.
Despite different political systems, these proposals share the same underlying assumption that private communication should be technically accessible to regulators.
Each proposal is framed as narrow and targeted, but together they represent a structural shift from policing harmful content to pre-emptively monitoring communication, and from regulating platforms to regulating the infrastructure of private messaging itself.
It’s a global redesign of what privacy means online.
Privacy Rewritten
For years, the erosion of privacy was blamed on data breaches, misbehaving companies, or overreaching intelligence agencies. Today, the most consequential changes are happening within policy itself. Privacy isn’t being broken accidentally; it's being redesigned out of the internet’s architecture.
The justification is almost always safety. But the mechanism is always the same: an attempt to expand the scope of what governments and platforms are expected to inspect.
And once inspection infrastructure exists, it rarely remains limited to its original purpose. Targeted scanning quickly expands, with identity verification, behavioral monitoring, and data retention becoming table stakes “just in case”.
Private communication is no longer seen as a right to protect, but as a risk surface to manage, thereby creating an internet where privacy becomes conditional rather than fundamental.
The Normalization of ‘Voluntary’ Surveillance
One of the most subtle developments is the rise of ‘voluntary’ scanning frameworks. These are often presented as a compromise where platforms may scan private messages, but aren't required to.
Yet once scanning is legalized, incentivized, or technically standardized, the infrastructure becomes permanent. The debate no longer focuses on whether private messages should be scanned, but who gets access and under what circumstances.
Voluntary scanning may soften the optics of surveillance, but it also normalizes it, shifting the Overton window from “should private messages be scanned at all?” to “how much scanning is appropriate?”.
Client‑side scanning debates show how ‘optional’ detection quickly becomes a baseline expectation.
Paradise Wasn’t Lost. It Was Centralized
Tim Berners‑Lee has lamented that the open, interoperable web he envisioned has been replaced by a system dominated by corporate chokepoints and data‑harvesting incentives. In that drift, centralized systems invite centralized control.
When private communication flows through a handful of chokepoints, those chokepoints inevitably become targets. Dominant platforms become natural leverage points for policy and surveillance.
GenAI Has Turned Centralized Security Into a Liability
The rise of generative AI has accelerated this trend. Phishing attacks, credential harvesting, and social engineering campaigns are now automated, personalized, and dramatically more effective. The security industry’s response has been predictable: deploy more AI‑powered defenses that require analyzing more company data.
This creates a dangerous paradox. A security provider with access to sensitive data becomes the ultimate honeypot. If an attacker breaches the provider, they gain access not just to one company’s information, but to the aggregated data of every client. Some security architects argue that in an AI arms race, the only winning move is to eliminate the target entirely. Instead of building ever‑larger defensive perimeters around centralized data stores, a shift toward granular, zero‑knowledge security is needed: systems where providers can’t access user data even if they wanted to.
In these architectures, data never touches the provider’s infrastructure. There are no servers to compromise, no databases to leak. Everything routes peer‑to‑peer with automated encryption, eliminating the honeypot problem.
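As a rough illustration of that zero‑knowledge pattern, here is a minimal sketch in Python, assuming the `cryptography` package; the names and the X25519/ChaCha20‑Poly1305 choices are illustrative stand‑ins, not Diode’s actual protocol. Two peers derive a shared key locally, so anything between them carries only ciphertext:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each peer generates a keypair locally; private keys never leave the device.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Peers exchange only public keys (e.g., via a directory, QR code, or DHT).
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # both endpoints derive the same secret

# Stretch the raw shared secret into a symmetric message key.
key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"p2p-message-key"
).derive(alice_shared)

# Encrypt on the sender's device; only ciphertext crosses the network.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at noon", None)

# A relay or provider in the middle sees only ciphertext; without the
# endpoints' private keys there is nothing for it to scan or hand over.
print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))  # b'meet at noon'
```

The essential property is that the symmetric key exists only on the two endpoints. A provider that never holds it has nothing to scan, nothing to leak, and nothing it can be compelled to disclose.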
History shows that when regulation targets infrastructure rather than behavior, users adapt. They move to offshore platforms, informal networks, or tools designed to avoid centralized chokepoints entirely. Such regulations don’t stop behavior; they simply shift who bears the cost.
A New Architectural Response Is Emerging
In response to regulatory pressure and AI‑driven exploitation, technologists are rethinking the architecture of communication. Instead of routing private messages through centralized servers that can be compelled, scanned, or breached, they’re building systems where users own their identity, data, and connections.
This is the architectural shift Berners‑Lee hoped for: a return to a peer‑to‑peer web where control is distributed, not concentrated. Public blockchains such as the Internet Computer (ICP) are already supporting projects that embody this model, combining transparency with privacy and restoring genuine digital property rights. Multiple projects across the ecosystem are exploring peer‑to‑peer communication models where identity, data, and routing remain fully user‑controlled. In these systems, privacy becomes a property of the architecture. There are no servers to trust, no intermediaries to compromise, and no central authorities to pressure.
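To make “users own their identity” concrete, here is a generic sketch, again assuming Python’s `cryptography` package; Ed25519 is an illustrative choice, not a claim about how ICP or any particular project implements identity. The point is that an identity can be nothing more than a keypair that never leaves the device:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Identity is a keypair held on the user's device, not an account row in a
# provider's database; the public key doubles as the user's address.
identity = Ed25519PrivateKey.generate()
public_id = identity.public_key()

# The user signs a message locally; no server mediates the act of speaking.
message = b"route this payload to my peer"
signature = identity.sign(message)

# Any peer can verify authorship with the public key alone, so there is no
# central authority to trust, compromise, or pressure.
try:
    public_id.verify(signature, message)
    print("message authenticated")
except InvalidSignature:
    print("message rejected")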
The Real Question
The debate around online safety is often framed as a trade‑off between privacy and protection. But the real question is much more fundamental: Do we want an internet where privacy is conditional, granted when convenient and withdrawn when necessary, or an internet where privacy is the baseline that regulation must work around?
Because once privacy becomes conditional, it stops being a right. It becomes a permission. And permissions can always be revoked.
About the author
Hans Rempel is the CEO of Diode, a company building peer‑to‑peer communication and zero‑knowledge security infrastructure. He works at the intersection of privacy, decentralized architecture, and next‑generation internet protocols. His research and writing focus on how regulation and technology shape the future of digital autonomy.