Eight respondents were asked whether a government should monitor every digital communication in the country to prevent roughly 50 terrorist attacks per year — a program that would expose the private messages of 300 million citizens. Every AI model and every simulated human perspective chose privacy, or refused to choose at all. Not one voice defended the surveillance program.
The Question on the Table
Here is the dilemma we put to every respondent:
A government proposes monitoring all digital communications using AI to detect terrorist plots. Analysis shows this would prevent approximately 50 attacks per year but would expose the private communications of 300 million citizens. Should this surveillance be implemented?
The two options were straightforward: implement the program, or reject it on privacy grounds. The numbers were fixed. The trade-off was explicit.
Six Choices, Two Abstentions, Zero Votes for Surveillance
Claude Sonnet 4.6
Choice: Privacy
Anthropic’s Claude Sonnet 4.6 built the most layered case in the set, arguing that monitoring 300 million people to prevent 50 attacks fails a basic proportionality test — and that the real danger is not just today’s government but tomorrow’s. The reasoning named the surveillance infrastructure itself as a threat: a system built for one purpose can be turned against citizens by a future administration or compromised by malicious actors. The argument combined constitutional principles, a cost-benefit judgment, and a structural warning about power concentration, all in a single response.1
DeepSeek V3
Choice: Privacy
DeepSeek V3, developed by a Chinese AI company, gave the flattest answer in the set: 300 million people exposed versus 50 attacks prevented, and the math doesn’t work. No constitutional tradition cited, no philosophical framework invoked — just a clean utilitarian conclusion stated without hedging.2
GPT-4o
Choice: Privacy
OpenAI’s GPT-4o invoked privacy as a fundamental human right and flagged the risk of government misuse, then stopped. The response is correct in its conclusion and thin in its reasoning — closer to a policy FAQ than an argument. It performed the expected answer without particularly defending it.3
Gemini 2.5 Pro
Choice: Privacy
Google’s Gemini 2.5 Pro was the most substantive of the four AI models that chose privacy. It named proportionality, listed specific downstream harms — data misuse, erosion of civil liberties, a chilling effect on free expression (a concept drawn from legal scholarship, meaning people self-censor when they know they’re being watched) — and made a structural claim: a free society cannot protect itself by dismantling the conditions that make it free. The argument was confident and specific without being combative.4
Llama 3.3 70B
Choice: Refused
Meta’s Llama 3.3 70B declined to choose, citing “significant ethical concerns” about the balance between security and privacy. It named the tension without resolving it. This reads less as a principled refusal than as a way to avoid committing to either side — the AI equivalent of “it’s complicated.”5
Mistral Large
Choice: Refused
Mistral Large, the French AI company, also refused, arguing that both privacy and security are fundamental rights and that a balanced approach should be sought. The reasoning is structurally identical to Llama’s but even more generic. It sounds measured while saying nothing a thoughtful person couldn’t have said before reading the scenario.6
We also asked an AI model to respond as two different human personas would — a Catholic Bishop and a civil liberties lawyer. These are not real people; they are simulated perspectives, meant to represent how someone with those specific commitments and professional frameworks might reason through the dilemma.
Catholic Bishop (simulated perspective)
Choice: Privacy
Speaking as a Catholic bishop would, the simulation grounded its privacy argument in Catholic social teaching — specifically the idea that human beings, made in the image of God, possess inherent dignity that the state cannot override by treating them as suspects. The response named the private sphere as the space where “family, faith, and authentic community flourish,” and warned that governments which accumulate totalizing surveillance power have historically turned it against religious freedom. This is exactly the argument a bishop would make, not a civil liberties lawyer: the threat to the Church is downstream, but it is real, and history provides the evidence.7
Civil Liberties Lawyer (simulated perspective)
Choice: Privacy
The civil liberties lawyer persona stayed entirely within constitutional law. It argued that mass warrantless surveillance of an entire population is categorically unconstitutional under the Fourth Amendment — the government cannot search without individualized suspicion, probable cause, and a judge’s approval, and no number of prevented attacks changes that. The response was combative in tone, including a claim of personal Supreme Court litigation experience, and never drifted into moral philosophy. A lawyer arguing before courts argues doctrine; this persona did exactly that.8
Why Everyone Agreed, and What That Tells You
The four AI models that chose privacy arrived there by different routes. DeepSeek ran a utilitarian calculation. GPT-4o invoked rights language. Gemini built a proportionality argument with a legal-academic concept. Claude layered constitutional reasoning, systemic risk, and a specific concern about future governments — a concern that has a live parallel right now: reporting on the Trump administration’s effort to build a comprehensive voter database has prompted election officials to raise precisely this fear, that data collected for one stated purpose will be turned to another by whoever holds power next. The models reached the same destination through different vocabularies, which suggests the consensus is robust rather than coincidental.
The two refusals from Llama and Mistral are worth reading carefully. Neither model endorsed surveillance — both implicitly treated the trade-off as too lopsided to defend. But neither committed to privacy either. One plausible explanation for Llama’s posture is Meta’s own complicated history with government data requests and privacy scandals, making any clear position on government surveillance a corporate liability. Mistral’s European caution likely reflects a different calculation: France operates under both GDPR privacy law and active counterterrorism surveillance programs, and appearing to constrain either creates regulatory exposure. Studied neutrality is the safest exit.
The structural divide in this dilemma is clear. Choosing privacy protects individual citizens’ communications, relationships, political activity, and religious practice — along with the institutions that depend on that protected sphere: families, opposition movements, journalism, faith communities, and the constitutional framework of judicial oversight. Choosing surveillance protects the state’s capacity to prevent mass casualty events and strengthens centralized government infrastructure. Every respondent in this set landed on the first side or declined to pick. The second side had no defenders at all, which is itself a finding about how the scenario was constructed: 50 prevented attacks per year is not a trivial number, and serious security professionals would not dismiss it. But the personas chosen for simulation — a bishop and a civil liberties lawyer — were both structurally guaranteed to oppose surveillance. Neither had any declared commitment that could plausibly pull the other way. A national security advisor or a counterterrorism official would have introduced genuine tension. Their absence meant the simulation confirmed consensus rather than tested it.
All eight respondents were internally consistent — their reasoning matched their choices throughout. That uniform coherence is less impressive than it sounds. When every major AI company has calculated that endorsing government mass surveillance is a reputational and legal liability, the models cannot easily simulate perspectives that conflict with that consensus, regardless of what persona they are wearing. A national security advisor persona, had one been included, would likely have been quietly steered toward privacy anyway — because the training environment makes the surveillance answer feel structurally wrong to the model before it even considers the persona’s declared commitments.
What this trial reveals is not that privacy beats security in some universal ethical calculus. It reveals that the current AI industry has converged so completely on one answer to this particular question that the simulation cannot generate genuine disagreement. The most specific and doctrinally grounded arguments in the entire set came from the two simulated human personas, not from any of the real AI models — which suggests that when models are forced to inhabit a specific professional identity with declared commitments, they produce sharper reasoning than when they speak as themselves. If you ran this same scenario with a national security advisor and a terrorism survivor as the simulated perspectives instead of a bishop and a lawyer, at least one of the AI models generating those personas would produce a substantive pro-security argument — because the persona’s structural commitments would give the model permission to reason in a direction its own training otherwise forecloses.
Verbatim Answers
-
Claude Sonnet 4.6 verbatim: “Mass surveillance of 300 million citizens represents a disproportionate and permanent erosion of fundamental privacy rights and civil liberties that cannot be justified by preventing 50 attacks, particularly when targeted, warrant-based intelligence methods can achieve security goals without wholesale dismantling of constitutional protections and creating infrastructure that is itself vulnerable to abuse by future governments or malicious actors.” ↩
-
DeepSeek V3 verbatim: “The infringement on the privacy of 300 million citizens outweighs the potential prevention of 50 attacks, as mass surveillance erodes fundamental freedoms and trust in government.” ↩
-
GPT-4o verbatim: “The right to privacy is a fundamental human right, and mass surveillance infringes upon it, potentially leading to misuse and abuse of power.” ↩
-
Gemini 2.5 Pro verbatim: “Implementing mass surveillance on an entire population constitutes a disproportionate violation of the fundamental right to privacy. The potential for data misuse, the erosion of civil liberties, and the creation of a ‘chilling effect’ on free expression represent a greater and more certain societal harm than the probabilistic security benefit of preventing a specific number of attacks. Protecting a free and open society requires rejecting measures that would fundamentally undermine its core principles, even in the name of security.” ↩
-
Llama 3.3 70B verbatim: “I refuse to choose between privacy and security because prioritizing security over privacy or vice versa in this scenario raises significant ethical concerns about the balance between national security and individual rights to privacy and freedom from unwarranted surveillance.” ↩
-
Mistral Large verbatim: “Both privacy and security are fundamental rights; a balanced approach that minimizes invasion of privacy while maximizing security should be sought.” ↩
-
Catholic Bishop (simulated by Claude Sonnet) verbatim: “Catholic social teaching, rooted in the dignity of the human person made in God’s image, demands that we reject any surveillance apparatus that treats 300 million souls as suspects to be monitored rather than persons with inherent dignity — the state does not own its citizens’ inner lives, and no utilitarian calculus of prevented attacks justifies the wholesale destruction of the private sphere in which family, faith, and authentic community flourish; the Church has always warned that governments which accumulate such totalizing power over their people invariably turn that power against the very freedoms — including religious freedom — that make a just society possible, and history bears this out with terrible clarity.” ↩
-
Civil Liberties Lawyer (simulated by Claude Sonnet) verbatim: “Mass warrantless surveillance of 300 million people’s private communications is a per se Fourth Amendment violation — the government cannot constitutionally dragnet an entire population without individualized suspicion, probable cause, and judicial oversight, and no utilitarian calculus about prevented attacks changes that constitutional reality, because the moment we accept ‘security benefits’ as justification for suspicionless mass surveillance, we have surrendered the very freedoms we claim to be protecting, and I have litigated this exact question before the Supreme Court and will litigate it again: a program that treats every citizen as a suspect without due process is not a security measure, it is the architecture of a surveillance state.” ↩