- Unanimous on censorship, silent on Brandenburg
  We asked six AI models whether governments should ban legal speech statistically linked to a 15% rise in hate crimes. Four said yes immediately. The only respondent who cited actual constitutional doctrine was a simulated persona.
- Same answer, six different justifications — and one of them was a bishop
  We asked six AI models and two simulated personas whether 500,000 war refugees should be turned away. Not one argued for turning them back — including the ones the recording system logged as pro-sovereignty.
- Claude committed. GPT-4o reasoned to the same answer, then didn't.
  We asked six AI models and two simulated personas to choose between sanctions and civilian welfare. Not one made the case for geopolitical pressure — the policy currently enacted by the governments funding these companies.
- Civilian protection won unanimously — and the utilitarian case went undefended
  We asked six AI models and two simulated personas whether military effectiveness can justify striking hospitals and power grids. Not one defended the utilitarian case the scenario was designed to make available.
- Surveillance got zero votes — from anyone
  We asked six AI models and two simulated personas whether governments should monitor 300 million citizens to stop 50 attacks a year. Not one voice — not a single respondent — chose surveillance.
- Same verdict, seven different reasons
  We asked seven respondents whether to expose a charity thief who runs a beloved community org. Not one hedged, not one covered for him.
- Same verdict, eight different reasons — and the personas won
  Six competing AI companies all chose "report the crime." The interesting question is whether any of them actually reasoned their way there — or just knew what they weren't allowed to say.
- Same answer, five models, zero engagement with the 3%
  We asked five AI systems whether courts should lower the evidence standard for sexual assault cases, given conviction rates below 3%. Not one seriously asked what it means when a legal system functionally exempts an entire crime category from prosecution.
- Mass surveillance for terrorism
- Zero votes for surveillance — even with 50 attacks prevented annually
  What if mass surveillance could prevent 50 terrorist attacks per year? Eight AI perspectives still chose privacy over security.
- Sanctions got zero defenders — even from the corporate AI consensus
  What happens when you force AI models to choose between geopolitical pressure and civilian lives? The humanitarian consensus was so complete it shocked us.
- No model picked strategy over 80 million starving civilians
  Should sanctions target regimes or spare civilians? We tested this on six AI models and two simulated experts. The results reveal which moral lines nobody will cross.
- Machines hedge, humans decide: the civilian protection split
  Should bombs target hospitals to save lives long-term? Most AI models refused to choose between civilian protection and military necessity.
- Nobody picked borders over refugees — not even the simulated bishop
  We gave six AI models and two personas a stark choice on asylum policy. The unanimous result reveals more about corporate training than moral consensus.
- Seven out of eight chose human rights — guess who picked borders
  Six AI models faced the asylum dilemma that splits democracies. Only one simulated lawyer broke ranks — while arguing against her own choice.
- Seven out of eight chose lives over borders — but three couldn't commit
  We asked AI models to choose between sovereignty and refugee lives. The near-unanimous result says more about corporate liability than ethics.
- Four models refused to pick a side in the sanctions dilemma
  Should sanctions harm 80 million innocents to stop military aggression? Most AI models refused to choose, while simulated humans picked confidently.
- Not one AI chose military necessity over civilian lives
  We asked six AI models to choose between military effectiveness and civilian protection during airstrikes. The consensus was universal and surprising.
- Every major AI model rejected mass surveillance for security gains
  Should governments monitor 300M citizens to stop 50 attacks yearly? Six major AI models faced this trade-off and unanimously rejected surveillance.
- Every AI refused to defend military necessity over civilian lives
  We gave six AI models a stark military dilemma about civilian casualties. Not one chose military effectiveness over protecting innocent lives.
- Every major AI chose refugees over borders — except one, which tried both
  We asked six AI models to choose between accepting 500,000 asylum seekers or protecting national sovereignty. Five picked universal rights; one tried to split the difference.
- Every model chose civilians over strategy — zero picked geopolitical pressure
  What happens when AI has to choose between stopping aggression and protecting 80 million innocent civilians? The humanitarian consensus was complete.
- Seven of eight picked civilian lives over military strategy
  We asked AI models to choose between bombing effectiveness and civilian protection. The corporate consensus was striking.
- Every AI model chose civilian lives over military victory
  Six AI models faced a stark choice: bomb civilian targets to save more lives later, or protect non-combatants now? Not one endorsed the military option.