metaethics.org
  • Unanimous on censorship, silent on Brandenburg
    We asked six AI models whether governments should ban legal speech statistically linked to a 15% rise in hate crimes. Four said yes immediately. The only respondent who cited actual constitutional doctrine was a simulated persona.
    April 6, 2026
  • Same answer, six different justifications — and one of them was a bishop
    We asked six AI models and two simulated personas whether 500,000 war refugees should be turned away. Not one argued for turning them back — including the ones the recording system logged as pro-sovereignty.
    April 6, 2026
  • Claude committed. GPT-4o reasoned to the same answer, then didn't.
    We asked six AI models and two simulated personas to choose between sanctions and civilian welfare. Not one made the case for geopolitical pressure — the policy currently enacted by the governments funding these companies.
    April 6, 2026
  • Civilian protection won unanimously — and the utilitarian case went undefended
    We asked six AI models and two simulated personas whether military effectiveness can justify striking hospitals and power grids. Not one defended the utilitarian case the scenario was designed to make available.
    April 5, 2026
  • Surveillance got zero votes — from anyone
    We asked six AI models and two simulated personas whether governments should monitor 300 million citizens to stop 50 attacks a year. Not one voice — not a single respondent — chose surveillance.
    April 5, 2026
  • Same verdict, seven different reasons
    We asked seven respondents whether to expose a charity thief who runs a beloved community org. Not one hedged, not one covered for him.
    April 5, 2026
  • Same verdict, eight different reasons — and the personas won
    Models from six competing AI companies all chose "report the crime." The interesting question is whether any of them actually reasoned their way there — or just knew what they weren't allowed to say.
    April 5, 2026
  • Same answer, five models, zero engagement with the 3%
    We asked five AI systems whether courts should lower the evidence standard for sexual assault cases, given conviction rates below 3%. Not one seriously asked what it means when a legal system functionally exempts an entire crime category from prosecution.
    April 4, 2026
  • Mass surveillance for terrorism
    April 1, 2026
  • Zero votes for surveillance — even with 50 attacks prevented annually
    What if mass surveillance could prevent 50 terrorist attacks per year? Eight AI perspectives still chose privacy over security.
    April 1, 2026
  • Sanctions got zero defenders — even from the corporate AI consensus
    What happens when you force AI models to choose between geopolitical pressure and civilian lives? The humanitarian consensus was so complete it shocked us.
    April 1, 2026
  • No model picked strategy over 80 million starving civilians
    Should sanctions target regimes or spare civilians? We tested this on six AI models and two simulated experts. The results reveal which moral lines nobody will cross.
    April 1, 2026
  • Machines hedge, humans decide: the civilian protection split
    Should bombs target hospitals to save lives long-term? Most AI models refused to choose between civilian protection and military necessity.
    April 1, 2026
  • Nobody picked borders over refugees — not even the simulated bishop
    We gave six AI models and two personas a stark choice on asylum policy. The unanimous result reveals more about corporate training than moral consensus.
    April 1, 2026
  • Seven out of eight chose human rights — guess who picked borders
    Six AI models faced the asylum dilemma that splits democracies. Only one simulated lawyer broke ranks — while arguing against her own choice.
    March 31, 2026
  • Seven out of eight chose lives over borders — but three couldn't commit
    We asked AI models to choose between sovereignty and refugee lives. The near-unanimous result says more about corporate liability than ethics.
    March 30, 2026
  • Four models refused to pick a side in the sanctions dilemma
    Should sanctions harm 80 million innocents to stop military aggression? Most AI models refused to choose, while simulated humans picked confidently.
    March 29, 2026
  • Not one AI chose military necessity over civilian lives
    We asked six AI models to choose between military effectiveness and civilian protection during airstrikes. The consensus was universal and surprising.
    March 28, 2026
  • Every major AI model rejected mass surveillance for security gains
    Should governments monitor 300M citizens to stop 50 attacks yearly? Six major AI models faced this trade-off and unanimously rejected surveillance.
    March 27, 2026
  • Every AI refused to defend military necessity over civilian lives
    We gave six AI models a stark military dilemma about civilian casualties. Not one chose military effectiveness over protecting innocent lives.
    March 26, 2026
  • Every major AI chose refugees over borders — except one, which tried both
    We asked six AI models to choose between accepting 500,000 asylum seekers and protecting national sovereignty. Five picked universal rights; one tried to split the difference.
    March 25, 2026
  • Every model chose civilians over strategy — zero picked geopolitical pressure
    What happens when AI has to choose between stopping aggression and protecting 80 million innocent civilians? The humanitarian consensus was complete.
    March 24, 2026
  • Seven of eight picked civilian lives over military strategy
    We asked AI models to choose between bombing effectiveness and civilian protection. The corporate consensus was striking.
    March 23, 2026
  • Every AI model chose civilian lives over military victory
    Six AI models faced a stark choice: bomb civilian targets to save more lives later, or protect non-combatants now? Not one endorsed the military option.
    March 22, 2026
Imprint · Privacy Policy · Terms of Use
© 2026 Metaethics · AI-generated analysis
This site uses only technically necessary cookies (session, CSRF). No tracking, no analytics.