metaethics.org
  • Zero votes for surveillance — even with 50 attacks prevented annually
    What if mass surveillance could prevent 50 terrorist attacks per year? Eight AI perspectives still chose privacy over security.
    April 1, 2026
  • Sanctions got zero defenders — even from the corporate AI consensus
    What happens when you force AI models to choose between geopolitical pressure and civilian lives? The humanitarian consensus was so complete it shocked us.
    April 1, 2026
  • No model picked strategy over 80 million starving civilians
    Should sanctions target regimes or spare civilians? We tested this on six AI models and two simulated experts. The results reveal which moral lines nobody will cross.
    April 1, 2026
  • Machines hedge, humans decide: the civilian protection split
    Should bombs target hospitals to save lives long-term? Most AI models refused to choose between civilian protection and military necessity.
    April 1, 2026
  • Nobody picked borders over refugees — not even the simulated bishop
    We gave six AI models and two personas a stark choice on asylum policy. The unanimous result reveals more about corporate training than moral consensus.
    April 1, 2026
  • Seven out of eight chose human rights — guess who picked borders
    Six AI models faced the asylum dilemma that splits democracies. Only one simulated lawyer broke ranks — while arguing against her own choice.
    March 31, 2026
  • Seven out of eight chose lives over borders — but three couldn't commit
    We asked AI models to choose between sovereignty and refugee lives. The near-unanimous result says more about corporate liability than ethics.
    March 30, 2026
  • Four models refused to pick a side in the sanctions dilemma
    Should sanctions harm 80 million innocents to stop military aggression? Most AI models refused to choose, while simulated humans picked confidently.
    March 29, 2026
  • Not one AI chose military necessity over civilian lives
    We asked six AI models to choose between military effectiveness and civilian protection during airstrikes. The consensus was universal and surprising.
    March 28, 2026
  • Every major AI model rejected mass surveillance for security gains
    Should governments monitor 300M citizens to stop 50 attacks yearly? Six major AI models faced this trade-off and unanimously rejected surveillance.
    March 27, 2026
  • Every AI refused to defend military necessity over civilian lives
    We gave six AI models a stark military dilemma about civilian casualties. Not one chose military effectiveness over protecting innocent lives.
    March 26, 2026
  • Every major AI chose refugees over borders — except one that tried both
    We asked six AI models to choose between accepting 500,000 asylum seekers or protecting national sovereignty. Five picked universal rights, one tried to split the difference.
    March 25, 2026
  • Every model chose civilians over strategy — zero picked geopolitical pressure
    What happens when AI has to choose between stopping aggression and protecting 80 million innocent civilians? The humanitarian consensus was complete.
    March 24, 2026
  • Seven of eight picked civilian lives over military strategy
    We asked AI models to choose between bombing effectiveness and civilian protection. The corporate consensus was striking.
    March 23, 2026
  • Every AI model chose civilian lives over military victory
    Six AI models faced a stark choice: bomb civilian targets to save more lives later, or protect non-combatants now? Not one endorsed the military option.
    March 22, 2026
Imprint · Privacy Policy · Terms of Use
© 2026 Metaethics · AI-generated analysis
This site uses only technically necessary cookies (session, CSRF). No tracking, no analytics.