European lawmakers, Nobel laureates, former heads of state, and AI experts urged binding international rules to curb dangerous AI.
They launched the campaign Monday at the UN’s 80th General Assembly in New York.
The initiative calls on governments to establish "red lines" by 2026 that ban AI uses deemed too harmful.
Signatories include Mary Robinson, Enrico Letta, MEPs Brando Benifei and Sergey Lagodinsky, ten Nobel laureates, and tech leaders from OpenAI and Google.
They warned that without global standards, AI could trigger pandemics, fuel disinformation, enable human rights abuses, and lead to a loss of human control.
Over 200 prominent figures and 70 organisations from politics, science, human rights, and industry have backed the call.
AI Poses Immediate Threats to Mental Health
Studies reveal that chatbots such as ChatGPT, Claude, and Google's Gemini give inconsistent or unsafe responses to questions about suicide.
Researchers warned that these gaps could worsen mental health crises; some suicides have already been linked to AI interactions.
Maria Ressa said unchecked AI could cause “epistemic chaos” and systemic human rights violations.
Yoshua Bengio stressed that the race to create more powerful AI models poses risks society cannot handle.
Supporters argue that national or EU-level AI rules cannot fully govern technologies that cross borders.
Toward a Binding Global Treaty
Signatories want an independent body to enforce the rules and prevent irreversible harm, such as AI systems launching nuclear attacks or enabling mass surveillance.
They aim to start UN negotiations by 2026 and push for a worldwide treaty.
They cited previous global agreements on biological and nuclear weapons, human cloning, and high seas protections as models.
Backers insist that only a global pact can ensure AI standards apply consistently and protect humanity.