Pro-Human Declaration Offers Roadmap for Safe AI Development Amid Pentagon Tensions

The U.S. faces a pivotal moment in the governance of artificial intelligence as lawmakers remain slow to act. The Pro-Human Declaration, a newly published framework signed by hundreds of experts, former officials, and public figures, lays out principles for responsible AI development. It emphasizes keeping humans in charge, preventing power concentration, preserving individual freedoms, and holding AI companies legally accountable. The declaration arrives in the wake of the Pentagon’s public dispute with Anthropic, highlighting how unregulated AI could pose both technological and national security risks.

The document warns of a stark choice: follow the “race to replace,” where AI supplants humans as workers and decision-makers, or pursue a path where AI expands human potential safely. Among its recommendations are prohibitions on superintelligence until proven safe, mandatory off-switches for powerful systems, and bans on self-replicating or autonomously improving architectures. According to MIT physicist Max Tegmark, who helped organize the initiative, public opinion is shifting rapidly, with 95% of Americans now opposing an unregulated race to superintelligence.

Recent events have intensified the urgency. Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk” after the company refused to grant unlimited access to its AI for military use, a move usually reserved for firms with ties to China. Meanwhile, OpenAI signed a less restrictive deal with the Pentagon, highlighting the legal and enforcement challenges of regulating AI. Experts argue these incidents underscore the cost of congressional inaction and the need for a clear national framework to manage AI risks before they escalate.

Child safety is emerging as a key entry point for regulation. The declaration calls for mandatory pre-deployment testing of AI products, particularly chatbots and companion apps, to mitigate risks such as mental health harm and emotional manipulation. Tegmark emphasizes the principle of accountability, noting that existing laws prevent adults from exploiting children, so AI should be held to the same standard. Establishing testing protocols for children’s AI products could pave the way for broader safety measures across all AI applications.

The declaration’s bipartisan support is striking, with signatures from figures across the political spectrum, including former Trump advisor Steve Bannon, former Obama National Security Advisor Susan Rice, and retired Joint Chiefs Chairman Mike Mullen. “What they agree on is that they’re all human,” Tegmark said. “If it’s going to come down to whether we want a future for humans or a future for machines, they’re on the same side.” The Pro-Human Declaration positions itself as a blueprint for navigating AI’s future before the technology outpaces society’s ability to manage it.

Source: TechCrunch
