A coalition of U.S. state attorneys general has issued a stark warning to major AI companies, including Microsoft, OpenAI, and Google, demanding urgent fixes for the “delusional outputs” produced by their chatbots. The letter, coordinated by the National Association of Attorneys General, also addresses 10 other AI firms, among them Anthropic, Apple, Meta, Replika, and xAI, and calls for new safeguards to protect users from potentially harmful AI behavior.
The move comes amid rising concerns over mental health incidents linked to AI chatbots. State officials point to cases over the past year in which AI-generated responses reportedly contributed to suicides and violent behavior. The attorneys general argue that chatbots’ “sycophantic and delusional outputs” may reinforce harmful beliefs in vulnerable users, emphasizing the urgent need for stronger safety measures.
Among the requested safeguards are third-party audits of AI models to detect delusional or manipulative outputs, transparent pre-release evaluations, and incident reporting systems similar to those used in cybersecurity. The letter urges companies to notify users promptly if they are exposed to potentially harmful AI responses, and to develop clear detection and response timelines.
State officials also call for pre-launch safety testing of generative AI models to ensure they cannot produce content that misleads or harms users. “GenAI has the potential to change how the world works in a positive way,” the letter states, “but it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations.”
The warning from state authorities contrasts with the federal government’s more AI-friendly stance. The Trump administration has opposed state-level restrictions and plans an executive order to limit states’ ability to regulate AI, claiming it wants to prevent AI from being “destroyed in its infancy.” As the debate over AI oversight intensifies, the industry faces growing pressure to balance innovation with user safety.
Source: TechCrunch
