A high-profile lawsuit by Elon Musk is drawing fresh attention to the safety practices at OpenAI, raising questions about whether the organization has drifted from its original mission. The legal challenge, currently being heard in a federal court in Oakland, centers on whether OpenAI’s transition into a for-profit structure has compromised its goal of ensuring artificial general intelligence (AGI) benefits humanity.
During testimony, former OpenAI employee and board member Rosie Campbell painted a picture of a company that gradually shifted from a research-driven culture to a product-focused business. Campbell, who worked on the AGI readiness team from 2021 until her departure in 2024, said safety discussions once dominated internal conversations but became less prominent as commercial priorities took center stage. Her concerns were reinforced by the disbandment of key safety teams, including the Superalignment team.
Campbell acknowledged that significant funding is necessary to develop advanced AI systems but warned that building powerful models without robust safety frameworks could undermine OpenAI’s founding principles. She pointed to an incident involving Microsoft, where a version of GPT-4 was deployed in India through the Bing search engine before undergoing review by OpenAI’s Deployment Safety Board. While she described the risk as limited, she stressed the importance of maintaining strict safety processes as AI capabilities grow.
The court also revisited internal tensions within OpenAI’s leadership, including the brief removal of CEO Sam Altman in 2023. Former board member Tasha McCauley testified that the board struggled with transparency issues, alleging that Altman withheld key information and, at times, misled members. Concerns ranged from undisclosed conflicts of interest to the surprise public launch of ChatGPT, which reportedly caught parts of the board off guard.
At the heart of Musk’s case is the argument that OpenAI’s evolution into a major commercial entity broke its original nonprofit-driven agreement. Legal experts supporting Musk argue that prioritizing profits over safety could have far-reaching implications, especially as AI becomes deeply embedded in global industries. The case has reignited calls for stronger government oversight, with critics warning that leaving such decisions in the hands of a single CEO could pose risks to the broader public interest.
Source: TechCrunch
