OpenAI disbands team devoted to artificial intelligence risks
OpenAI, the prominent artificial intelligence research company, confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence. According to the San Francisco-based firm, the so-called "superalignment" group was dissolved weeks ago, with its members being integrated into other projects and research efforts.
This development comes as advanced AI technology like OpenAI's own ChatGPT model faces increased scrutiny from regulators and growing public concern over its potential risks. This week, OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the company.
In a post on X, formerly Twitter, Leike warned that OpenAI must become a "safety-first AGI (artificial general intelligence) company," calling on all employees to "act with the gravitas" warranted by the technology they are building.
OpenAI CEO Sam Altman responded to Leike's post, thanking him for his work at the company and acknowledging that there is "a lot more to do" in ensuring the safe development of advanced AI systems. Altman promised to provide more information on this topic in the coming days.
Sutskever, who served as OpenAI's chief scientist and sat on the board that briefly ousted Altman last year, expressed confidence that the company will "build AGI that is both safe and beneficial." He described OpenAI's trajectory as "nothing short of miraculous." However, Sutskever's departure from the company, along with that of Leike, raises concerns about OpenAI's commitment to responsible AI development.
The dismantling of OpenAI's "superalignment" team comes as the company has recently released an even more advanced and human-like version of the AI technology powering ChatGPT, making it freely available to all users. Altman has previously cited the AI-powered virtual assistant in the movie "Her" as an inspiration for the future of AI interactions.
This shift in focus, away from the team dedicated to mitigating long-term AI risks, is striking given Sutskever's own warnings. In a recent TED talk, he predicted a future where "digital brains will become as good and even better than our own," and cautioned that the advent of artificial general intelligence (AGI) "will have a dramatic impact on every area of life."
The departure of key figures from OpenAI's safety-focused team, coupled with the company's rapid release of increasingly powerful AI models, has heightened concerns among experts and the public about unintended consequences, and about whether OpenAI will sustain a robust commitment to responsible AI governance.