A whistleblower has claimed that OpenAI’s intention to turn a profit could push the artificial intelligence startup to eschew safety precautions.
Former OpenAI research engineer William Saunders told the Guardian he was concerned by rumours that the ChatGPT maker plans to restructure and break away from the control of its non-profit board.
Reports that OpenAI’s CEO, Sam Altman, would have a stake in the reorganised company also worried Saunders, who expressed his concerns to the US Senate earlier this month.
“My primary concern pertains to the implications for OpenAI’s safety decision-making governance,” he said. “If Sam Altman holds a significant equity stake and the non-profit board is no longer in control of these decisions, this creates more incentive to race and cut corners.”
OpenAI’s original mission and the risks of AGI development
Upon its founding, OpenAI was tasked by its charter with developing artificial general intelligence (AGI), or “systems that are generally smarter than humans,” for the benefit of “all of humanity.” However, Saunders and other academics and practitioners are concerned about the potential power of an AGI system, given the competitive race to develop such technology and the possibility that safety issues could be sidelined in the process.
In written evidence to the Senate, Saunders stated that he “lost faith” in OpenAI’s ability to make ethical decisions about artificial intelligence, which is why he left the company. Saunders worked on OpenAI’s now-dissolved superalignment team, which was responsible for ensuring that powerful AI systems adhere to human values and goals.
Potential shift to for-profit model and ethical concerns
Saunders has recently argued that a move to a for-profit organisation could compromise the goals of the current structure, under which OpenAI operates a profit-making corporation that caps the returns paid to investors and staff. Any profits above that cap flow back to the non-profit organisation for “the benefit of humanity.”
According to Saunders, if a company is solely focused on making money and, for example, develops technology that renders a large number of jobs obsolete, it would be under no obligation to return its profits to society.
“OpenAI was intended to give the remaining portion to the non-profit and only permit a limited profit for investors and employees,” he stated. If OpenAI’s technology led to widespread unemployment, the company would then contribute to society rather than keep the money for itself. The move to a for-profit company suggests this is no longer a priority.
OpenAI has been approached for comment. The company’s charter declares that it is “dedicated to doing the research required to make AGI safe.” OpenAI also recently announced the formation of an independent safety and security committee, from which Altman stepped down as a member.
OpenAI is considering restructuring as a public benefit corporation, a for-profit business with no profit cap and a stated commitment to benefiting society. Under that plan, the non-profit organisation would own stock in the new company, according to a Reuters report this week.
OpenAI, which declined to comment on the mechanics of the reorganisation, said the non-profit organisation would continue to exist.
OpenAI’s response and ongoing discussions about equity
In a statement released on Thursday, OpenAI’s chair, Bret Taylor, said that while the board had discussed awarding Altman a stake in the company, no specific figures had been considered.
There have been rumours that Altman would receive a 7% stake in OpenAI, which is seeking to raise $6.5 billion in capital in a round that could value the company at $150 billion.
“The board has discussed whether Sam receiving equity compensation would be beneficial to the company and our mission, but no decisions have been made or specific figures discussed,” Taylor added.
According to the tech news website The Information, Altman has called reports that he would take a 7% stake “ludicrous.”
(Tashia Bernardus)