OpenAI And Google DeepMind Employees Call For Greater Transparency And Protection For Whistleblowers
July 12, 2024

The advent of LLM-based AI tools in the open market has largely directed the global public’s attention away from the breakthrough technology under development at some of the world’s most advanced technology companies: ‘real’ artificial intelligence. The ability to arrange words representative of human thought is just one of the elements that go into synthetic sentience. True AI is a far more terrifying prospect than the average person may realise. As numerous experts, researchers, policymakers, government agencies, and corporate representatives have pointed out over the years, developing human-like consciousness outside of humanity could pose a significant threat to humanity itself. This is why it is important to question the integrity of the controls placed around the entire process. But what happens when there is no real way to ensure adequate systems are in place for asking difficult questions? And what of industrial profit motives, which may create a significant incentive to actively conceal developments that, if published, might prompt an immediate halt from those with the authority to impose one?

These questions have prompted a group of whistleblowers from prominent AI companies to publish an open letter calling for more transparency in their respective companies and the industry. The signatories include current and former employees of OpenAI and Google DeepMind. Titled ‘A Right to Warn about Advanced Artificial Intelligence’, the letter asks for the right of employees to disclose the risks surrounding AI development (exempting trade and competitive secrets) to the public, given the lack of adequate government oversight. It also discusses an inability to trust corporations that have financial incentives to maintain a competitive edge in the market at the expense of sufficiently addressing the associated risks. The letter further points out that the risks the signatories want to warn the public about do not constitute illegal activity, and that there is a clear need for better whistleblower-protection policies.

The letter reads:

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation…”  

Six of the thirteen signatories to the letter have chosen to remain anonymous.

The signatories clearly believe the risks are grave enough that withholding them under non-disclosure agreements would be a violation of their responsibility towards humanity. The letter warns that AI technologies can ‘further entrench existing inequalities’, ‘manipulate’ and ‘misinform’, and, should humans lose control, even have the potential to result in ‘human extinction’. And yet OpenAI executives, for example, would rather keep these matters under wraps. According to a Vox article, the company threatened to claw back the equity earned and held by departing employees to coerce them into signing non-disparagement agreements that would prevent them from ever criticising the company. They were even prohibited from mentioning the agreement’s existence. 

Indeed, there is every reason to believe that the signatories to the letter are correct in assuming that corporations cannot be trusted to have the safety of humanity at heart. OpenAI’s department dedicated to AI safety assessment and risk management was effectively dissolved in May this year after some of its most prominent members left the company. The remaining professionals were absorbed into other departments and sections of the company, and responsibility was taken over by a Safety and Security Committee that includes Sam Altman himself and several board members. The move is especially dubious because Altman was briefly fired by the board just last year, in part over his lack of transparency on safety issues. Speaking on ‘The TED AI Show’ in May, former OpenAI board member Helen Toner said:

“He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working.”

The letter, for its part, makes four specific requests of AI companies:

  1. For companies to refrain from entering into or enforcing ‘non-disparagement’ agreements, or retaliating against their violation. 
  2. For companies to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns with the company board, regulators, and appropriate independent organisations. 
  3. For companies to foster a culture of open criticism.
  4. For companies not to retaliate against current and former employees who share risk-related (confidential) information publicly after all other avenues have failed. 

Google DeepMind is yet to make a public statement on the open letter. In a statement, an OpenAI spokesperson said the company was “proud” of its track record of “providing the most capable and safest AI systems” and that it believed in its approach to addressing the related risks. 

(Theruni Liyanage)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.