OpenAI, Google DeepMind Employees Warn of AI Risks, Demand Better Whistleblower Protection Policies


OpenAI and Google DeepMind are among the leading technology companies at the forefront of building artificial intelligence (AI) systems. However, several current and former employees of these organizations have signed an open letter claiming that there is little or no oversight of how these systems are built and that too little attention is being paid to the significant risks the technology poses. The open letter is endorsed by two of the three "godfathers" of AI, Geoffrey Hinton and Yoshua Bengio, and seeks better whistleblower protection policies from the signatories' employers.

OpenAI and Google DeepMind employees demand right to warn about AI

The open letter was written by current and former employees of leading AI companies who believe in AI's potential to deliver unprecedented benefits to humanity. It also points to the risks posed by the technology, which include the entrenchment of social inequalities, the spread of misinformation and manipulation, and even the loss of control over AI systems, which could lead to human extinction.

The open letter highlights that the self-governance structures implemented by these tech giants are ineffective at ensuring scrutiny of these risks. It also claims that "strong financial incentives" further encourage companies to overlook the potential harm AI systems can cause.

Claiming that AI companies are already aware of the capabilities, limitations, and risk levels of different types of AI harm, the open letter questions their willingness to take remedial action. "Currently they only have weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily," it says.

The open letter makes four demands of the signatories' employers. First, the employees want companies not to enter into or enforce any agreements that prohibit criticism over risk-related concerns. Second, they call for an anonymous, verifiable process through which current and former employees can raise risk-related concerns with the company's board, regulators, and an appropriate independent organization.

The employees also urge the organizations to foster a culture of open criticism. Finally, the open letter emphasizes that employers should not retaliate against current and former employees who publicly share confidential risk-related information after other processes have failed.

A total of 13 former and current employees of OpenAI and Google DeepMind have signed the letter. Apart from the two "godfathers" of AI, British computer scientist Stuart Russell has also endorsed the move.

A former OpenAI employee talks about the risks of AI

One of the former OpenAI employees who signed the open letter, Daniel Kokotajlo, also made a series of posts on X (formerly known as Twitter) describing his experience at the company and the risks of AI. He claimed that when he resigned, he was asked to sign a non-disparagement clause to prevent him from saying anything critical of the company. He also claimed that the company threatened to cancel his vested equity if he refused to sign the agreement.

Kokotajlo stated that the neural networks powering AI systems are growing rapidly thanks to the large data sets being fed into them, and added that there are no adequate measures in place to control the risks.

"There is much we don't understand about how these systems work and whether they will remain aligned with human interests as they get smarter and possibly surpass human-level intelligence in all areas," he added.

Notably, OpenAI has been developing Model Spec, a document through which it aims to better guide the company in building ethical AI technology. It has also recently created a Safety and Security Committee. Kokotajlo applauded these commitments in one of his posts.
