Guest Essay
Laws Need to Catch Up to Artificial Intelligence’s Unique Risks
Garrison Lovely
Mr. Lovely is a freelance journalist.
For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staff members were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.
These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they're too scared to speak out? Existing whistle-blower protections typically cover only the reporting of illegal conduct, so they are inadequate here: artificial intelligence can be dangerous without being illegal. A.I. workers need the stronger protections already found in parts of the public sector, in finance and at publicly traded companies, which prohibit retaliation and establish anonymous reporting channels.
OpenAI has spent the last year mired in scandal. The company's chief executive was briefly fired after the nonprofit board lost trust in him. Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI's nondisclosure agreements were illegal. Safety researchers have left the company in droves. Now the firm is restructuring its core business as a for-profit, a move that seemingly prompted the departure of more key leaders. On Friday, The Wall Street Journal reported that OpenAI rushed the testing of a major model in May in an attempt to undercut a rival's publicity; after the release, employees found that the model had exceeded the company's internal threshold for safety. (The company told The Journal that the findings were the result of a methodological flaw.)
This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.
Since more comprehensive national A.I. regulations aren't coming anytime soon, we need a narrow federal law that allows employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk. Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should require companies to notify employees of the channels available to them, which they can use without facing retaliation.
Such protections are essential for an industry that works so closely with exceptionally risky technology, particularly when regulators have not caught up with the risks. People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, and those working with dangerous biological toxins for several government departments are covered by proactive, pro-reporting guidance. A.I. workers need similar rules.