OpenAI Employees Forced to Sign NDA Preventing Them From Ever Criticizing Company
That seems harsh.
Cone of Silence
ChatGPT creator OpenAI might have “open” in the name, but its business practices seem diametrically opposed to the idea of open dialogue.
Take this fascinating scoop from Vox, which pulls back the curtain on the restrictive nondisclosure agreement (NDA) that departing employees at the Sam Altman-helmed company are forced to sign to retain their vested equity. Here's what Vox's Kelsey Piper wrote of the legal documents:
It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
Signature Flourish
How egregious the NDA is may depend on your industry and your view of employees' rights. But what's certain is that it flies directly in the face of the "open" in OpenAI's name, as well as much of the company's rhetoric about what it frames as the responsible and transparent development of advanced AI.
For its part, OpenAI issued a cryptic denial after Vox published its story, one that seems to contradict what Kokotajlo has said about having to give up equity when he left.
“We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit,” it said. When Vox asked if that was a policy change, OpenAI replied only that the statement “reflects reality.”
It's possible to imagine a world in which the development of AI was publicly funded and guided by universities instead of being pushed forward by impulsive, profit-seeking corporations. But that's not the timeline we've ended up in — and how that reality influences the outcome of AI research is anyone's guess.
More on OpenAI: OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI From Turning Against Humankind