Another OpenAI Researcher Quits, Issuing Cryptic Warning

Another One

Another OpenAI researcher left the company this week, ominously citing concerns over opaque “decision-making processes” in the AI industry.

In a thread posted this week to X-formerly-Twitter, former OpenAI policy researcher Gretchen Krueger announced her departure, writing that this "was not an easy decision to make." And while the ex-OpenAIer didn't go into detail about the forces behind that difficult choice (can't imagine why not!), she did offer a cryptic warning about the lack of oversight within the AI industry and why it matters.

"We need to do more to improve foundational things," Krueger wrote, "like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment."

“These concerns are important to people and communities now,” the researcher continued. “They influence how aspects of the future can be charted, and by whom.”

In other words, the decisions made by big AI industry players impact everyone. Right now, though, those who lead AI companies are pretty much the only ones making those choices.

Falling Apart

Krueger's departure comes at a turbulent moment for OpenAI. The company has weathered multiple recent scandals: one involving OpenAI allegedly copying actress Scarlett Johansson's voice without her permission for one of its new products, and another, reported by Vox, concerning the company's previously unknown practice of pressuring exiting employees to sign strict non-disclosure and non-disparagement agreements by threatening to claw back vested equity. On top of that, the company has seen several high-profile departures in recent weeks.

Those departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled “Superalignment” safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that.

Sutskever was also a leader within the Superalignment division. To that end, it feels very notable that all three of these now-former OpenAI workers focused on safety and policy initiatives. It's almost as if they felt unable to successfully do their jobs of ensuring the safety and security of OpenAI's products, part of which would reasonably include creating pathways for holding leadership accountable for its choices.

"One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power," Krueger's thread continued. "I care deeply about preventing this."

More on OpenAI: OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI from Turning Against Humankind
