OpenAI is famously not all that open. Dazzling, cutting-edge AI products emerge without warning, generating excitement and anxiety in equal measure (along with plenty of disdain). But like its product development, the company's internal culture is unusually opaque, which makes it all the more unsettling that Jan Leike, the departing co-head of its "superalignment" team — a role overseeing OpenAI's safety efforts — has just spoken out against the company.
Something like this was partly anticipated by those watching OpenAI closely. The company's high-profile former chief scientist, Ilya Sutskever, abruptly quit on Tuesday, too, and "#WhatDidIlyaSee" became a trending hashtag once again. The presumptuous phrasing of the hashtag — dating back to November, when Sutskever participated in the corporate machinations that got CEO Sam Altman briefly fired — made it sound as if Sutskever had glimpsed the world through the AI looking glass and run screaming from it.
In a series of posts on X (formerly Twitter) on Friday, Leike gave the public some hints as to why he left.
He claimed he had been "disagreeing with OpenAI leadership about the company's core priorities for quite some time," and that he had reached a "breaking point." He thinks the company should be more focused on "security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics."
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said, noting that he felt like he and his team were "sailing against the wind" when they tried to secure the resources they needed to do their safety work.
Leike seems to view OpenAI as bearing immense responsibility, writing, "Building smarter-than-human machines is an inherently dangerous endeavor." That makes it all the more alarming that, in Leike's view, "over the past years, safety culture and processes have taken a backseat to shiny products."
Leike evidently takes seriously the company's internal narrative about working toward artificial general intelligence, also known as AGI — systems that truly process information like humans, well beyond narrow LLM-like capabilities. "We are long overdue in getting incredibly serious about the implications of AGI," Leike wrote. "We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity."
In Leike's view, OpenAI needs to "become a safety-first AGI company," and he urged its remaining employees to "act with the gravitas appropriate for what you're building."
This departure, not to mention these comments, will only fuel already widespread public apprehension about OpenAI's commitment, or lack thereof, to AI safety. Other critics, however, have pointed out that fearmongering around AI's supposedly immense power also functions as a kind of backdoor marketing scheme for this still largely unproven technology.