New report illuminates why OpenAI board said Altman “was not consistently candid”
![New report illuminates why OpenAI board said Altman “was not consistently candid”](https://lemmy.world/pictrs/image/01302b97-2984-4736-ac01-fc9200d9e4c3.jpeg?format=jpg&thumbnail=256)
arstechnica.com
Insider report details clash over one board member's criticism in an academic paper.
Kyle Orland - 12/5/2023, 9:31 PM
This is the best summary I could come up with:
Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions.
In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."
She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."
At the same time, Duhigg's piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman "accountable" in order to fulfill its mission to "make sure AI benefits all of humanity," as one unnamed source put it.
"It's hard to say if the board members were more terrified of sentient computers or of Altman going rogue," Duhigg writes.
The piece also offers a behind-the-scenes view into Microsoft's three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board's moves "mind-bogglingly stupid."
The original article contains 414 words, the summary contains 215 words. Saved 48%. I'm a bot and I'm open source!
why repost the same article with the exact same title?
https://sopuli.xyz/post/6648766
Welcome to lemmy.
Because it's an entirely different instance. That's on sopuli, this is on Lemmy.world.
OK, taking Toner's approach, no one would ever release an AI because there isn't one already out there.

Don't blame Altman for "lashing out".
What a stupid take, and the board is idiotic for going along with it.
I...don't think that's what the referenced paper was saying. First of all, Toner didn't co-author the paper from her position as an OpenAI board member, but as a CSET director. Secondly, the paper didn't intend to prescribe behaviors to private sector tech companies, but rather investigate "[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence" by exploring "costly signals...as a policy lever."
The full quote:
Anthropic is being used here as an example of "private sector signaling," which could theoretically manifest in countless ways. Nothing in the text seems to indicate that OpenAI should have behaved exactly this same way, but the example is held as a successful contrast to OpenAI's allegedly failed use of the GPT-4 system card as a signal of OpenAI's commitment to safety.
Honestly, the paper seems really interesting to an AI layman like me and a critically important subject to explore: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.
Take your deliberate ignorance to reddit.