What happened to OpenAI’s long-term AI risk team?

gedaliyah@lemmy.worldmod to News@lemmy.world – 81 points –
arstechnica.com

Former team members have either resigned or been absorbed into other research groups.

  1. Misalignment is a huge problem in any black-box system, not just in AGIs.
  2. What would it look like for us to be close to AGI? I have doubts that we're close, but it seems at least plausible.

Any sort of intelligence. LLMs are not intelligent, and we haven't created intelligence yet.

Should open source AI be condemned and possibly outright banned when (if) these big tech companies achieve alignment? Or will alignment be used as a pretext to hinder open source — the big players could claim it can be hacked out of open models, and on that basis ban the competition.

I can’t believe I’m aligned with Meta here, but Yann LeCun is on the right side of history in releasing these models for free. Giving everyone the ability to be competitive with LLMs is a much better outcome than only someone like Sam Altman holding the keys.