George Hotz and Eliezer Yudkowsky AI Safety Debate - 8/15/23 5pm ET

manitcor@lemmy.intai.tech to Technology@beehaw.org – 13 points –

Copying from intai:

Oh wow, this has gone in an interesting direction. I think Yudkowsky is a moral realist!

I’m not sure what Hotz means about the prisoner’s dilemma. In the one-shot game, the Nash equilibrium is to defect; this is well known. Once you repeat the game and add some noise, more cooperative equilibria emerge.
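To make the one-shot claim concrete, here is a small sketch that checks every strategy pair against unilateral deviations, assuming the standard textbook payoffs (temptation 5, reward 3, punishment 1, sucker 0 — these specific numbers are my assumption, not from the debate):

```python
# Sketch: verify that mutual defection is the only Nash equilibrium
# of the one-shot prisoner's dilemma, under the standard payoffs
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
from itertools import product

C, D = "C", "D"
PAYOFF = {  # (row move, col move) -> (row payoff, col payoff)
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(a, b):
    """Neither player can gain by unilaterally switching their move."""
    row_pay, col_pay = PAYOFF[(a, b)]
    row_best = all(PAYOFF[(alt, b)][0] <= row_pay for alt in (C, D))
    col_best = all(PAYOFF[(a, alt)][1] <= col_pay for alt in (C, D))
    return row_best and col_best

equilibria = [pair for pair in product((C, D), repeat=2) if is_nash(*pair)]
print(equilibria)  # only ('D', 'D') survives
```

The repeated-game story is different: with enough rounds and some noise, conditionally cooperative strategies (tit-for-tat and its forgiving variants) can sustain cooperation as an equilibrium, which is the distinction the comment is drawing.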

I agree with him on this specific thing, though. Maybe there could be an AI cold war, or maybe it would turn hot for the same reason human wars do: a large short-term cost in exchange for an opponent-free long term.