What do you mean by not too far? Because I can see A LOT of issues which cannot be solved easily. Even with maximum wishful thinking and a lot of handwaving that's not something that's possible even by a long shot.
Or do you mean not too far, like within the next 1000 years? Cause yeah I could believe that.
Also, AI doesn't mean general intelligence, i.e. intelligence like human intelligence. Don't be fooled by the PR and hype going around; LLMs aren't general intelligence.
We are not too far from self-replicating machines controlled by AI. They will be built with a simple purpose: find resources and replicate.
Imagine a world with people trying to eliminate such machines, while machines learn new ways to get around any restrictions.
Ah the Hartz-Timor Swarm.
I was thinking of paperclips.