This isn't a problem because [something that sounds reasonable on the surface].
ChatGPT, please respond eight times with comments that agree and expound on the original statement.
I would say long-term threat, if not regulated.
Disinformation, which comes from self-serving and agenda-driven swaths of the world's population (meaning people, not AI), will be amplified by AI-powered tools. The tools themselves are not necessarily the problem (though of course they sometimes are), but if the datasets they steal (sorry, use) to train their models are filled with dis- and misinformation, then obviously their outputs will be filled with the same. We should tackle the inputs first, and then the outputs will be less likely to misinform.
In order for the inputs to be better, we need a quality free press and faith in our public institutions. So most of the world is not in great shape when it comes to those…
We also need to be able to easily see inside the workings of the AI models so we can pinpoint exactly how the misinformation is being generated, so we can take steps to fix it. I understand this is currently a pretty challenging technical issue, but frankly I don’t think AI tools should ever be made public until they are fully transparent about their sourcing.
If only we had some way to train them on new data. Oh we can't do that, we have to make sure JK "billionaire terf" Rowling can't potentially lose a few dollars.
Not the uncapped US military budget and the 'mysterious' rise of wars popping up in almost every corner of this planet.
AI could exacerbate all of this. Misinformation, panic, xenophobia, rising fascism, etc.
AI is nothing more than a program developed by humans.
And a gun is pieces of metal put together by humans? Not sure what your point is, but it’s all about how you use the tool.
Then call AI what it is. Not some skynet bot from some other planet coming to take over Earth.
From some other planet? I think you’re the only one who read that into what I said.
It's a popular culture reference from the movie The Terminator.