Most U.S. adults don't believe benefits of AI outweigh the risks, new survey finds
axios.com
The majority of U.S. adults don't believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.
Most U.S. adults also don't understand what AI is in the slightest. What effect do the opinions of people who aren't in the slightest educated on the matter even have, lol.
"What do the opinions of people who are not in the slightest educated on the matter affect"
Judging by the elected leaders of the USA: quite a lot, in fact.
That's a harsh self roast lmao
So you'd rather only the 1% get the right to vote? How about only white landowners? How about only men get to vote in this wonderful utopia of yours?
Stop, stop, there isn't any straw left!
Making a mockery of the workforce who rely on jobs to not be homeless isn't appropriate in this conversation, nor is it even an argument to begin with. It's just a snobbish incel who probably lives in a gated community mocking poor people.
You don’t have to understand how an atomic bomb works to know it’s dangerous
Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.
No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don't fight each other directly.
Ukraine had nukes, gave them up, and was promptly invaded by Russia.
Things that seem dangerous aren't always dangerous. Things that seem safe aren't always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.
Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is gonna break loose. This has almost happened multiple times:
https://en.wikipedia.org/wiki/List_of_nuclear_close_calls
We've just been lucky so far.
And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.
…and nuclear power has been one of the most important developments in civil infrastructure in the last century.
Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.
We'll probably get to the same place with ML… eventually. Right now, everyone's just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use, particularly when the outputs are deployed widely in production environments.
If you're from one of the countries with nukes, of course you'll see it as positive. For the victims of the nuke-wielding countries, not so much.
That's a good point; however, just because the bad thing hasn't happened yet doesn't mean it won't. Everything has pros and cons, it's a matter of whether or not the pros outweigh the cons.
I don't disagree with your overall point, but as they say, anything that can happen, will happen. I don't know when it will happen; tomorrow, 50 years, 1000 years... eventually nuclear weapons will be used in warfare again, and it will be a dark time.
Except the current world war.
You need to understand it to classify the danger correctly, though.
Otherwise you make stupid decisions such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident caused just a single casualty due to radiation.
You chose an analogy with the most limited scope possible, but sure, I'll go with it. To understand exactly how dangerous an atomic bomb is without just looking up Hiroshima, you need at least some knowledge of the subject, and you'd also have to understand all the nuances, etc. The thing about AI is that most people haven't a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.
I don't think most people think AI is sentient. In my experience, the people who think that are the ones who think they're the most educated, saying stuff like "neural networks are basically the same as a human brain."
You don't think so, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient. He took a lot of idiots down with him when he went public with those claims. Not to mention the movies that were made with the premise of sentient AI.
Your anecdotal experience and your feelings don't in the slightest change the reality that there are tons of people who think AI is sentient and will somehow start some fucking robo revolution.
I'm over here asking chatGPT for help with a pandas dataframe and loving every minute of it. At what point am I going to feel the effects of nuclear warfare?
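To be concrete, the kind of thing I'm asking it about is roughly this (a made-up example, not my actual data or its exact answer):

```python
# made-up example of the sort of pandas question I throw at it
import pandas as pd

df = pd.DataFrame({
    "city": ["NYC", "NYC", "Boston"],
    "sales": [10, 20, 5],
})

# "how do I get total sales per city?" -> groupby + sum
totals = df.groupby("city")["sales"].sum()
print(totals)
```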
I'm confused how this is relevant. Just pointing out this is a bad take, not saying nukes are the same as AI. chatGPT isn't the only AI out there, btw. For example, NYC just allowed the police to use AI to profile potential criminals… you think that's a good thing?
Sounds like NYC police are the problem in that scenario.
Yeah sure “guns don’t kill people, people kill people” is an outrageous take.
The take is "let's not forget to hold people accountable for the shitty things they do." AI is not a killing machine. Guns aren't particularly productive.
you also don't have to understand how 5G works to know it spreads covid /s
Point is, I don't see how your analogy works beyond the limited scope of things that result in an immediate loss of life.
I don't need to know the ins and outs of how the Nazi regime operated to know it was bad for humanity. I don't need to know how a vaccine works to know it's probably good for me to get one. I don't need to know the ins and outs of personal data collection and exploitation to know it's probably not good for society. There are lots of examples.
Okay, I'll concede, my scope was also pretty limited. I still stand by not trusting the public to decide the best use of AI when most people think what we have now is something more than statistics with a supercharged implementation.
I can certainly grant that "you" don't need to know, but there are a lot of differing opinions, even on the things you're talking about, among the people in this very community.
I would say that the royal "we" does need to know, because a lot of people hold opinions about the facts that don't line up with the actual facts. Sure, not you, not me, but a hell of a lot of people.
I don't disagree that people are stupid, but the majority of people got/supported the vaccine. A majority is sometimes a good indicator; that's how democracy works. Again, it's not perfect, but it's not useless either.
Because they live in the same society as you, and they get to decide who goes to jail as much as you do
Nice argument you made there. We don't decide who goes to jail; a judge does that, someone who studied law.
Are you familiar with juries?
No, I'm from a country where the jury has all studied law and isn't a 64-year-old Margereta who wants some drama to tell at her knitting and book clubs.
There are a lot more countries than yours, believe it or not, and some of them don't have the same justice system as yours. Do people in your country have the right to vote? Same sentiment: do you think that's a stupid system?
Your first argument can be used against you, lmao. Your second argument is a strawman. Good job.
Well, and being a snob about it doesn't help. If all the average Joe knows about AI is what Google or OpenAI pushed to corporate media, that shouldn't be where the conversation ends.
The average Joe can have their thoughts on it all they want, but their opinions on the matter aren't really valid or of any importance. AI is best left to the people who have deep knowledge of the subject, just as nuclear fusion is best left to the scientists studying the field. I'm not going to tell average Joe the mechanic that I think the engine he just overhauled might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge of it, but that's pretty much where it ends, too.
You can not know the nuanced details of something and still be (rightly) sketched out by it.
I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.
What a terrible thing to say. They're human beings, so I hope they matter to you.
They do not matter to me.
Well, at least you admit you're a terrible person
I am a terrible person simply because they don't matter to me? Do you cry for every death your military caused? Do you cry for every couple with a stillborn baby? No, you don't. You think it's shitty, because it is. But you don't really care; they don't truly matter to you. The way you throw those words around makes them mean less.
A lot of words just to say you're a terrible person. We got it already; you don't need to explain why you're terrible.
You didn't deny any of it. Tells me all I need to know about you.