Fake Biden video prompts call for Meta to label posts
bbc.com
"Since the quality of AI deception and the ways you can do it keeps improving and shifting this is an important element to keep the policy dynamic as AI and usage gets more pervasive or more deceptive, or people get more accustomed to it," Mr Gregory said.
He added focusing on labelling fake posts would be an effective solution for some content, such as videos which have been recycled or recirculated from a previous event, but he was sceptical about the effectiveness of automatically labelling content manipulated using emerging AI tools.
Are there any social scientists on Lemmy? Have we actually studied the effects of labeling misinformation as opposed to removing it? Does labeling misinformation actually stop it from spreading and being believed, or does it reinforce conspiratorial thinking? This is not a rhetorical question - I genuinely don't know.
I'm not a social scientist, but it's a mixed bag. Here are the top results from Google Scholar:
There is growing concern over the spread of misinformation online. One widely adopted intervention by platforms for addressing falsehoods is applying “warning labels” to posts deemed inaccurate by fact-checkers. Despite a rich literature on correcting misinformation after exposure, much less work has examined the effectiveness of warning labels presented concurrent with exposure. Promisingly, existing research suggests that warning labels effectively reduce belief and spread of misinformation. The size of these beneficial effects depends on how the labels are implemented and the characteristics of the content being labeled. Despite some individual differences, recent evidence indicates that warning labels are generally effective across party lines and other demographic characteristics.
Social media platforms face rampant misinformation spread through multimedia posts shared in highly-personalized contexts [10, 11]. Foundational qualitative research is necessary to ensure platforms’ misinformation interventions are aligned with users’ needs and understanding of information in their own contexts, across platforms. In two studies, we combined in-depth interviews (n=15) with diary and co-design methods (n=23) to investigate how a mix of Americans exposed to misinformation during COVID-19 understand their information environments, including encounters with interventions such as Facebook fact-checking labels. Analysis reveals a deep division in user attitudes about platform labeling interventions, perceived by 7/15 interview participants as biased and punitive. As a result, we argue for the need to better research the unintended consequences of labeling interventions on factual beliefs and attitudes.
These findings also complicate discussion around "the backfire effect", the idea that when a claim aligns with someone’s ideological beliefs, telling them that it’s wrong will actually make them believe it even more strongly [35]. Though this phenomenon is thought to be rare, our findings suggest that emotionally-charged, defensive backfire reactions may be common in practice for American social media users encountering corrections on social media posts about news topics. While our sample size was too small to definitively measure whether the labels actually strengthened beliefs in inaccurate claims, at the very least, reactions described above showed doubt and distrust toward the credibility of labels--often with reason, as in the case of "false positive" automated application of labels in inappropriate contexts.
In the case of state-controlled media outlets on YouTube, Facebook, and Twitter this has taken the form of labeling their connection to a state. We show that these labels have the ability to mitigate the effects of viewing election misinformation from the Russian media channel RT. However, this is only the case when the platform prominently places the label so as not to be missed by users.
Using appropriate statistical tools, we find that, overall, label placement did not change the propensity of users to share and engage with labeled content, but the falsity of content did. However, we show that the presence of textual overlap in labels did reduce user interactions, while stronger rebuttals reduced the toxicity in comments. We also find that users were more likely to discuss their positions on the underlying tweets in replies when the labels contained rebuttals. When false content was labeled, results show that liberals engaged more than conservatives. Labels also increased the engagement of more passive Twitter users. This case study has direct implications for the design of effective soft moderation and related policies.
One thing we know for certain is that handing the government the ability to mandate control over our information flow is one of the primary tools by which fascism took hold in mid-20th century Europe.
Okay.
Someone do one on Zuckerberg and get it on the platform! Pronto!
What fake video?
Where's the god damn video?
Democrats always lose because they fight with one arm behind their back while the Republicans use fire.
Meta is not going to do the right thing. Fight fire with fire and make deepfakes of Trump too and all his ilk
Doesn’t even need to be deepfakes. They’re already that awful.
Yeah, you can show MAGA morons actual videos of Trump committing crimes or admitting to them, and they will brush it off. Comedians have been doing this for a while at rallies. It's not so funny lately, since they're winning in current polls.
Fuck polls.
No, let's pay attention to them, and vote like it matters.
Polls indicate that Democrats need to move left in order to win. Democrats would rather lose.
No you make videos of Trump saying he wants open boarders. Or that we need to raise taxes. Or police need to be de-funded. Things that are awful to his voters.
If they don’t believe he said things to them live, and in person… it doesn’t matter. They won’t believe anything negative anyway.
This isn’t for the die hard trumpers, it’s for the people who are actually persuadable; and lying about it… is just going to fuel the fake news narrative.
We don’t have to be dishonest, he’s awful enough, so why shoot yourself in the foot with it?
Borders*
Bad idea. Moderates would then see him as a moderate candidate.
By making deepfakes saying he's for things moderates oppose?
Moderates, by definition, don't oppose anything strongly, or they wouldn't be moderates.
They oppose progressives very strongly indeed. They never oppose conservatives or fascists in any meaningful way.
Then we use the same term to refer to different groups.
Someone who opposes a progressive strongly is a conservative. Some conservatives call themselves moderates because they know a large percentage of women won't fuck conservatives.
Then the Democratic Party contains a lot of conservatives, since they oppose progressives more strongly than they've ever opposed fascists.
Do they care though? They went from "Russia is the greatest evil threat" to "Putin is our friend, why don't you let him conquer Europe?" in just a few years.
Dementia Donnie would love nothing more than a flood of fake videos to distract from his actual statements.
This is probably the stupidest take I've seen.
Absolutely! I love it!
Is Bam Margera free?
I agree that dems, quite often, well....do things like absolute pussies. But....deep fakes are not the answer.
I agree with the first sentence, not so much the second part. The apathy and lack of fight in Biden's campaign sucks; it's like the Democrats are defending the title but bringing a pillow to the boxing ring.