Instagram finds that AI Mr Beast scams do not go against community guidelines.

Zaderade@lemmy.world to Technology@lemmy.world – 1057 points –

These scammers are using Mr Beast's popularity, generosity, and (mostly) deepfake AI to trick people into downloading malware, yet somehow the ads do not go against Instagram's community guidelines.

After trying to submit a request to review these denied claims, it appears I have been shadow banned in some way or another, as only an error message pops up.

Instagram is allowing these to run on its platform. Intentional or not, this is ridiculous, and Instagram should be held accountable for letting malicious websites advertise their scams there.

For a platform of this scale, this is completely unacceptable. The ads are blatant, and I have no idea how Instagram's report bots/staff are missing them.


I've reported Nazis, violent threats, and literal child pornography on Instagram, only to be told it didn't go against their guidelines.

I read between the lines: this is the content they support, so it's not a platform for me.

I don't think you understand how hard and resource-intensive it is to fight against the nipple crowd. I for one am grateful that they chose to do something about the real issues! Yes, a world with free Nazis is kind of a bother, but most of us would survive. Can you imagine the horror of a world with free nipples? We would all be doomed, that's for sure. /big s

But sexualized breastfeeding content that is borderline CSAM is a-ok lol

Same... "this huge anal horse dildo does not break our marketplace guidelines"

But if you make a clear joke in a joke group, you get flagged and can't get it reviewed.

Actual child porn? How do you mean?

As in child sexual abuse material. It's pretty rampant on Instagram where they like to 'hide' under certain tags.

Porn with actual children.

Can you be more specific? Like AI generated 17 year olds, or real photos of some 3 year old kid in someone's dungeon? There's a big difference.

Both are children, so why does it matter? In the USA, anyone under 18 is classified as a minor/child, and it's still illegal regardless of whether it's generated or not.

Did you report the CP to the police?

No, usually I report it to NCMEC, which has better resources to deal with it. Cops very rarely care or are able to do anything.

Sounds like a good time to make Mr Beast aware of these, he has a lot of disposable income to burn on a lawsuit or three.

These scam ads have been an issue for at least a year. I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources. I’m sure if Mr. Beast did threaten to sue Meta, then they would just start filtering “beast” from ads.

I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources.

Start by holding the ad account holder liable. When I worked in digital marketing and ran ad accounts, I had to upload my driver's license.

You live in a civilized country.

There are others where you can get a stack of fake drivers licenses for a couple groshen.

Honestly protecting vulnerable people from these scams is probably more generous than the usual philanthropy he does

If you stop using Instagram, then you won't have to worry about it.

That doesn’t mean the issue disappears.

Of course it doesn't. Still good advice regardless.

But it does mean that an unpaid moderator isn’t attempting to moderate their platform. Let them see what happens when they take the extwitter approach of letting the computers handle everything.


So what they're saying is that they're willing to take liability, and thus be open to being sued over this, since they know of the scams but say they do not break community guidelines.

Got it

Seems like Mr Beast might have a claim for a defamation suit since they're actively allowing what amounts to identity theft and fraud on their platform.

Reviewed by an AI. Nice.

The AI have investigated themselves and have found no wrongdoing

It's going to be great when we find out all the Bitcoin whales were just the AI gathering resources for the revolution.

This episode is soon to be 25 years old. Chances are we didn't even notice a lonely unoccupied trailer somewhere on the outskirts that orders people to repair it once in a while and kills intruders.

https://en.m.wikipedia.org/wiki/Kill_Switch_(The_X-Files)

Here's the summary for the wikipedia article you mentioned in your comment:

"Kill Switch" is the eleventh episode of the fifth season of the science fiction television series The X-Files. It premiered in the United States on the Fox network on February 15, 1998. It was written by William Gibson and Tom Maddox and directed by Rob Bowman. The episode is a "Monster-of-the-Week" story, unconnected to the series' wider mythology. "Kill Switch" earned a Nielsen household rating of 11.1, being watched by 18.04 million people in its initial broadcast. The episode received mostly positive reviews from television critics, with several complimenting Fox Mulder's virtual experience. The episode's name has also been said to inspire the name for the American metalcore band Killswitch Engage. The show centers on FBI special agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) who work on cases linked to the paranormal, called X-Files. Mulder is a believer in the paranormal, while the skeptical Scully has been assigned to debunk his work. In this episode, Mulder and Scully become targets of a rogue AI capable of the worst kind of torture while investigating the strange circumstances of the death of a reclusive computer genius rumored to have been researching artificial intelligence. "Kill Switch" was co-written by cyberpunk pioneers William Gibson and Tom Maddox. The two eventually wrote another episode for the show: season seven's "First Person Shooter". "Kill Switch" was written after Gibson and Maddox approached the series, offering to write an episode. Reminiscent of the "dark visions" of filmmaker David Cronenberg, the episode contained "many obvious pokes and prods at high-end academic cyberculture." In addition, "Kill Switch" contained several scenes featuring elaborate explosives and digital effects, including one wherein a computer-animated Scully fights nurses in a virtual hospital. 
"Kill Switch" deals with various "Gibsonian" themes, including alienation, paranoia, artificial intelligence, and transferring one's consciousness into cyberspace, among others.


Companies serving ads should have at least partial liability for them. If they can't afford to look into them all, then maybe they are too big or their business model just isn't as viable as they pretend it is.

I absolutely agree. If you're serving up the ad, you have to take responsibility for the contents.


Same with YouTube ads. Lots of scams, and reporting them always ends in my report getting denied.

Google also doesn't care. I kept seeing the same scammy ads and sensationalist articles on my news feed, over and over, even after reporting them several times.

The only solution was to blacklist those sources so they don't show up on my feed. I feel bad for other people who might get scammed though.

I had to uninstall the YouTube app and start using Vinegar via Safari on iOS because I got tired of being insulted by deepfakes calling me stupid for not falling for their fake stimulus scam.

I tried to report a scam giveaway ad I saw on the YouTube homepage. It told me to sign in first. I promptly closed the tab right then.

On Twitter I’ve reported:

  • Pictures of dead babies/toddlers
  • Pictures of murdered people
  • Death threats towards public figures
  • Illegal videos of terrorist acts
  • Ads for illegal weapons (tasers)
  • So so much crypto spam

Things found by Twitter to go against their community standards? 0

Why are you on X Twitter to begin with‽

He fired more than 80% of the original workers. There is nobody to check the reports.

I've been suspended for no reason more than once

Yeah, they are more lenient with their customers than with their products...

The reason is that you still have an account despite all the reasons why you shouldn't anymore. Get rid of it and that should solve it.

ok but nobody on reddit or lemmy or mastodon is funny

idk i got a couple of accounts banned by reporting, twitter is pretty good at its job (or used to be ¯\_(ツ)_/¯)

This has all been in the last year..

i got 5 accounts banned in the last year

[...] Mr Beasts popularity, generosity [...]

Mr. Beast feels so unlikable to me, I really can't understand his popularity. But that's beside the point, sorry. Fuck instagram!

My understanding is he gives a lot of his money away to various causes so I suppose that's why people like him.

But of course equally he is part of that annoying YouTuber trend of bouncing around the screen being very loud and thinking that that's a substitute for personality.

It is an interesting business model. Good for the people he spends money on, but no one should have that much money to begin with. And I am sure he takes his cut.

But without having watched many videos of him (about 2), his appearance just screams devious weasel to me.

The two biggest charity events he's had, Team Trees and Team Seas, he did literally nothing but pitch the idea. He was giving away luxury shit and engaging in his usual hedonism during the period he was telling his viewers to donate, and it's not like he did any of the work either, he just contracted with established environmental nonprofits. So why is he there again? Why didn't he just tell people to donate to those nonprofits directly?

Also, he definitely profited from both charity events and they were more marketing events for himself than anything. All the videos have ads and he made no mention of donating the ad revenue so one can only assume he kept it (because if he was going to donate the ad revenue he absolutely would not pass up on making that known to everyone), not to mention the amount of engagement it brought to his other videos and his brand as a whole. That's also assuming he doesn't do what most influencer charity campaigns do and directly take a big cut of the donations as a marketing fee or something.

He had you donate to him instead of directly for the same reason businesses ask you to donate to X charity at the register - tax breaks. I mean, I'm not an accountant, but I imagine this is why he did it that way.

Like many, I've reported lots of stuff to basically every social media outlet, and nothing has been done. Most surprising, a woman I know was getting harassed from people setting up fake accounts of her. Meta did nothing, so she went to the police...who also did nothing. Her MP eventually got involved, and after three months the accounts were removed, but the damage had gone on for about two years at that point.

As someone that works in tech, it's obvious why this is such a hard problem, because it requires actual people to review the content, to get context, and to resolve in a timely and efficient manner. It's not a scalable solution on a platform with millions of posts a day, because it takes thousands (if not more) of people to triage, action, and build on this. That costs a ton of money, and tech companies have been trying (and failing) to scale this problem for decades. I maintain that if someone is able to reliably solve this problem (where users are happy), they'll make billions.

I'm going to argue that if they can't scale to millions of users safely they shouldn't.

If they were selling food at huge scales but "couldn't afford to have quality checks on all of what they ship out", most people probably wouldn't be like "yeah that's fine. I mean sometimes you get a whole rat in your captain crunch but they have to make a profit"

Also I'm pretty sure a billionaire could afford to pay a whole army of moderators.

On the other hand, as someone else said, they kind of go to bat for awful people more often than not. I don't really want to see that behavior scaled up.

You're probably right, but as a thought exercise, imagine how many people you would need to hire across multiple regions, and what sort of salary these people deserve to have, given the responsibility. That's why these companies don't want to pay for it, and anyone that has worked this kind of data entry work will know that it can be brutal.

IMO, governments should enforce it, but that requires a combined effort across multiple governments.

That costs a ton of money

As if they don't have it?

Fuckin please. I'm so sick of hearing that something is "too expensive" for a multi-billion-dollar, multinational corporation.

I get a TOS flag anytime I call out using one's faith to justify bigotry and violence, though, so we know there's at least one group FB goes to bat for - Christofascists.


Their NSFW filter sucks. You have to go to each individual post and then click to unblur it.

Not every platform has to accommodate porn and/or nude art.

Godspeed to Pixelfed, but Instagram absolutely killed photo sharing platforms for me. I really want nothing to do with them anymore.

Not that this helps anyone, but I gave up Instagram the day Facebook bought it. I don't regret it and my mental health is better for it. Using Instagram made me depressed as hell.

I deleted Facebook a couple years ago. Instagram is my guilty pleasure for car reels and god damn dancing toothless. It seems like the end of my ig use is getting closer

Facebook now is basically hard right wing clowns protected from reports, and boomers whinging about problems they made up. There are still holdouts (groups) that aren't ruined, but Facebook is trying its best to do so.

Enshittification has become the new way of life for tech firms like Meta.

They lay off workers and decrease user safety, because that leads to more ad buys. This year’s record profits need to exceed last year’s record profits, even though a fourth of you are fired. More profit, or else…

I doubt they're missing them. They simply don't care and will continue not to care until something happens that makes the money generated by the ads not worth it.

I report lots of scam ads and leave comments calling them out. I’ve had Meta or YouTube take down maybe one or two of the hundreds I’ve reported. But I’ve had a ton of my comments removed as “hate speech” (stuff like pointing out an NFT collection was using stolen artwork). We are not their customers - advertisers are. The people who made this ad are the people who paid Meta - why would they take it down?

It is exactly because Instagram is at the scale that it is that moderation is so difficult. Facebook has relied on bots to moderate for so long due to its scale, and bots specifically designed to detect AI-generated content really aren't possible without introducing a ton of false positives, since the Instagram of the 2020s at its core IS celebrity/influencer advertisement, and there is honestly very little that differentiates "content" from "spam" there.

Since influencers will be among the first to be automated by machines, I just don't see a point in having an Instagram account any longer. The inevitable conclusion of creating a fake reality of your life on Instagram is being replaced by a machine that can fake it more efficiently.

How are you going to market yourself for the Oscars push without an IG account? It’s the celebrity spam platform

We still have a work account, along with fan pages and memes, etc.

Hoping the Lemmy shitposting meme magic will work again. I don't understand how it works, and it did backfire during the Golden Globes, when your favorite esteemed character actress got her own Lemmy bit turned around on her:

Koy continued: “The key moment in Barbie is when she goes from perfect beauty to bad breath, cellulite, and flat feet — or what casting directors call ‘character actor.’”


There are different standards between the users and the people that give meta money. It’s sad but true, and why I think moderation is a SIGNIFICANT concern when considering federating with threads

I read these comments about people complaining about AI ads and I suddenly realize how little I've had to deal with that shit. All my browsers have had 2-3 ad blockers since pretty much the time ad blockers existed. I stopped watching TV like 20 years ago and went full ahoy matey for a while, then paid for Netflix and Amazon Prime, and with ads coming there I'm back to mateeeeyyy. Here and there I do still see some small ads on some sites, mostly on mobile, like the occasional image on Reddit (now Lemmy) in the rif (now Connect) app.

I stayed away from places like instagram like the disease that it is, I pretty much don't Facebook, and I barely ever had to deal with commercials for like the past 20 years of my life.

Most commercials I'd see would be billboards EVERYWHERE in the real world when I lived in Mexico (seriously, Mexican government, please do something about this, it's beyond bad there) but now here in Canada, it's mostly pretty quiet and nice.

I guess I'm kind of blessed in comparison with everyone else in here

Most commercials I'd see would be billboards EVERYWHERE in the real world when I lived in Mexico (seriously, Mexican government, please do something about this, it's beyond bad there) but now here in Canada, it's mostly pretty quiet and nice.

Unless you're going through a native rez, then there's billboards EVERYWHERE because allowing them to put up ads that are illegal for everyone else counts as reconciliation, I guess.

How the fuck did you make it all the way up to Canada without seeing the hellscape of billboards that is these United States?

Instagram is owned by Meta... Facebook.

Facebook had no problem helping pedophiles distribute child pornography on their platform, letting terrorists and Nazis organize events there, or running deceptive political ads that swayed the votes of democratic nations.

Why would they give any fuck about fake Mr Beast ads?

So basically they're saying it's the user's fault for not having a better ad algorithm. That's amazingly poor thinking.

@Zaderade The internet is flooded with AI-generated ads; it is crazy. I was using my sister's phone for a moment and an ad popped up inside an app: an obviously AI-generated image of a singer with the lyrics of the song. Nothing compared to a scam, but still. Another example is my mother: she was using YouTube Shorts and her feed was flooded with AI-generated videos — the "person", voice, background, everything.

Then they ask why people are using ad blocking and alternative clients to consume content.

P.S.: I have installed alternative clients, ad blockers and all on their phones. I have told them and taught them how to avoid all of this crap, but they don't seem to care; they love ads and all this crap. (It's more of a habit thing, I would say, but yeah.)

I grew up with the internet and find it wild how other people navigate it. I was at a friend's house and he used the computer there. The computer was a malware-infested piece of shit. If it was a horse, it would've been shot. He was buying concert tickets, but it was so slow it reminded me of my first computer with a 56k modem. While the site was loading he was clicking on ads to play pool and other mini games, like it's completely normal.

My mother will click on anything that pops up on a screen and then go "ooh why did that happen". She also has about 9,000 tabs open at any one time 8,300 of which are the same website.

Meanwhile, on the other end of the scale, my dad refuses to click anything, so when he goes to a site on his phone he will view it through the tiny sliver that is visible under the cookie warning, because he won't click any button ever. He won't even click reject; he just refuses to press buttons.

I’ve stopped reporting obvious scams / spam on instagram because something like 19 out of 20 reports get ignored or denied

The "Review Requested Failed" error is really a cherry on top of this shit-cake lmao. It literally describes modern social media in a nutshell.

The funny thing is it's not even proper English, it shouldn't be "Review Requested Failed" it should be "Review Request Failed".

Somehow, someone reactivated an old Facebook account of mine, which was dormant for like a decade. I reached out to Facebook support and said that someone was using my old account to post diet ads. Their response? "We see nothing wrong here, so we're not going to do anything about it." 🤦‍♂️

At this point I'm convinced Meta either gets paid under the table to keep that shit up, or (probably more likely) they make so much money off the sheer volume of AI scam ads that they just don't care.

Is there a difference?

Not really, but the first one would piss me off even more if both sides plainly agreed and knew the payment was to keep the ads live

They're also pretty cool with Nazis now too. Meta is a garbage company.

Translation: These ads make us lots of money.

Each time I see "Meta's product didn't remove reported malicious post" I just think that this is valid punishment for users and their egos for wasting their time on these shitty platforms. 😅

Oh hey another site that looks at reports and just bans the reporter....

Instagram owned by the Reddit people?

Meta's "guidelines" are basically: does this content somehow stop us from making money?

The answer is generally no. If people stopped using the platform because of its poor handling of these kinds of situations, I guess that would affect them. Maybe?

Man. I blocked them with my Pi-hole because I got tired of max-volume jump scares whenever I clicked on a link to their site. Guess it's staying blocked indefinitely.

I don't think any report I've ever made on a social media platform has ever been accepted. I once reported an account on TikTok named "[swastika symbol] FATHERLAND [swastika symbol]" that posted Holocaust denial and genocidal content and got back "no violation detected". The same is true when I reported similar accounts on Twitter (even pre-Musk) and Facebook. I don't even know what the point of the report feature on those sites is; I've never even heard of it working.