They’ve grown up online. So why are our kids not better at detecting misinformation?

L4sBot@lemmy.world mod to Technology@lemmy.world – 375 points – thestar.com

Recent studies have shown teens are more susceptible than adults. It’s a problem researchers, teachers and parents are only beginning to understand.



Calling them psychologists is giving them too much credit, but you're right that the companies trying to trick them are putting tons of resources into it.

Very often this shit is designed by people with psychology degrees.

I thought marketing and media people generally have communication degrees.

User researcher is a job that’s becoming more common at tech firms, and it usually requires a psychology degree or similar.

You don't need a full penetration of psychology degrees, just a sufficient amount.

The specific field is marketing psychology, a subset of industrial-organizational (I-O) psychology.

I'm not going to mention the company I work for, but I can verify that psychology is being used to advertise to kids. I work in the mass-manufactured food industry.

They will pick out very specific colours, mascot attributes, shapes and more to draw kids' attention.

I shit you not, there's a certain cookie brand with a happy bear on the box that has eyes that look upwards. The entire purpose of this is to subconsciously make kids think that they're making eye contact with the happy mascot, so they'll trust it more. Certain colours are also believed to trigger more hunger in consumers. They play on so many factors in advertising that it isn't funny.

This is just one example, but this is definitely a thing that is happening in many companies.

The serious stuff increasingly comes from nation states, and there are for sure psychologists involved in that.

The influence operation was the seventh from China that Meta has removed in the last six years. Four of them were found in the last year, said the company, which published details of the new operation as part of a quarterly security report.

The effort appeared to “learn and mimic” Russian-style influence operations, Meta said. It also appeared aimed at a broad audience. At times, posts were in Chinese on websites such as the Chinese financial forum Nanyangmoney. At other times, posts were in Russian, German, French, Korean, Thai and Welsh on sites such as Facebook and Instagram, which are banned in China.