Soliciting Feedback for Improvements to the Media Bias Fact Checker Bot

jeffw@lemmy.world (mod) to News@lemmy.world – 139 points

Hi all!

As many of you have noticed, many Lemmy.World communities introduced a bot: @MediaBiasFactChecker@lemmy.world. This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.

Another common concern people have is with MBFC’s definition of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have expressed that they feel MBFC’s process of rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open source system of scoring news sources. We would essentially start with third-party ratings, including MBFC, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
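To make the aggregation idea more concrete, here is a minimal sketch of how a composite score could be computed. Everything in it is a placeholder: the checker names, the weights, and the assumption that each checker's rating has already been normalized to a common 0–1 scale are illustrative only, not decisions we have made.

```python
# Minimal sketch of an aggregate reliability score. The checker names and
# weights below are placeholders; a real version would first normalize each
# checker's own scale (letter grades, point systems, etc.) to 0.0-1.0.

RATINGS = {
    # source domain -> {checker name: normalized reliability score in [0, 1]}
    "example-outlet.com": {"mbfc": 0.75, "checker_b": 0.60, "checker_c": 0.80},
}

# Community votes could be folded in later as another entry with its own weight.
WEIGHTS = {"mbfc": 1.0, "checker_b": 1.0, "checker_c": 1.0}


def aggregate_reliability(source: str) -> float | None:
    """Weighted average of whatever ratings we have for a source."""
    ratings = RATINGS.get(source)
    if not ratings:
        return None  # unknown source: better to say nothing than to guess
    total_weight = sum(WEIGHTS.get(name, 1.0) for name in ratings)
    weighted_sum = sum(score * WEIGHTS.get(name, 1.0) for name, score in ratings.items())
    return weighted_sum / total_weight


if __name__ == "__main__":
    print(aggregate_reliability("example-outlet.com"))  # ~0.72
```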

My personal view is to remove the bot. I don't think we should be promoting one organisation's particular views as an authority. My suggestion would be to replace it with a pinned post linking to useful resources for critical thinking and analysing news. Teaching to fish vs giving a fish kind of thing.

If we are determined to have a bot like this as a community, then I would strongly suggest at the very least removing the bias rating. The factuality rating is based on an objective measure of failed fact checks, which you can click through to see. This still has problems, though: corrections or retractions by the publisher are sometimes taken note of and sometimes not, potentially leaving the reader with a false impression of the source's reliability.

The bias rating, however, is completely subjective, and sometimes the claimed reasons for a rating contradict themselves or other third-party analyses. I made a thread on this in the support community, but TLDR: see if you can tell the specific reason for the BBC's bias rating of left-centre. I personally can't. Is it because they posted a negative-sounding headline about Trump once, or is it biased story selection? What does biased story selection mean and how is it measured? This is troubling because, in my view, it casts doubt on the reliability of the whole system.

I can't see how this can help advance the goal (and it is a good goal) of being aware of source bias when in effect, we are simply adding another bias to contend with. I suspect it's actually an intractable problem which is why I suggest linking to educational resources instead. In my home country critical analysis of news is a required course but it's probably not the case everywhere and honestly I could probably use a refresher myself if some good sources exist for that.

Thanks to those involved in the bot, though, for their work and for being open to feedback. I think the goal is a good one; I just don't think this solution really helps, but I'm sure others have different views.

Removing the bias rating might be enough indeed.

Nah even credibility is subjective to MBFC.

The bot calls Al Jazeera "mixed" factually (which is normally reserved for explicit propaganda sources), and then if you look at the details, they don't even pretend it has anything to do with their factual record -- just, okay they're not lying but they're so against Israel that we have to say something bad about them.

One issue with poor media literacy is that I don’t think people are going to go out of their way to improve their literacy on their own just from a pinned post. We could include a link in the bot’s comment to a resource like that though.

Do you think that the bias rating would be improved by aggregating multiple fact checkers' opinions into one score?

Yeah it's definitely a good point, although I would argue people not interested in improving their media literacy should not be exposed to a questionable bias rating as they are the most likely to take it at face value and be misled.

The idea of multiple bias sources is an interesting one. It's less about quantity than quality though I think. If there are two organisations that use thorough and consistent rating systems it could be useful to have both. I'm still not convinced that it's even a solvable problem though but maybe I'm just being too pessimistic and someone out there has come up with a good solution.

Either way I appreciate that it's a really tough job to come up with a solution here so best of luck to you and thanks for reading the feedback.

One problem I’ve noticed is that the bot doesn’t differentiate between news articles and opinion pieces. One of the most egregious examples is the NYT. Opinion pieces aren’t held to the same journalistic standards as news articles and shouldn’t be judged for bias and accuracy in the same way as news content.

I believe most major news organizations include the word "Opinion" in titles and URLs, so perhaps the bot could key off that to label these appropriately. I don't expect you to judge the bias and accuracy of each opinion writer, but simply labeling them with "Opinion pieces are not required to meet accepted journalistic standards and bias is expected." would go a long way.
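To sketch what that keying-off might look like (purely a heuristic; the path and title patterns below are just common conventions, not a verified list, and would need checking against the outlets actually posted here):

```python
from urllib.parse import urlparse

# Common conventions only, not an exhaustive or verified list of how every
# outlet marks its opinion content.
OPINION_PATH_HINTS = ("/opinion/", "/opinions/", "/commentisfree/", "/op-ed/")
OPINION_TITLE_HINTS = ("opinion |", "opinion:", "| opinion")


def looks_like_opinion(url: str, title: str) -> bool:
    """Heuristic: does the URL path or the headline flag this as an opinion piece?"""
    path = urlparse(url).path.lower()
    title = title.lower()
    return any(hint in path for hint in OPINION_PATH_HINTS) or any(
        hint in title for hint in OPINION_TITLE_HINTS
    )


# Example (hypothetical URL): if this returns True, the bot could prepend the
# suggested disclaimer instead of, or alongside, the usual rating.
print(looks_like_opinion(
    "https://www.example.com/opinion/2024/some-piece",
    "Opinion | Some piece",
))  # True
```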

Thanks for this. As a mod of /c/news, I hadn’t really thought about that. We don’t allow opinion pieces, but this is very relevant if we roll out a new bot for all the communities that currently use the MBFC bot.

No problem. Specifically came to my attention about a week ago on this post where the bot reported on an opinion piece as if it was straight news.

BTW, I actually do appreciate the bot and think it’s doing about as well as it can given the technical limitations of the platform.

Hi. I have a suggestion:

Try to make it more clear that this is not a flawless rating (as that is impossible).

Ways to implement:

  • Make sure the bot says something along the lines of “MBFC rates X news as Y” and not “X news is Y”.
  • Make a caveat (collapsible) at the bottom that says something along the lines of “MBFC is not flawless. It has an American-centric bias and is not particularly clear on methodology, to the point where Wikipedia deems it unreliable; however, we think it is better to have this bot in place as a rough estimate, to discourage posting from bad sources.”
  • If possible, add other sources, like: “MBFC rates the Daily Beast as mostly reliable, Ad Fontes Media rates it as unreliable, and Wikipedia says it is of mixed reliability.”
  • Remove the left-right ratings. We already have reliability and quality ratings, which are much more useful. The left-right rating is frankly poorly done and all over the place, and honestly doesn’t serve much purpose.

This contributes significantly to the noise issue most people complain about

Interesting that people say that opinion pieces should not be held to the same standard. I personally see such pieces contribute to fake news going around. Shouldn't a platform with reach be held accountable for wrong information, even if they hide it behind an opinion piece?

It’s not a question of “should” - an opinion piece is rhetoric, not reporting. You can fact check some of it sometimes but functionally can’t hold it to the same standards as a regular news article. I agree that this can sometimes lead to “alternative facts” and disingenuous arguments, but the only other option is to forbid the publication of them which is obviously an infringement of first amendment rights. It’s messy, and it can lead to people being misinformed, but it’s what we’re stuck with.

Can you explain how a piece with a title like "Helldivers is awesome and fun" can be judged at all for factual accuracy?

The NYT ran an opinion recently where the author pretty clearly was using the NYT along with other outlets as part of a voter demobilization tactic in which the author lied about not voting. The NYT was skewered on twitter, and had to alter the opinion after the fact. It seems like some basic fact checking would have been useful in that situation. Or really, just any amount of critical thought on the part of the NYT in general.

This. Otherwise op-eds get a free pass to launder opinions the paper wants to publish, but can't.

You don't need every post to have a comment basically saying "this source is ok". Just post that the source is unreliable on posts with unreliable sources. The definition of what is left or right is so subjective these days that it's pretty useless. Just don't bother.

I agree with that. Having a warning message when the source is known to be extremely biased and/or unreliable is probably a good thing, but it doesn't need to be in every single thread.

If a source is that bad, it should be banned. I think bot comments on just some posts presents inconsistency.

My personal view is that the bot provides a net negative, and should be removed.

Firstly, I would argue that there are few, if any, users whom the bot has helped avoid misinformation or a skewed perspective. If you know what bias is and how it influences an article then you don't need the bot to tell you. If you don't know or care what bias is then it won't help you.

Secondly, the existence of the bot implies that sources can be reduced to true or false or left or right. Lemmy users tend to deal in absolutes of right or wrong. The world exists in the nuance, in the conflict between differing perspectives. The only way to mitigate misinformation is for people to develop their own skeptical curiosity, and I think the bot is more of a hindrance than a help in this regard.

Thirdly, if it's only misleading 1% of the time then it's doing harm. IDK how sources can be rated when they often vary between articles. It's so reductive that it's misleading.

As regards an open database of bias, it doesn't solve any of the issues listed above.

In summary, we should be trying to promote a healthy sceptical curiosity among users, not trying to tell them how to think.

Thanks for the feedback. I have had the thought about it feeling like mods trying to tell people how to think, although I think crowdsourcing an open source solution might make that slightly better.

One thing that’s frustrating with the MBFC API is that it reduces “far left” and “lean left” to just “left.” I think that gets to your point about binaries, but it is an MBFC issue, not an issue in how we have implemented it. Personally, I think it is better on the credibility/reliability bit, since it does have a range there.

That's perhaps a small part of what I meant about binaries. My point is, the perspective of any given article is nuanced, and categorising bias implies that perspectives can be reduced to one of a few categories.

For example, select a contentious issue like abortion. Collect 100 statements from 100 people regarding various related issues: health concerns, ethics, when an embryo becomes a fetus, fathers' rights. Finally, label each statement as either pro-choice or pro-life.

For somebody trying to understand the complex issues around abortion, the labels are not helpful, and they imply that the entire argument can be reduced to a binary choice. In a word, it's reductive. It breeds a culture of adversity rather than one of understanding.

In addition, I can't help but wonder how much "look at this cool thing I made" is present here. I love playing around with web technologies and code, and love showing off cool things I make to a receptive audience. Seeking feedback from users is obviously a healthy process, and I praise your actions in this regard. However, if I were you I would find it hard not to view that feedback through the prism of wanting users to find my bot useful.

As I started off by saying, I think the bot provides a net negative, as it undermines a culture of curious scepticism.

Just a point of correction, it does distinguish between grades. There is "Center-Left," "Left," and "Extreme Left."

Who fact-checks the fact-checkers? Fact-checking is an essential tool in fighting the waves of fake news polluting the public discourse. But if that fact-checking is partisan, then it only exacerbates the problem of people divided on the basics of a shared reality.

This is why a consortium of fact-checking institutions has joined together to form the International Fact-Checking Network (IFCN) and laid out a code of principles. You can find a list of signatories as well as vetted organizations on their website.

MBFC is not a signatory to the IFCN code of principles. As a partisan organization, it violates the standards that journalists have recognized as essential to restoring trust in the veracity of the news. I've spoken with @Rooki@Lemmy.World about this issue, and his response has been that he will continue to use his tool despite its flaws until something better materializes because the API is free and easy to use. This is like searching for a lost wallet far from where you lost it because the light from the nearby street lamp is better. He is motivated to disregard the harm he is doing to !politics@Lemmy.World, because he doesn't want to pay for the work of actual fact-checkers, and has little regard for the many voices who have spoken out against it in his community.

By giving MBFC another platform to increase its exposure, you are repeating his mistake. Partisan fact-checking sites are worse than no fact-checking at all. Just like how the proliferation of fake news undermines the authority of journalism, the growing popularity of a fact-checking site by a political hack like Dave M. Van Zandt undermines the authority of non-partisan fact-checking institutions in the public consciousness.

Thank you for introducing me to the IFCN. I see maldita.es from Spain is there; they are fact-checking kings.

Thanks, this was a very informative comment. I assume none of the IFCN signatories have a free API? Just asking since you seem pretty well versed on this

I appreciate you reading and responding to my concern instead of censoring me like your fellow mod in !news and !world:

More than half of these occurred in a community you moderate. Do you approve of this use of the term 'spamming' to silence criticism?

Exposing a free API for anyone to use is not typical trade practice for respectable fact-checking operations. You may be able to get free access as a non-profit organization, and that may be worth pursuing. On the other hand, there's a fundamental problem in the disconnect between the goals of real fact-checking websites and the kind of bot you are trying to create.

Thanks, that tip about being a non-profit is a good suggestion. Do you have any specific fact checkers in mind?

In terms of the comments, they look like they are off-topic. There are support communities within Lemmy.world that would be more appropriate places to post concerns. Or even other communities focused on things like Lemmy drama and similar topics like that. But copy/pasting the same comment on multiple threads? Doesn’t matter what you’re saying, we’ll delete it as spam. Done it many times myself, even if I didn’t delete your comments in particular.

This is not a case of copy/pasting the same comment in multiple threads. Please look closer at the comments and the reports. One comment is repeated once, but that is due to it being topical to MBFC's take on the BBC, and both articles were from the BBC.

Also, I'm alarmed you consider contextualization of MBFC in comments that reply to the Bot as 'off-topic.' The Bot created the topic of MBFC's credibility by linking to it as an authoritative source. If a comment about the credibility of the BBC in reply to an article published by the BBC is on-topic, then a comment about the credibility of MBFC as a reply to a review published by MBFC is also on-topic.

From their methodology:

Our methodology incorporates findings from credible fact-checkers who are affiliated with the International Fact-Checking Network (IFCN). Only fact checks from the last five years are considered, and any corrected fact checks do not negatively impact the source’s rating.

Just like every good lie has a little bit of truth in it, MBFC wouldn't be able to spin its bullshit as well without usurping the credibility of real fact-checking organizations.

What an odd form for a mea culpa to take!

You seemed to care passionately about IFCN fact-checkers doing the fact-checking. It turns out that MBFC agrees with you. Your (feigned) concern has been completely addressed in just the way you'd hoped. A person making that argument in good faith might say, "Oh! Maybe this is a better resource than I thought it was," or maybe, "I should probably apologize to Rooki for harassing them about something I appear to have just made up." Instead you just spin it into some other nebulous bullshit and move the goalposts. If you're not careful, people might begin to suspect that you're starting with the conclusion and working backwards.

Sorry, no mea culpa. Let me elaborate. Van Zandt claims to value IFCN fact-checkers in his ratings, then he uses that laundered credibility to gatekeep minority and politically inconvenient voices. Here's a recent example brought to my attention.

It should be noted that although no non-partisan fact-checkers are listed on MBFC's site as raising concerns about The Cradle's credibility, Van Zandt has arbitrarily placed it in the "Factual Reporting: Mixed" and "Credibility: Medium" categories. The concerns he posits about The Cradle's "lack of transparency, poor sourcing," and one-sidedness clearly apply to the weird right-wing guy who makes these opaque decisions about journalistic value.

If IFCN fact-checkers have issues with sources he'd like to denigrate, he's happy to list them even if they've since been resolved. But they don't make up the central criteria for his 'methodology' as he'd like you to believe. Meanwhile he's free to make unreferenced claims about the credibility of others that uncareful readers take completely at face value.

All the concerns I have about The Cradle's credibility have been developed in spite of MBFC, which is the opposite of what you want if your goal is accountability and media literacy. And thanks to their reliance on this charlatan, LW!news have recently punted what I think is a valuable report.

Sorry, no mea culpa.

If you think being an unrepentant liar is good for your cred, fill your boots, I guess.

It should be noted that although no non-partisan fact-checkers are listed on MBFC’s site as raising concerns about The Cradle’s credibility, Van Zandt has arbitrarily placed it in the “Factual Reporting: Mixed” and “Credibility: Medium” categories. The concerns he posits about The Cradle’s “lack of transparency, poor sourcing,” and one-sidedness clearly apply to the weird right-wing guy who makes these opaque decisions about journalistic value.

'I don't understand how it works so it's stupid!'

  1. The Cradle is a rag that's been banned by Wikipedia for publishing conspiracy theories and for (gasp!) poor sourcing.
  2. If you had read their methodology, you'd know that MBFC wasn't being arbitrary, as "lack of transparency" and its impact are clearly defined:

A source is considered to lack transparency if it fails to provide an ‘About’ page or a clear description of its mission. Transparency is further compromised if the ownership of the source is not openly disclosed, including the identification of the parent company and key individuals involved. Additionally, the absence of information about major donors, funding sources, or general revenue generation methods contributes to this lack of transparency. It is essential for the source to at least disclose the country, state, or city of operation and the name of the person responsible (such as the editor). While providing a physical address is not mandatory, meeting some of these transparency criteria is important. Inadequate transparency typically results in the source’s factual reporting rating being reduced by one or two levels, depending on the extent of the shortfall.

Credibility Levels:

  • High Credibility: A score of 6 or above.

  • Medium Credibility: A score between 3-5 points. Sources lacking an ‘About’ page or ownership information are automatically rated as Medium Credibility.

  • Low Credibility: A score of 0-2 points. Sources rated as Questionable, Conspiracy, or Pseudoscience are automatically classified as Low Credibility.

This is from the report:

The Cradle lacks transparency as they do not disclose ownership. The domain is registered in the United States.

Who could've seen that rating coming?

Methodical is the opposite of arbitrary. The reason it seems arbitrary to you is that you don't understand it. As a bare minimum to be critical of MBFC you should understand how it works, understand their methodology, and probably have read their Wikipedia page. Bonus points for seeing what high quality research says about them (spoiler alert: it says you're wrong). You're demanding that people take very seriously your misinterpretations and assumptions about something you don't understand. How is that a reasonable request?

The tone of this content is super patronizing and toxic

Remove it.

No need for a bot. Obvious misinformation should be removed by the mods. Bias is too subjective to be adjudicated by the mods. Just drop it already. It's consistently downvoted into oblivion for a reason. The feedback has been pretty damn obvious. This whole thread is just because the mods are so sure they're right that they can't listen to the feedback they already got. Just kill the bot.

To clarify what MBFC considers "MIXED" factual reporting (the same rating they give known disinformation factory Breitbart):

Further, while The Guardian has failed several fact checks, they also produce an incredible amount of content; therefore, most stories are accurate, but the reader must beware, and hence why we assign them a Mixed rating for factual reporting.

They list like five fact checks, while The Guardian puts out basically quintuple that every day. And moreover, this is the sort of asinine nitpick that they classify as a "fact check".

"Private renting is making people ill." "Private renting is making people ill, but maybe this happens with other housing situations too, we don't know, so we rate this as false."

MBFC's ratings for "factual reporting" are a joke.

This is my problem with MBFC, and it seems to consistently get ignored by the admins and mods pushing for the bot.

MBFC seems to rate every even slightly "left wing" news source as "mixed factual reporting" for absolutely any excuse whatsoever. The fact that they deem The Guardian as reliable as Breitbart should really tell you something.

It has been helpful and we would like to keep it around in one form or another.

Bull fucking shit. The majority of feedback has been negative. I can't recall a single person arguing in its favor, but I can think of many, myself included, arguing against it. I hope you can find my report of one particularly egregious example, because Lemmy doesn't let me see a history of things I reported. I recall that MBFC rated a particular source poorly because they dared to use the word "genocide" to describe what's going on in Gaza. Trusting one person, who clearly starts from an American point of view and has a clearly biased view of world events, to be the arbiter of what is liberal or conservative, or factual or fictional, is actively harmful.

No community, neither reddit nor Lemmy nor any other, has suffered for lack of such a bot. I strongly recommend removing it. Non-credible sources, misinformation, and propaganda are already prohibited under rule 8. If a particular source is so objectionable, it should be blacklisted entirely. And what is and is not acceptable should be determined in concert with the community, not unilaterally.

Edit: And another thing! It's obnoxious for bot comments to count toward the number of comments as shown in the post list. Nobody likes seeing it and thinking "I wonder what people are saying about this" and it's just the damn bot again. But that's really a shortcoming in Lemmy.

Yes! The mods starting out the discussion with their preferred outcome is so incredibly telling. This is a tool to reinforce the mods' bias, deliberately or not.

I will start by saying that I feel like we are trying to address the criticism in your first paragraph with these changes. That being said, thanks for your feedback. I particularly like the comment you shared under the “edit,” because I hadn’t seen that sentiment shared before (not saying nobody else had that issue, just appreciating you for contributing that and challenging me to think more about how we execute things).

I also would like it to not add to the comment count. I am now getting inured to comment counts of “1”.

I generally like the bot and its intentions, but too often its ratings don't match my own perception.

Just as a point of clarification, there is certainly not a community consensus among the feedback.

While you are absolutely correct in stating that there are vocal members of the community opposed to it in any form, there is also a significant portion of the community that would prefer to keep or modify how it works. The mod team will be taking all of these perspectives into account. We hope that you will be respectful of community members with whom you disagree.

I haven't seen any strong arguments for keeping it up.

Edit: clearly there are none.

The bot is basically a spammer saying "THIS ARTICLE SUCKS EVEN THOUGH I DIDN'T READ IT" on every damn post. If that was a normal user account you'd ban it.

This thread is a mess.

users: "bot is awful"

mod: "ok so it's not terrible so that's good"

Yeah lol, I can't help but laugh every time I see the mod's replies in this thread. I don't understand his train of thought at all; I don't know if he's in denial, or if he was surprised most people didn't end up aligning with his bias and is in damage control, replying nonsense.

The thing I don't get is why are they so insistent on this change that is so overwhelmingly not wanted?

Who is pushing this and why are so many mods backing it?

I apologize if this thread was misunderstood. Perhaps I was not clear that this was meant for improvements; it is not a vote on removal. Should that vote ever happen, the post would be clear about that.

All of my questions were only seeking to gain more information about people’s feelings. I apologize if it came off as a promise to enact anything in particular or an endorsement of any particular stance on the bot.

The problem is with MBFC, and you have no control over them. Therefore, the only way you can improve the bot is to remove it entirely.

Remove MBFC? Yes, that’s part of the discussion and the point of this post. The struggle seems to be over the API, but I’d love to have suggestions to bring to the rest of the team. As I have said multiple times, it is not my decision to remove the bot, I’m simply here for suggestions that the rest of the team would be open to.

Whose decision is it, then?

It’s a team decision and I am the newest mod on the team. The main developer of the bot is an admin, who ultimately would be the one to implement any changes.

So it is in part your decision. I'm pretty sure the admins aren't forcing you to have it here.

During your next shift, you should do something that nobody on your team or your supervisor wants you to do. Lmk how that goes for you

I'm concerned about why the team wants to force something on the users that is objectively harmful. What is the justification?

Yes, you've been very clear from the start that you do not want to remove the bot. However, the feedback you've consistently received is that it provides no benefit, is misleading, reductive, and the best improvement you could make would be to remove it. You don't seem willing or able to respond to that.

Correct, I am unable to supersede admin decisions as a mod. I am here collecting feedback on improvements. Again, I am looking for feedback on improvements, as the decision to remove the bot is not in my control.

In literally every thread I've seen it post in, it gets downvoted to hell.

  1. Please, move the bias and reliability outside of the first accordion/spoiler. This is the sole purpose the bot was meant to serve. If we can't see that at a glance, it's bad. I don't see how these few words are "too long" either. I feel like a lot of the space could be cleared by turning the "Search Ground News" accordion into another link in the footer.
  2. While I personally don't see the point of the controversy, it wouldn't be too hard to manually enter Wikipedia's Perennial Sources list into the database that the bot references, especially with MediaWiki's watchlist RSS feed (see the sketch after this list). This would almost certainly satisfy the community.
  3. Open source the database and the bot. Combined with #2, this could also offer an API to query Wikipedia's RSP for everyone to use in the spirit of fedi and decentralization.
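To illustrate the second suggestion, here is a rough sketch of pulling that page through the standard MediaWiki API (this is not the bot's actual code; parsing the table of sources out of the wikitext is the real work and is only hinted at in the comments):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
RSP_PAGE = "Wikipedia:Reliable_sources/Perennial_sources"

# Wikimedia asks bots to send a descriptive User-Agent; this one is a placeholder.
HEADERS = {"User-Agent": "lemmy-source-info-bot/0.1 (example contact)"}


def fetch_rsp_wikitext() -> str:
    """Fetch the raw wikitext of the Perennial Sources page."""
    params = {
        "action": "parse",
        "page": RSP_PAGE,
        "prop": "wikitext",
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["parse"]["wikitext"]


if __name__ == "__main__":
    wikitext = fetch_rsp_wikitext()
    # The per-source entries live in a large wikitable; turning those rows into
    # database records (source, status, notes) is the part that needs care, and
    # periodic re-fetching would keep the local copy in sync with Wikipedia.
    print(f"Fetched {len(wikitext)} characters of wikitext")
```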
Open source the database and the bot.

Yes. A certain amount of my complaint about the MBFC bot is not that it's a bad idea per se; it's just that the database and categorizations are laughably bad. It puts Al Jazeera in the same factual classification as TASS. It lists MSNBC as factually questionable, and then when you look at the actual list, a lot of the entries are MSNBC getting it right and MBFC getting it wrong. It might as well be retitled "The New York Times's Awful Neoliberal Idea of Reality Check Bot". (And I'm not talking about the bias ranking -- if that one is skewed, fine, but they claim things are not factual if they don't match the appropriate bias, and their bias is unapologetically center-right.)

You can't set yourself up to sit in judgement of sources that write dozens of articles every single day about unfolding world events where the "objectively right" perspective isn't always even obvious in hindsight, and then totally half-ass the job of getting your basic facts straight about the sources you're ranking, and expect people to take you seriously. I feel like mostly the Lemmy hivemind is leaps and bounds ahead of MBFC bot at determining which sources are worth listening to.

it wouldn't be too hard to manually enter Wikipedia's Perennial Sources list into the database that the bot references

FUCK FUCK FUCK YES

This is an actual up-to-date and very extensive list that people who care bother to keep up to date in detail (even making distinctions like "hey this source is ok for most topics but they are biased when talking about X, Y, Z"). This would immediately do away with like 50% of my complaint about MBFC bot.

For example, if we retain MBFC, the layout could look something like this:

Rolling Stone Bias: Left, Credibility: High, Factual Reporting: High - United States of America

MBFC report | bot support | Search topics on Ground.News

in which "Rolling Stone" is linked to the Wikipedia article.

With RSP, it could look something like this:

::: spoiler Rolling Stone is generally reliable on culture
There is consensus that Rolling Stone has generally reliable coverage on culture matters (i.e., films, music, entertainment, etc.). Rolling Stone's opinion pieces and reviews, as well as any contentious statements regarding living persons, should only be used with attribution. The publication's capsule reviews deserve less weight than their full-length reviews, as they are subject to a lower standard of fact-checking. See also Rolling Stone (politics and society), 2011–present, Rolling Stone (Culture Council).
:::

::: spoiler Rolling Stone is generally unreliable on politics and society, 2011–present
According to a 2021 RfC discussion, there is unanimous consensus among editors that Rolling Stone is generally unreliable for politically and societally sensitive issues reported since 2011 (inclusive), though it must be borne in mind that this date is an estimate and not a definitive cutoff, as the deterioration of journalistic practices happened gradually. Some editors have said that low-quality reporting also appeared in some preceding years, but a specific date after which the articles are considered generally unreliable has not been proposed. Previous consensus was that Rolling Stone was generally reliable for political and societal topics before 2011. Most editors say that Rolling Stone is a partisan source in the field of politics, and that their statements in this field should be attributed. Moreover, medical or scientific claims should not be sourced to the publication.
:::

RSP listing | bot support | Search topics on Ground.News

Both examples with everything necessary linked, of course

We are looking at a composite rating of many sources.

Would you open source that on launch? What about the nuances of reporting from the same source on different topics?

The database will be open source day one. As to the other question: still in the planning phase.

For 3 they said they'd release the code when it was announced, but have been completely silent since. Maybe it'll be public when sublinks goes live lol

The bot is basically loud as fuck in a way that disrupts the comment feed.

Imagine how comments should create and add to a conversation. Imagine how various lemmy clients feed or service that conversation….

Now imagine how a double dropdown big as fuck post says “fuck you” to that conversation.

Just please consider how the form of your shit can be just as imposing as the content, which I really appreciate.

Yet somehow your posts always have me thinking “shut the fuck up” which seems antithetical to building a community.

I'm gonna be Left-Center on this with reliable credibility that the bot is useless at best.

It is reporting on the source, not the content, of what is posted which is already going to be a problem for discourse.

If there are media sources that are known or proven to be a problem, I would find it preferable the bot just alert that and ignore anything else.

I appreciate the joke lol. But on a serious note, it sounds like you’re saying it’s not actually 100% useless, just that it’s being deployed too widely. Any specific suggestions on what the bot should say on those questionable sources?

My main issue is that it doesn't provide any real value.

If I see a Guardian/BBC news article about international events, I'll give it a lot of trust. But when it's talking about England, my eyebrows are raised. Calling it Left/right/center doesn't help a reader understand that.

Worse is hot garbage like The Daily Mail. They do no fact checking and provide no real journalism. It means nothing to me what it aligns to.

Then the bottom of the barrel is some random news site that was spun up a month ago like Freedom Patriot News. Of course we know where it lands in the political spectrum. But it's extreme propaganda.

The challenge here is that trust has become subjective. Conservatives don't trust CNN. Democrats don't trust Fox News. It becomes difficult to rate the quality of the organization in a binary way.

Current ownership and governance of the media outlet, generally speaking. Noting if an outlet is state-owned or publicly traded, etc., might help.

Does the bot even tell the difference between an opinion piece and investigative journalism?

If a source is a proven misinformation generator then noting the proof with direct links to evidence, cases, rulings, etc. However those sources tend to disappear quickly and are constantly being generated. It is whack a mole and generates an endlessly outdated list.

The problem is it likely isn't any information a bot can just scoop up and relay, and instead requires research and human effort.

MBFC does link to articles that are examples of misinformation. And no, the bot cannot tell if something is an opinion piece or not.

Interesting suggestion about state-owned media, hadn’t heard that before. Thanks for that

Get rid of it entirely. In another one of your comments you acknowledged that it "seemed" like the bot is an extension of the mods telling everyone else what to think. You are close. It doesn't seem that way, it is that way.

Also, the bot is annoying AF. If you really are in love with it so much, make it an opt-in service and it can DM all the psychos who want to be spammed by it.

I wish the comment count on a post didn't include MBFC (or maybe bots in general).

I’ll be honest, that’s probably outside of the scope of what we can do for now. It’s definitely valuable feedback in general and I wish I could offer some kind of solution but that’s probably even outside the control of the instance admins.

Someone can feel free to correct me if I’m wrong!

On this topic the information would probably be ideally delivered by flairs/post tags which lemmy doesn't support yet (AFAICT).

Simply having (bias:left) (factuality: high) would be much better than a whole comment.

I blocked it straight away so I don't have a dog in this fight but I'm instantly skeptical of any organization that claims to be the arbiter of what is biased and to what degree.

@jeffw@lemmy.world Why did you stop replying to posts here? Most people are telling you the bot is bullshit. You stopped commenting in this thread while being active elsewhere; are you going to take action or not?

I’m not the admin who created the bot. I’m a mod who is collecting feedback on behalf of the entire mod team.

Just to be perfectly clear: because I am the face of this feedback, you can feel free to say whatever you want to me. It’s odd that you seem to harbor ill feelings towards me in particular just because I pushed for collecting user feedback on this issue.

you can feel free to say whatever you want to me

my cat likes to sit in my window seal but I accidentally knocked the curtain rod down. She has been laying in the bunched up curtain that's laying on the floor, I think she likes it better than the window seal. However the window is right out the front of the house so anytime I come home after a long day I see her watching me roll up the driveway and it makes me feel good. I don't know if it would be best to move the bunched up curtain back to the window or let her stay on the floor and not see her when I get home :(

Putting aside the bone apple tea moment... I had to replace blinds because of an overzealous dog who loved watching the street. I just had to permanently keep the blinds open for him. Maybe you could do some sort of compromise solution like that? Ig what I’m saying is leave the curtains where they are and buy new ones?

How do you rate bias without bias? What is the bot's definition of left or right? How did you build your ratings?

It's all bullshit, man.

Remove it please. It's an obtrusive advertisement for Ground News.

It's incredibly annoying to see comments: 1, only to click the post to see an ad. It makes me less inclined to interact with Lemmy at all. It's the same kind of crap that ruined Reddit.

We want to keep it in some form. Would you prefer not having the Ground News link?

The overwhelming majority of comments I'm seeing indicate they'd like to see it gone. Why are you opposed to listening to the people who create and consume all of the content in this space?

We want to keep it in some form.

There's your problem.

You're not really looking for feedback if you've already made up your mind. Stop pretending to listen to the community if you're ignoring the countless blocks and downvotes. That's your feedback right there

How about you remove the bot and then fix whatever problems you have without doubling down on the bot solution? If you want community feedback on mod overburden, I'm sure people will be willing to help with that. But stop forcing the bot.

Although this is a fair point, I think that there is a difference between saying, "the moderation team finds it useful and would like to keep it," vs "We have already decided that we are keeping it no matter what."

This discussion is to help us guide the community's next steps from an informed position.

Which of those are you trying to say? Because it very much comes across as the latter when you say stuff like "deleting the bot is not an option" and "off the table".

If you guys find it helpful from a mod perspective, it might be more appropriate to develop some mod tooling, like a browser extension, rather than pushing it on the users.

Edit: so it is the latter, then, got it.

Can you elaborate on how it's useful?

Just have it notify the mod team when a post appears from a questionable source.

That would certainly be preferable. I don't think we should be advertising.

Beyond that, it would be much better if there were a way to not have the bot be counted as a comment. Comments are what humans do. They're meant to be interacted with. I can't interact with a bot, other than rolling my eyes at it.

How much more feedback do you need to gather on the subject to understand that a bot with a garbage datasource is no use to anyone? Even opening this thread is an insult and a sign of how little you recognize and care for your community. Remove the shit bot already instead of fishing for excuses to keep it active.

Exactly! What a waste. They already got their feedback

Honestly, have a look through the whole thread. There are comments from those who, like yourself, oppose it in any form. However, please also be respectful of the many community members who are saying that it is useful or could be made useful.

The mods will be taking all of these comments into account.

The mods aren't neutral though. They already start with a pretty strong "the bot is here to stay" which is borderline insulting to the community. Then they're asking for ideas to make it better, which already presumes the idea is feasible or a good idea in the first place. Sure, I would make it less spammy, put the details behind a link, etc etc, but they're already committed to the bot as a solution to their stated problem of overloaded mods. Well that could be solved in much better ways. All the energy going to this controversial bot is adding to the mod overburden!

Yes, MBFC is extremely American in its definitions of left and right. A less US-centric rating would be much preferred.

I generally like the idea otherwise.

Any suggestions of alternative sources we could rely on?

I'm sorry, but the sole concept of the bot is bullshit, and as many have said already, the idea is biased per se. I wish I lived in the same world as MBFC, where it seems like all media is left-center.

If anything, what would be needed is a bot that checks whether the information in an article contains any known misinformation or incorrect facts. And that would be extremely hard to maintain and update, as a lot of news is posted before any fact checking can be done.

I think this tool, while probably well-intended, only adds to the polarization problem of the world.

Can you elaborate? Like, do you think the bot would be better if it didn’t label things as “left” or “right” (ie: remove the bias rating) or do you think the reliability/credibility ratings have the same issue?

Here's the comment reply from when I first asked what was wrong with MBFC. Gotta say, I agree with that comment. I'm surprised more people haven't posted similar examples here.

https://lemmy.dbzer0.com/comment/12328918

Edit: here is the text from the linked comment.

I'm just gonna drop this here as an example:

https://mediabiasfactcheck.com/the-jerusalem-report/

https://mediabiasfactcheck.com/the-jerusalem-post/

The Jerusalem Report (Owned by Jerusalem Post) and the Jerusalem Post

This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight, provide only the most favorable context imaginable, yadda yadda. By no stretch of the imagination would these publications be considered unbiased as sources, yet according to MBFC they're near perfect.

This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight

You keep repeating this lie.

From their report on the Jerusalem Post:

Overall, we rate The Jerusalem Post Right-Center biased based on editorial positions that favor the right-leaning government. We also rate them Mostly Factual for reporting rather than High due to two failed fact checks.

Until 1989, the Jerusalem Post’s political leaning was left-leaning as it supported the ruling Labor Party. After Conrad Black acquired the paper, its political position changed to right-leaning, when Black began hiring conservative journalists and editors. Eli Azur is the current owner of Jerusalem Post. According to Ynetnews, and a Haaretz article, “Benjamin Netanyahu, the Editor in Chief,” in 2017, Azur gave testimony regarding Prime Minister Benjamin Netanyahu’s pressure. Current Editor Yaakov Katz was the former senior policy advisor to Naftali Bennett, the former Prime Minister and head of the far-right political party, “New Right.”

In review, The Jerusalem Post covers Israeli and regional news with strongly emotionally loaded language with right-leaning bias with articles such as this “Country’s founding Labor party survives near extinction” and “Netanyahu slams settler leader for insulting Trump.” . . . During the 2023 Israel-Hamas conflict, the majority of stories favored the Israeli government, such as this Netanyahu to Hezbollah: If you attack, we’ll turn Beirut into Gaza. In general, the Jerusalem Post holds right-leaning editorial biases and is usually factual in reporting.

They literally mention their bias over and over. Center-right is consistent with how they're rated everywhere. Allsides rates them center with the note that the community thinks they lean right. Wikipedia rates them as centre-right/conservative. Your "VEEEERY slightly" bit is pure fabrication. They specifically note that they're a highly biased source on the conflict in Gaza.

There's no such thing as an objective left or right. It's a relative scale. You shouldn't have a bot calling things left or right at all.

Also don't push Ground News. They already get plenty of press from their astroturfing.

This. The bot is effectively just propaganda for the author's biases.

Do you think aggregating ratings from multiple fact checkers would reduce that bias?

Keep in mind that if you base your judgements of left bias and right bias on the American overton window, that window has been highly influenced by fascism over the last 10 years, and now your judgement is based on the normalisation of fascism, which your bot is implicitly accepting. That's bad. If you're going to characterise sources as left or right in any form, you need to pick a point that you personally define as center. And now your judgements are all going to implicitly push people towards that point. You could say that Karl Marx is the center of the political spectrum, or you could say Mussolini is. Both of those statements are equally valid, and they are as valid as what you are doing now. If you don't want to push any set of biases, you need to stop calling sources left and right altogether.

No. The problem with your current bot isn't that the website authors have a particular axe to grind, it's that they're just in a rush and a bit lazy.

This means that they tend to say news sites which acknowledge and correct their own mistakes have credibility problems, because it's right there - the news sites themselves acknowledged issues. Even though these are the often most credible sites, because they fix errors and care about being right.

Similarly the whole left-right thing is just half-assed and completely useless for anyone that doesn't live in the US. While anyone that does live in the US probably already has an opinion about these US news sources.

Because these are lazy errors, lots of people will make similar mistakes, and aggregating ratings will amplify this, and let you pretend to be objective without fixing anything.

Adding more biases doesn't remove the initial bias.

Hm... At some point a human will have to say "Yes, this response is correct." to whatever the machine outputs. The output then takes on the bias of that human. (This is unavoidable, I'm just pointing it out.) If this is really not an effort in ideological propaganda, a solution could be for the bot to provide arguments rather than conclusions. Instead of telling me a source is "Left" or "Biased", it could say: "I found this commentary/article/website/video discussing this source's political leaning (or quality): Link 1 Link 2 Link 3"

Here you reduce bias by presenting information, instead of conclusions, and then letting the reader come to their own conclusions based on this information. This not only is better at education, but also helps readers develop their critical thinking.

Instead of... You know, being told what to think about what by a bot.

Honestly, the first time I had heard of Ground News was in a discussion about implementing it with the bot. Do you have any thoughts on alternatives or would you prefer that bit just removed from the bot’s comment?

Someone else in this thread said to link to media literacy resources and I agree with them.

I'm frankly rather concerned about the idea of crowdsourcing or voting on "reliability", because - let's be honest here - Lemmy's population can have highly skewed perspectives on what constitutes "accurate", "unbiased", or "reliable" reporting of events. I'm concerned that opening this to influence by users' preconceived notions would result in a reinforced echo chamber, where only sources which already agree with their perspectives are listed as "accurate". It'd effectively turn this into a bias bot rather than a bias fact checking bot.

Aggregating from a number of rigorous, widely-accepted, and outside sources would seem to be a more suitable solution, although I can't comment on how much programming it would take to produce an aggregate result. Perhaps just briefly listing results from a number of fact checkers?

I second this. This community is better than most social media, but it's still that, and social media popularity is pretty bottom of the barrel as a means of determining accuracy. Additionally, that'd just open it up to abuse from people trying to weight the votes with fake accounts, scripts, whatever.

That’s fair. One idea could be a separate “community rating” and one that is more professional. Think Metacritic, RottenTomatoes, etc

Ban it, and all bots honestly. I hate seeing a comment on a thread just to find out it's a bot. If use like this continues, we might see a fresh post with 6 new comments, all of them bots that don't add to the discussion.

We need a bot that will tell us what the bias of the bot is.

It calls the Associated Press and Reuters leftist. That’s all you need to know about the bias of the bot.

I think the problem is with the whole concept. Most news organizations have more than one person working there, so unless the bot is measuring the bias of individual journalists it seems really silly. It presupposes that there's someone at the top of a large news organization dictating to the staff to make an article "more left" or "more right" or whatever. Sure at some news organizations (like FoxNews) that may happen, but I doubt that happens at AP or Reuters and many other news organizations.

I've seen many articles where the headline was incredibly biased (to get clicks I guess?) while the article was not. Clearly the editor that wrote the headline had more bias than the person that wrote the article who might've been a freelancer.

And many news articles don't have any bias at all. "Earthquake in California" is that a left or right biased article? I think it's neither. Even a quote from a politician, Kamala Harris said "XYZ" or Donald Trump said "ZYX" is it biased to report on what people said? It's a fact they said those words, is it biased to tell people what someone said? I think it's just treating people like adults who can read what a person said and make their own conclusions.

At the end of the day people have to learn how to spot bias themselves, there's no quick-fix-life-hack-work-around to skip having to build some experience with media literacy. Ground News or a bot or whatever will have their own biases, and if people are trusting someone on the internet to tell them what is biased, they've failed at media literacy from the get go.

It called a lot of tabloid trash "left".

These tabloids chase money and will flip for whatever gets them eyeballs.

I'm not sure what to do here. On my mobile device the compacted media bias fact check post still takes up 50% of my phone screen.

How about a post tag, if we get a tagging system in Lemmy, instead of a whole long comment?

Maybe the bot could just post a one line summary with a link to more information?

Thanks for the feedback. Can you elaborate a bit about the 50% of your screen thing? Is it the text itself, or is the issue that the app provides links at the bottom of the comment? I’m thinking of my experience on Voyager, where the links are summarized at the bottom of each comment, which does lead to a decent amount of screen being taken up. Would it be better if there weren’t any links?

yep I'm using Voyager on my iPhone. Maybe a super short summary without links. People could open the bot's profile and look at the bot's posts (not comments) if they want to dig deeper to understand a source.

Interesting, so you think the bot should make posts too? Like, a post for each source with a summary of relevant info? Just making sure I understand what you mean

Yeah. It's an idea for a way to create a user repository within Lemmy that could be edited by the bot as needed. I'm sure there are better ways.

Tell the bot to never be the first comment. I find it very frustrating when I see "a comment on this post" and it's just the bot. I'm here to read what people have to say so it is very annoying when I think someone said something and it's just the bot.

There was even a front page meme about this last year, but with another noisy bot. Lemmy doesn't bury new comments like Reddit does, so there's no real penalty to making the bot wait.

Just to clarify, are you saying the bot should be triggered after someone else comments on a post?

Yeah. Maybe after a few tbh. Save on API calls
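As a sketch of that gating logic (the threshold is arbitrary, and the list of comment authors would come from whatever Lemmy API call the bot already makes):

```python
MIN_HUMAN_COMMENTS = 3  # arbitrary threshold; tune to taste
BOT_ACCOUNTS = {"MediaBiasFactChecker@lemmy.world"}


def should_post_rating(comment_authors: list[str]) -> bool:
    """Only fire once a post has attracted a few non-bot comments."""
    humans = [a for a in comment_authors if a not in BOT_ACCOUNTS]
    return len(humans) >= MIN_HUMAN_COMMENTS


# The bot would re-check on new-comment events instead of commenting the moment
# a post is created, which also skips MBFC API calls for posts nobody engages with.
print(should_post_rating(["someone@lemmy.world"]))  # False: hold off
print(should_post_rating(["a@x", "b@y", "c@z"]))    # True: go ahead
```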

You don’t need to manufacture an authoritative source of truth as you the mods see it.

Just write down what you see as the truth and that you’ll ban anyone who speaks out against it.

Stop trying to build a machine to do the work of creating an echo chamber for you.

Always appreciate feedback! As a .ml user, I’d love to hear more about your thoughts on echo chambers

Are you removing the bot, or are you still following this stunt as if you didn't have enough replies?

My replies were good faith discussions with the users (except for the one joke above). I don’t control the bot but the mod team has been discussing this. By all means, if you want to blame the mod who took the initiative to solicit feedback, go ahead though. It’s worth noting that I can’t force admins to act though, only supply evidence.

Edit: to be clear, this post was always meant for constructive feedback to improve the bot.

The feedback has been loud and clear: the bot is shit and most people want it gone. Sorry, but most of your replies have been very targeted and the good faith more than debatable; they go from naive to outright omitting the point most people are trying to explain to you. Edit: also to be clear, I don't care if you are not the owner of the bot; as a mod you can ban it.

Yes, I could do whatever I want as a mod, if I want to be removed from the mod team within minutes.

What can i say? I really admire your writing comprehension

If news@world had rules that reflected a coherent politics it could be political or even propagandistic.

Because no such rules exist to direct action and development, ideas like the fact checker bot crop up. In lieu of direction, the fact checker bot reflects a laundered western liberal political line back onto the news@world community.

An echo chamber is not an area where everyone says the same things; it’s an environment where a certain type of wave (or just all waves) is reinforced due to structural elements of the chamber.

By using the fact checker bot to do the work of policing speech, you have created a structural element which reinforces certain kinds of speech.

It’s a component of an echo chamber in the metaphor.

That’s significantly different from taking the more difficult route of determining the news@world mod team’s political line, struggling internally and externally with its contradictions, and acting in ways that reflect it, because the latter requires that the mod team use judgement rather than just act on voices that are not reinforced by the built structural elements of the news@world community.

The bot has no purpose. Either an article can be posted or it can't; there's no reason for the bot prompt. It just looks like thought policing using a bias checker which 'coincidentally' prefers whatever the current Democrats' position is.

I can hardly imagine the bot stopping any fake news from being posted either.

While I think it's important to have some sort of media bias understanding, I dislike the bot being the first (and sometimes only) comment on a post. Maybe it should be reserved only for posts that are garnering attention and that have a definitive media bias answer (the 'no results' comments are just damn annoying to see).

It also has the knock-on effect of boosting the post higher in whichever sorting algorithm users are using. So it often feels artificially controlled whenever something has 100+ upvotes and fewer than 10 comments, knowing the first comment is always a bot. Like, would it be fair for me to have 10 bots that comment factual information on posts I personally like, just to boost their visibility?

Unfortunately the bot is fatally flawed as long as it's just repeating MBFC information. I would be interested in a community program but I have the same end worry. What's the risk that we create an echo chamber? It might be better than an echo chamber based on MBFC ratings but it's still an issue worth worrying about.

That said I'm down to try a community approach.

I wound up blocking your shitty bot because it spammed pretty much every post.

Okay. This post is an attempt to solicit constructive criticism/feedback. Do you have anything more concrete to share?

Yes, maybe don't have the bot be the first and only response on every single post. Let posts gain the tiniest bit of traction first. It's beyond annoying to see an article, go to the comments, and find that your bot is the only response.

A way to improve the MBFC bot would be to delete it.

Failing that, a way to improve the community would be to ban it.

Would you like to elaborate?

It adds no value to the posts, incites arguments (how is that helping with modding? Why do the mods need to announce MBFC’s rating on every post?), and outsources critical thinking to a site that has its own biases while maintaining a veneer of “neutrality”. The ratings often have no justification, making them little better than some dude’s opinion. I can keep going, but I think that covers most of it.

Would you like to read 99% of the replies to this thread tbh?

Probably not an issue with the bot itself, but just FYI, it appears the spoiler tags don't work on Boost.

Yeah… there was a whole big to-do about this. One dev actually quit (can’t remember which one) because it was publicly noted that their app “scored” lower in terms of feature implementation. But feedback has been made available for app developers.

Addressing the Overton window issue is the main fix I would hope for.

The proposed solution of a home-brewed open-source methodology of determining bias without the Overton influence would be a very welcome improvement in my opinion.

Addressing the Overton window issue is the main fix I would hope for.

This is far and away the most frequently mentioned issue in posts I've seen. It's also the one I would like to see addressed.

I don't know enough to know how accurate this chart actually is, but I've seen it tossed around plenty:

https://guides.library.harvard.edu/newsleans/thechart

https://libapps.s3.amazonaws.com/accounts/56624/images/Media-Bias-Chart-12.0_Jan-2024-Licensed-scaled.jpg

Why does that image only have the Sun and the Rebel from Canadian media? Both are given more credibility than they deserve; the Rebel in particular has a history: a bunch of white supremacists and alt-right personalities were or are still involved, and the publication absolutely stokes hate and fear.

Edit: I'm still at a loss, why those? The Globe and Mail, Maclean's, the Toronto Star, the National Post, and the CBC all have better reputations domestically (though the National Post and the Sun are a circle these days, and most print media is owned by American hedge funds, so...), and you're far more likely to actually get the news instead of opinion masquerading as news in one of those.

MBFC and Ad Fontes are both part of the same grift, to artificially raise the value of right-wing journalism, while artificially denigrating left-wing journalism, so their maps of media come out looking like a horseshoe with the apex dominated by corporate advertising conglomerates that use journalism as their hook.

The CEOs of conglomerates will happily fund this propaganda, and a surprising number of people will pay good money to have the 'horseshoe theory' lie repeated back to them.

Credibility isn't subjective. It should be a hard value.

Orientation is indeed subjective and, unless it's at the extremes, should (imo) not be defined.

How about just finally making the bot open source and let people comment or contribute there?

Although I do see that the bot has a very slight right-wing bias, I like it. It prevents the normalization of the use of literal propaganda outlets as news sources.

I have a suggestion that might be a good compromise.

The bot only comments on posts that are from less factual news sources or are from extreme ends of the spectrum.

On a post from the AP the bot would just not comment.

On a post from Alex Jones or RT the bot would post a warning.

That way there is less “spam”, but people are made aware when misinformation or propaganda is being pushed.

Also, with such a system, smaller biases are less relevant and therefore become less important.
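
As a rough illustration of this compromise, a decision function along these lines could decide when the bot speaks up. The rating labels loosely mirror MBFC-style categories, but the exact strings, the thresholds, and the `bot_action` function are assumptions for the sketch, not the bot's actual data model.

```python
# Hedged sketch: stay silent on reputable, roughly centrist sources and
# only post a warning for low-factual or extreme-bias ones.

EXTREME_BIAS = {"extreme-left", "extreme-right", "conspiracy-pseudoscience"}
LOW_FACTUAL = {"low", "very low", "mixed"}  # grouping "mixed" here is a choice

def bot_action(bias: str, factual_reporting: str) -> str | None:
    """Return a warning string for dubious sources, or None to post nothing."""
    if factual_reporting.lower() in LOW_FACTUAL or bias.lower() in EXTREME_BIAS:
        return ("Heads up: this source has a poor factual-reporting record "
                "or sits at an extreme end of the bias spectrum.")
    return None  # e.g. an AP article would get no bot comment at all
```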

I don't trust MBFC to tell me anything useful about left-leaning sources, or discussion about the Israel-Palestine conflict, but if a right-biased credibility gatekeeper tells me a site I've never encountered before is far-right, I do consider that useful.

This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives.

Then maybe it can be an internal thing only. Let people do their own critical thinking. I believe that if you're on Lemmy, you can make an informed decision.

I hate that I have to expand the section to see the rating. If that could be fixed, it'd be better.

Please God no

Edit: Ooooh, yeah putting the rating outside the spoiler tag sounds great. I thought they were talking about taking away the spoiler tag. My bad.

Personally I'm in favor of the bot. One complaint I've seen that I agree with is that it doesn't need to float high up in the comments. If it was simply made to not upvote itself, it would stay nearer to the bottom naturally, which I think would be preferable.

Bias ratings will always be biased, so aggregating them, or briefly presenting multiple sources in a single small post, would work best.
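
A small sketch of what that aggregation might look like, assuming each rater's bias call has already been normalized onto a common numeric scale (the rater names and the -2..+2 scale here are placeholders):

```python
# Combine several raters into one short line instead of one long comment.
from statistics import mean

def aggregate_bias(ratings: dict[str, float]) -> str:
    """ratings maps rater name -> bias score on a -2 (left) .. +2 (right) scale."""
    score = mean(ratings.values())
    label = "left" if score < -0.5 else "right" if score > 0.5 else "center"
    raters = ", ".join(sorted(ratings))
    return f"Aggregate bias: {label} ({score:+.1f}) per {raters}"

# Example: aggregate_bias({"MBFC": -1.0, "Ad Fontes": -0.5, "Wikipedia": 0.0})
# -> "Aggregate bias: center (-0.5) per Ad Fontes, MBFC, Wikipedia"
```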

Not directly related to the MBFC bot, but what's your opinion on other moderation ideas to improve the nature of the discussion? The Something Awful forums treat strawmanning as a bannable offense: if someone says X, and you say they said Y which is clearly different from X, you can get a temp ban. It works well enough that they charge a not-tiny amount of money to participate, and they've had a thriving community for longer than most existing social media has been alive. They're absolutely ruthless about it: someone who's being tricksy or pointlessly hostile with their argumentation style simply isn't allowed to participate.

I'm not trying to make more work for the moderators. I recognize that side of it... the whole:

This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

... makes perfect sense to me. I get the idea of mass-banning sources to get rid of a certain type of bad faith post, and doing it with automation so that it doesn't create more work for the moderators. But to me, things like:

  • Blatant strawmanning
  • Saying something very specific and factual (e.g. food inflation is 200%) and then making no effort to back it up, just, that's some shit that came into my head and so I felt like saying it and now that I've cluttered up the discussion with it byeeeeee

... create a lot more unpleasantness than just simple rudeness, or posting something from rt.com or whatever so-blatant-that-MBFC-is-useful type propaganda.

It’s tricky because we could probably make 100 rules if we wanted to define every specific type of violation. But a lot of what you’re talking about could fall under Rules 1 and 8, which deal with civility and misinformation. If people are engaging in bad faith, feel free to report them and we’ll investigate.

Hm

I can try it -- I generally don't do reports; I actually don't even know if reports from mbin will go over properly to Lemmy.

For me it's more of a vibe than a set of 100 specific rules. The moderation on political Lemmy feels to me like "you have to be nice to people, but you can argue maliciously or be dishonest if you want, that's all good." Maybe I am wrong in that though. I would definitely prefer that the vibe be "you can be kind of a jerk, but you need to be honest about where you're coming from and argue in good faith, and we'll be vigorous about keeping you out if you're not." But maybe it's fair to ask that I try to file some reports under that philosophy before I assume that they wouldn't be acted on.

Some of what you describe is likely against our community rules. We do not allow trolling, and we do not allow misinformation. We tend to err on the side of allowing speech when it is unclear, but repeat offenders are banned.

When you see these behaviors, please make a report so that we can review it. We cannot possibly see everything.

The bot is pretty accurate and the comments are already pretty short. I feel like if people don't like it they should just block it.

I appreciate the bot. I like to play a game of “guess what the bot will say” before checking. I usually win, but it’s cool to have.

I feel like bots on Lemmy get way too much hate in general. There aren't that many, and if you don't like them you can block this one or all bots. I for one find it useful as it is.

This only applies to beautiful geniuses that include MBFC links in their posts, but the bot probably doesn't need to include the MBFC entry for MBFC. It's pretty useless and that could free up a little space. And, hey, that's something people are pretending to care about, right?

We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source.

If you'd program something, perhaps this should be the start.
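
If something does get programmed, a first pass at the voting piece could be as simple as a quorum plus a median, so a handful of brigading accounts can't drag a source's rating around. Everything here (the scale, the quorum size, the function name) is an assumption, not an existing feature:

```python
# Sketch: turn user-submitted bias votes into an instance-level rating.
from statistics import median

MIN_VOTES = 25  # below this, show "not enough votes" rather than a rating

def instance_rating(votes: list[float]) -> str:
    """votes are user scores on a -2 (left) .. +2 (right) scale."""
    if len(votes) < MIN_VOTES:
        return "Not enough community votes yet"
    return f"Community bias score: {median(votes):+.1f} ({len(votes)} votes)"
```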

I noticed you got a couple downvotes, so this comment is more for the voters: if you have thoughts on this, please comment them so we can understand why you feel the way you do.

BTW, I really like the suggestion to just get rid of the bias rating from the bot's comments. That should be a lot less work than implementing a bias crowdsourcing system. Given limited volunteer time and all that.

Holy moly, people seem to really be upset with this bot. I like it because it can call out when someone is doing something shady with their news sources when people like me (that don't know news sources by heart) read a posting.

We have a lot of repeat users in here that I personally feel (and I could be wrong) have ulterior motives: foreign actors spreading misinformation and trying to sow division, and lots of other foreign and domestic actors who are obsessed with one thing and throw the baby out with the bathwater (for example, people obsessed with the Gaza and Israel war just being nasty in general because they're angry - I'm not saying that situation is not wrong and fucked, but this bot can help illuminate patterns in their behavior, which can help us regular people tag them accordingly as a single-issue participant so they are more informed when engaging that person).

My suggestion is to be very careful about crowd-sourcing the rating process. In nearly every post I go into, this bot is heavily downvoted. Rather than just simply blocking the bot, people are retaliating against something they don't agree with. At best, you would likely see that translate to your crowd-sourced ratings as well. At worst, you would see bad actors focused on division and misinformation making a fuckery of it all.

I'm not saying don't include the community, but brainstorm with this potential pitfall in mind.

I like this community, want to see it continue to be factually correct and fairly represented, and appreciate the mods and their ongoing challenges with the people who would seek to upset the apple cart at any opportunity.

I think the bot adds value and applaud the honest effort to make improvements.

The downvoting is for several reasons; Jeff laid it out well. The people who don't like its ratings, though, have a larger worry that blocking does not help with. If MBFC, and thus the bot, is biased, then the entire conversation is shifted around that bias. Blocking is useful if you find something an eyesore. It's not useful in fighting misinformation.

It’s fine to use MBFC as a tool when you are writing a comment calling out a bad source. You don’t need a bot for that.

The news source of this post could not be identified. Please check the source yourself. Media Bias Fact Check | bot support

Okay, so maybe we don’t need a comment if it’s a meta post or a mod announcement. Thanks for your inadvertent feedback, bot!

It also does this with a bunch of weird little local newspapers etc. which I've never heard of, which is like the one time I actually want it to be providing me with some kind of frame of reference for the source. MSNBC and the NYT, I feel like I already know what I think about them.

Yeah, it’s tricky because who reviews those small guys? Granted, most of them are probably owned by a giant like Gannett, but that doesn’t mean we can just apply a rating from one small Gannett-owned paper to another. We’d like there to be some way for users to share their feedback/ratings on those small guys. But it’s also true that some people will create a news site and try to share links on here to promote their new website, and that’s typically just spam bots.

It's this uninvited commenting on the bot's part that has me downvoting it. It's presenting itself as an authority here. If a user in the comments called the bot to fact check something and the bot did a bad job, I'd just block the bot. I'd even be able to look over that user's history to get an idea of the bot's purpose. But this bot comes in and says "here's the truth", then spits out something I'd expect to see on Twitter's current iteration.

If the problem you're trying to solve is the reliability of the media being posted here, take the left/right bias callout away and find a decent database on news source quality. Start the bot's post out with resources for people to develop their own skill at spotting bad news content.

If the problem you're trying to solve is the visibility of political bias in content posted here, so that the downvote button isn't acting as a proxy for it, then adding a function for the community to rate left/right lean like Rotten Tomatoes sounds interesting, so long as you take the reliability rating out of the bot. You can't address both media reliability and political bias in one automated post. The NYT and NPR being too pearl-clutchy for my taste, and some outlet that exists only on Facebook having the same assumed credibility as the Associated Press, are wildly different issues.

*stupid phone, i'll live with the spelling but not repeated words.

I think the bot is incredibly useful. The criticism falls under a very specific group of users being very loud about their preferred source not ranking the way they expect.

Linking additional sources will improve it. Wikipedia maintains an active list and has an incentive to do so. Personally, I'd like to see a transparent methodology applied to a source: number of articles retracted silently, corrections issued in last 30 days, etc.
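
One hedged way to make such a methodology concrete: score a source purely from observable counts over a rolling window. The weights and the window are illustrative guesses, not a published formula; the point is that every input is something anyone can recount.

```python
# Illustrative reliability score from countable events in, say, the last 30 days.

def reliability_score(silent_retractions: int,
                      corrections_issued: int,
                      failed_fact_checks: int,
                      articles_published: int) -> float:
    """Return 0..100, where higher means fewer observed problems per article."""
    if articles_published == 0:
        return 0.0
    # Silent retractions weigh heaviest; openly issued corrections are penalized least.
    penalty = (5 * silent_retractions
               + 3 * failed_fact_checks
               + 1 * corrections_issued) / articles_published
    return 100.0 * (1.0 - min(penalty, 1.0))
```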

That having been said, I'd rather see efforts invested in other areas than in inventing yet another "weighting" function for multiple ratings. Let us decide if MBFC is good enough or if we prefer Ad Fontes or Wikipedia or whoever. Give us two or three options and let us decide on our own.

The criticism falls under a very specific group of users being very loud about their preferred source not ranking the way they expect.

"Any opinions that differ from my own are simply invalid!"

It seems bizarre to me that the only user I have seen actually trying to provide constructive criticism for the bot so far in this thread is one who already likes it, especially when others instead advocate for things like the mods adopting a political stance and using mod powers to reinforce it.

I like the bot. It's valuable to have context for the organization pushing a story. I agree that others are reading too much from the orgs they like being labeled as biased. It's assumed a news source will have some bias, and trying to avoid acknowledging that is dangerous. The takeaway is simply to be wary of any narrative being pushed (intentionally or not) by framing or omission, and get news from a variety of sources when possible. Instead, people tend to think identifying bias is advocating that the article should be disregarded, which is untrue.

To your suggestion, I do think adding more sources for reliability and bias judgements is a good idea. It would give more credibility if multiple respected independent organizations come to the same conclusion. More insight into their methodology in the comment itself could also be nice. The downside of adding these is that it would make the comment even longer when people have already complained about its size.

Other than that, I have seen people dislike using the American political center as a basis for alignment, but I have yet to see a good alternative. I expect a significant plurality of users are from the US, and US politics are globally relevant, so it seems to be a natural choice.

Nearly every critic I have seen so far just thinks it should be removed entirely because they find it annoying. I would say even if it isn't considered useful for the majority of users, the amount of value it provides people who do use it justifies whatever minor annoyance it is to others. Anyone who gets really tired of collapsing the comment or scrolling past it can block it in seconds.

Thank you to the mod who created this thread. Even if it's good to gather feedback, it's obviously not easy to get bombarded with negative comments. I'm impressed with the patience you have shown in this thread.

Can you elaborate on what you mean by improvements in other areas?

Improvements to automod, such as checking for opinion articles by regex (and building up that list), or automatically marking/linking duplicate posts.

Also, regex scanning of comments to autoban would be useful for moderation well outside of the news/politics realm.
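
As a rough sketch of those two regex ideas (flagging opinion-section URLs and scanning comments against a curated pattern list), something like the following could work; the patterns are stand-in examples, and the real lists would need community curation:

```python
import re

# Example URL pattern for opinion/editorial sections; extend as the list grows.
OPINION_URL = re.compile(r"/(opinion|op-ed|editorial|commentary)(/|$)", re.IGNORECASE)

# Stand-in autoban patterns (generic spam phrases used purely as placeholders).
BANNED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bbuy\s+cheap\s+followers\b",
    r"\bcrypto\s+giveaway\b",
)]

def is_opinion_url(url: str) -> bool:
    return bool(OPINION_URL.search(url))

def comment_matches_banlist(text: str) -> bool:
    return any(p.search(text) for p in BANNED_PATTERNS)
```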

Most of the changes I'd like to see would require major changes to Lemmy though. Things like rate limiting posts/comments/votes, and allowing complex conditions for using those quotas. Also more nuanced moderation such as unlisting a post/comment (or potentially rehoming them).