The_Lemmington_Post

@The_Lemmington_Post@discuss.online
6 Posts – 34 Comments
Joined 7 months ago

This is not possible because sorting is done in the database, so adding a new sort option requires a database migration with new indexes, columns and updated queries. Not something that can be done with a simple plugin.

@nutomic@lemmy.ml in https://github.com/LemmyNet/lemmy/issues/3936#issuecomment-1738847763

An alternative approach could involve an API endpoint that provides metadata for recent posts, allowing users to implement custom sorting logic client-side in JavaScript. This API endpoint is currently accessible only to moderators and administrators.

There is already such an API endpoint which is available for mods and admins.

@nutomic@lemmy.ml in https://lemmy.ml/comment/9159963
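If such an endpoint returned basic per-post metadata, the client-side sorting would be straightforward. A minimal sketch, where the field names and the "engagement" metric are assumptions for illustration, not the real Lemmy API:

```typescript
// Hypothetical shape of the metadata a recent-posts endpoint might return.
interface PostMeta {
  id: number;
  score: number;
  comments: number;
  publishedAt: string; // ISO 8601 timestamp
}

// Example custom metric: comments per hour since publication.
function engagementRate(post: PostMeta, now: Date = new Date()): number {
  const ageHours =
    Math.max(1, (now.getTime() - Date.parse(post.publishedAt)) / 3_600_000);
  return post.comments / ageHours;
}

// Sort a copy of the list, highest engagement first.
function sortByEngagement(posts: PostMeta[]): PostMeta[] {
  return [...posts].sort((a, b) => engagementRate(b) - engagementRate(a));
}
```

Any other comparator (controversy, freshness, a personal blocklist weight) could be dropped in the same way without touching the server's indexes.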

Yeah, and the FOSS alternative Codidact isn't any better. What's the point of asking for solutions for bugs when even an LLM can solve that already? I want proper solutions to actual problems so that I can find everything in there, not just troubleshooting bugs.

I think in a few years using an AI for this kind of task will be much more efficient and simpler to set up. Right now I think it would fail too much.

Human bias is a pervasive element in many online communities, and finding a platform entirely free from it can be akin to searching for the holy grail. Maybe look into self-hosting an instance and punish moderators who don't follow their own rules.

I don't know how that works. Why would I have to do anything to participate in the discussions? The curation can be done by whoever wants to do it.

I've based the idea on Discourse which has very good moderation. I don't know why everyone is talking about StackExchange, did I mention it anywhere?

Karma promotes shitposting, memes and such, I've yet to see that kind of content on Discourse.

There has to be a way to federate trust levels otherwise all of this just isn't applicable to the fediverse. One of the links I posted talks about how to federate trust levels. So the appeal is processed by a user with a higher trust level.

A system like this rewards frequent shitposting over slower qualityposting. It is also easily gamed by organized bad faith groups. Imagine if this was Reddit and T_D users just gave each other a high trust score, valuing their contributions over more “organic” posts.

You are just assuming that this would work similarly to Reddit based on karma. I don't know why you would assume the worst possible implementation just so you can complain about this. If you had read the links, you would know that shitposting wouldn't help much because what contributes most to Trust Levels in Discourse is reading posts.

On a basic level, certain sandboxing, i.e., image and link posting restrictions along with rate limits for new accounts and new instances, is probably a good idea.
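As a rough illustration of that kind of sandboxing, a permission gate keyed to account age might look like this; the thresholds are invented for the example, not taken from any real instance:

```typescript
// Illustrative sketch: restrict what brand-new accounts can do, and
// loosen the limits as the account ages. Thresholds are made up.
interface Account {
  createdAt: Date;
}

interface Permissions {
  canPostLinks: boolean;
  canPostImages: boolean;
  postsPerHour: number;
}

const DAY_MS = 24 * 60 * 60 * 1000;

function permissionsFor(account: Account, now: Date = new Date()): Permissions {
  const ageDays = (now.getTime() - account.createdAt.getTime()) / DAY_MS;
  if (ageDays < 1) {
    // Day-one accounts: text only, tight rate limit.
    return { canPostLinks: false, canPostImages: false, postsPerHour: 2 };
  }
  if (ageDays < 7) {
    // First week: links allowed, still no images.
    return { canPostLinks: true, canPostImages: false, postsPerHour: 10 };
  }
  // Established accounts: normal limits.
  return { canPostLinks: true, canPostImages: true, postsPerHour: 60 };
}
```

The same gate could take instance age into account for federation, using the first-seen date of the remote instance instead of the account's creation date.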

If there were any limits for new accounts, I'd prefer if the first level was pretty easy to achieve; otherwise, this is pretty much the same as Reddit, where you need to farm karma in order to participate in the subreddits you like.

However, I do not think “super users” are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports - and some simple degree of automation.

I don't see anything wrong with users having privileges; what I find concerning is moderators who abuse their power. There should be an appeal process in place to address human bias and penalize moderators who misuse their authority. Removing their privileges could help mitigate issues related to potential troll moderators. Having trust levels can facilitate this process; otherwise, the burden of appeals would always fall on the admin. In my opinion, the admin should not have to moderate if they are unwilling; their role should primarily involve adjusting user trust levels to shape the platform according to their vision.

An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.

Even with the ability to enlarge moderation teams, Reddit relies on automod bots too frequently and we are beginning to see that on Lemmy too. I never see that on Discourse.

Trust levels themselves are just karma plus login/read tracking, aka extra steps.

Trust Levels are acquired by reading posts and spending time on the platform, instead of receiving votes for posting. Therefore, it wouldn't lead to low-quality content unless you choose to implement it that way.
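A simplified sketch of that idea, loosely modeled on Discourse's reading-based criteria; the exact thresholds here are illustrative, not Discourse's defaults:

```typescript
// Illustrative trust-level calculator driven by reading activity,
// not by votes received. Thresholds are examples only.
interface ReadingStats {
  topicsEntered: number;
  postsRead: number;
  daysVisited: number;
  readingTimeMinutes: number;
}

function trustLevel(s: ReadingStats): number {
  // Level 1: has browsed at least a little.
  if (s.topicsEntered < 5 || s.postsRead < 30 || s.readingTimeMinutes < 10) {
    return 0;
  }
  // Level 2: keeps coming back over a period of time.
  if (s.daysVisited < 15 || s.postsRead < 100 || s.readingTimeMinutes < 60) {
    return 1;
  }
  // Levels 3+ would add flagging and quality criteria; omitted here.
  return 2;
}
```

Note that nothing in the inputs rewards posting volume, which is the point being made: shitposting moves none of these counters.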

The Karma system is used more as a bragging right than to give any sort of moderation privilege to users.

But in essence it is similar: you get useless points with one and moderation privileges with the other.

If you are actually advocating that the Fediverse use Discourse’s service you have to be out of your mind.

You are making things up just so you can call me crazy. I'm not advocating anything of the sort.

Where? I haven't heard any of that.

The benefit of this is that only individuals who are interested will progress up the trust level ladder. If you are indifferent, you will have the same experience as currently. I believe this benefits everyone involved.

Some sort of appeal process to deal with human bias and punish moderators abusing power and remove their privileges would help address concerns about potential troll moderators.

Yeah an appeal process to mitigate human bias would be nice.

I don't really care all that much about any particular issue. I enjoy copying the ideas suggested by others in the fediverse and transforming them into new issues, as many individuals do not take this initiative.

People keep mentioning StackOverflow even though I specifically mention Discourse. The two do similar things but one does it right and the other doesn't. I don't really understand how it would be inconvenient to create accounts. If you are active and behave you get moderation privileges otherwise you get the same experience as you do now.

Regrettably, complaining tends to be a common pastime for many individuals. I acknowledge your frustrations with certain users who may appear entitled or unappreciative of the considerable effort you've dedicated to developing Lemmy. Shifting towards a mindset that perceives complaints as opportunities for enhancement can be transformative. Establishing a set of transparent rules or guidelines on how you prioritize issues and feature requests could help turn critiques into opportunities for improvement. This transparency can help manage expectations and foster a more collaborative relationship with the users in your community. While not all complaints may be actionable, actively listening to feedback and explaining your prioritization criteria could go a long way in building trust and goodwill. Open communication and a willingness to consider diverse perspectives can lead to a stronger, more user-centric product in the long run.

The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback:

  1. Get the platform in front of as many users as possible.
  2. Listen openly to all user complaints and feedback. Expect a lot of it.
  3. Identify the top 3 most frequently reported issues/pain points.
  4. Prioritize fixing those top 3 issues.
  5. Repeat the process, continuously improving based on prominent user complaints.

Following these straightforward rules allows you to address the most pressing concerns voiced by your broad user community, rather than prioritizing the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
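Step 3 of that loop is essentially a frequency count over the collected complaints. A minimal sketch, assuming complaints have already been normalized into topic labels:

```typescript
// Tally complaints by topic and return the n most frequent ones.
function topComplaints(complaints: string[], n = 3): string[] {
  const counts = new Map<string, number>();
  for (const c of complaints) {
    counts.set(c, (counts.get(c) ?? 0) + 1);
  }
  // Map preserves insertion order and sort() is stable, so ties keep
  // their first-seen order.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([topic]) => topic);
}
```

In practice the hard part is the normalization (deciding that "search is broken" and "can't find old posts" are the same complaint), which is exactly where the top-level-comment-per-complaint format helps.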

Here's a suggestion that could help you implement this approach: Consider periodically making a post like “What are your complaints about Lemmy? Developers may want your feedback.” This post encourages users to leave one top-level comment per complaint, allowing others to reply with ideas or existing GitHub issues that could address those complaints. This will help you identify common complaints and potential solutions from your community.

Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you've chosen those particular issues. This transparency will help users understand your thought process and feel heard.

As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.

Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.

By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.

I thought the ‘hot’ ranking was a mixture of votes and comment engagement?

Hot: Like active, but uses time when the post was published

https://join-lemmy.org/docs/users/03-votes-and-ranking.html
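For reference, the linked docs describe the hot rank along these lines (Gravity = 1.8, Time measured in hours since publication). Treat this as a paraphrase of the documented formula, not the exact production code:

```typescript
// Hot rank as described in the Lemmy ranking docs (paraphrased):
// rank = log(max(1, 3 + score)) / (timeHours + 2) ^ gravity
// score = upvotes - downvotes; gravity = 1.8.
function hotRank(score: number, hoursSincePublished: number): number {
  const gravity = 1.8;
  return (
    Math.log(Math.max(1, 3 + score)) /
    Math.pow(hoursSincePublished + 2, gravity)
  );
}
```

The log on the score means vote counts have diminishing returns, while the power on time makes every post decay toward zero, so comment activity only matters for "Active", which uses the latest comment time instead of the publication time.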

I do feel like there needs to be some further tweaking, controversial should have a time falloff so it shows recent controversy instead of something 6 months old for example.

Yeah, I believe the "Most Comments" sort should have a time limit too. There is an open issue about it: “Controversial post sort should have time limit”.

Having AGI as moderators would be a futuristic dream come true. However, until that becomes a reality, it's crucial to consider the well-being of human moderators who are exposed to disturbing content like CSAM and graphic images. I believe it would be important to provide moderators with the ability to decrease their moderation levels to avoid such content.

Look at how well tagged pictures are in this website: https://safebooru.org/index.php?page=post&s=list

The poster isn't necessarily the one doing the tagging. Those tags are added by everybody.

Nobody says that about Discourse, perhaps they have implemented it better, and Discourse is the one I based the idea on.

I very much doubt this kind of system would be implemented for Lemmy.

I think an appeal process to punish moderators abusing power would help with that.

I don't have any hope left for Lemmy in this regard, but hopefully, some other Fediverse projects, other than Misskey, will improve the moderation system. Reddit-style moderation is one of the biggest jokes on the Internet.

I did read the links, and I still strongly feel that no automated mechanical system of weights and measures can outperform humans when it comes to understanding context.

But this is not a way to replace humans; it's just a method to grant users moderation privileges based on their tenure on a platform. Currently, most federated platforms only offer moderator and admin levels of moderation, making setting up an instance tedious due to the time spent managing the report inbox. Automating the assignment of moderation levels would streamline this process, allowing admins to simply adjust the trust level of select users to customize their instance as desired.

What are you doing getting politics in here, get the fuck out, as if there wasn't enough politics already in my feed.

Yeah, this seems to favor people who stick to one account, but I also enjoy seeing some of the regular posters here. Even though I like creating new accounts, I wouldn't mind if they were given moderation privileges to share the workload. I'm unsure about the implementation details, so I can't comment on the protocol. What I do know is that Reddit moderation sucks, while Discourse moderation rocks.

You are probably thinking about StackExchange, I don't see anybody saying anything about popularity when talking about Discourse. It's a matter of doing it like Discourse and not like StackExchange.

I am, to the GitHub project.

I'm surprised that only one platform in the fediverse has copied Discourse; they copy Reddit instead, with the biggest joke of a moderation system on the Internet.

I have the same right they have to ask for contributions and support for something they are going to do their way.

Why would anyone contribute? Would you pay someone to work for you if they don't want to listen to anything you have to say? When they close issues without allowing the community to provide input, that's exactly what they are doing. If they were too busy to engage with the issue tracker, I wouldn't mind. However, if they simply appear to close issues with numerous upvotes and no downvotes, it frustrates me.
