Zetaphor

@Zetaphor@zemmy.cc
7 Posts – 169 Comments
Joined 7 months ago

Developer, 11-year reddit refugee


This situation sucks, and it was something I would have been willing to see through. But after reading the thread from Madison this morning I've decided to cancel my Floatplane subscription. While her accusations are just that for now, accusations, they're pretty damning and worth taking seriously in case they turn out to be more than allegations. I await LMG's response to her thread, as I feel that will be the deciding factor in whether or not I continue to consume and support anything LMG does going forward.

Her thread: https://twitter.com/suuuoppp/status/1691693740254228741

16 more...

Then you missed where they slipped in an opportunity to show a new screwdriver variant coming to LTTStore.com 🤦

2 more...

You're saying this like Firefox is adding the shitty standard because they want to, and not because Google used their monopoly to force its adoption, leaving Firefox to follow suit if they don't want their users to have a broken experience.

If Google introduces a shitty standard to YouTube and Firefox doesn't adopt it, do you honestly think users are going to care or understand and blame Google? No, they'll get pissed because they think Firefox broke YouTube and they'll move to Chrome.

This exact situation played out with the shadow DOM: Google implemented it in YouTube while it was still a draft standard, so all non-Chrome browsers ran worse because they had to use a polyfill.

That is why we're telling people to stop using Chromium. If Google didn't have this monopoly, none of this would be possible. Mozilla has some issues as an organization, but do you honestly think the better choice is letting an advertising company decide how the web works?

3 more...

I have a book on learning PyTorch; this XKCD is in the first chapter, and implementing it is the first coding exercise. It's amazing how things progress.
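
For anyone curious what that exercise looks like in practice, here's a rough sketch using a pretrained torchvision classifier; the bird-keyword heuristic and the file name are my own illustration, not the book's actual code:

```python
# A rough sketch of the XKCD "is this a photo of a bird?" check with a
# pretrained classifier. The keyword heuristic and image path are illustrative
# assumptions, not the book's exercise verbatim.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

BIRD_WORDS = ("bird", "finch", "jay", "hen", "cock", "ostrich", "flamingo", "pelican")

def is_bird(path: str) -> bool:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    label = weights.meta["categories"][int(probs.argmax())]
    # Crude check: many ImageNet bird classes contain one of these words
    return any(word in label.lower() for word in BIRD_WORDS)

print(is_bird("photo.jpg"))
```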

2 more...

Not sure what to take from this other than it being a really bad take. Insect protein is orders of magnitude more sustainable and eco-friendly than beef. We could reclaim all the land we destroyed to have cows standing around in their own shit and, with a fraction of the acreage, produce the same amount of protein and calories without massively contributing to climate change.

11 more...

And you haven't already quit because you're on an H1B/GC visa, and so your residence in the US is tied to your employment, effectively making you a corporate-owned slave.

Putting all of the large communities on a single instance is just reddit with more steps. It's good that one of the larger Lemmy communities is not also on the largest Lemmy instance. Lemmy.world suffers a lot of outages (in part because it's so centralized), meanwhile this community remains available.

12 more...

This is a false dilemma. With reddit you had no choice but to accept the top-down decision. With federation you are free to take your activity and traffic elsewhere. If you don't approve of the decision being made here, you're not beholden to this community or the choices of its administration.

16 more...

Another addition that wasn't covered in the release notes: these will now automatically be linked to your local instance without you having to do anything.

Edit: Forgot about kbin support

The markup is rendered as links when a valid format is detected, without modifying the underlying text.
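
If you're curious how that kind of detection can work, here's a rough sketch; the regex, URL shape, and function are my own guesses at the idea, not Lemmy-UI's actual implementation:

```python
# A rough sketch (my own regex, not Lemmy-UI's actual code) of detecting
# !community@instance references and rewriting them as links that point at
# the viewer's local instance, without touching the underlying text.
import re

COMMUNITY_REF = re.compile(r"!([a-z0-9_]+)@([a-z0-9.-]+\.[a-z]{2,})", re.IGNORECASE)

def linkify(text: str, local_instance: str) -> str:
    def repl(match: re.Match) -> str:
        community, instance = match.group(1), match.group(2)
        # Lemmy exposes remote communities at /c/<community>@<instance>
        return f"[{match.group(0)}](https://{local_instance}/c/{community}@{instance})"
    return COMMUNITY_REF.sub(repl, text)

print(linkify("Check out !selfhosted@lemmy.world", "zemmy.cc"))
```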

4 more...

This question all comes down to your opinion of what makes a person a person: whether we have something greater than the collection of our atoms, or whether we are simply the emergent outcome of the complex arrangement of those atoms. If you subscribe to the former, then you also need to believe that this machine is somehow capable of transporting/transplanting that "soul", for lack of a better expression. Whereas if you subscribe to the latter, then this is most certainly a suicide cloning machine.

I personally subscribe to the idea that consciousness is an emergent property of complexity. Given a sufficiently large series of inputs you can observe new and unexpected outputs that appear to be on higher orders of complexity than their inputs. This response is an example of that: from electrons flowing through transistors we end up with operating systems, hardware IO, web browsers, networking protocols, ASCII standards, font rendering, etc. All of that complexity emerges from a massive number of on/off switches arranged in patterns over time.

Following this chain of reasoning, I believe that an exact duplicate of me, down to the state of each atom, is no different from being me. However, as a conscious being with human ethics and morals I put value in the singularity of my existence, so a plurality of Zetaphor is something I find undesirable, as it fundamentally challenges my perception of what it means to be myself.

So assuming the entity leaving the transporter is me, there are two ways a machine like this could operate:

  • It reads my state in its entirety and then destroys (or encodes for transport) that state
  • Or it's creating the new instance of me bit by bit as it reads my current state

That means one of two things: either there is a brief moment in time where two identical copies of me exist in the universe, or there is a period of time where zero complete copies of me exist in the universe. So either I stopped existing momentarily and was then recreated from scratch (death and clone birth), or I existed in two places at once and then died in one of them (cloning and suicide).

9 more...

This is fixed in the upcoming 0.18 release

This is a known bug in 0.18.3, a fix will be in the next release:

https://github.com/LemmyNet/lemmy-ui/issues/1999

5 more...

I was an avid user of Relay, and unfortunately I haven't found anything that mirrors its layout and functionality, so I decided to build it myself. Here's the GitHub repo and a short recording of the UI, which is already out of date. I'll be releasing it once the permissive CORS PR gets merged and lands on the major servers.

8 more...

but if you need me to leave, I can. I get that a lot.

I don't think OP is suggesting this. It's simply a reminder to those who have the privilege of having extra income that contributing to the core devs improves the experience for everyone, regardless of their individual ability to contribute.

I'm personally happy to donate if it means everyone gets to continue enjoying the growth of the platform, as the real value of the threadiverse is user activity.

And yet here I am using ReVanced, which is even better than Vanced was

https://revanced.app/

3 more...

They’re the reason for Trump being elected

Trump was elected because the Electoral College voted for him. Hillary Clinton won the popular vote, AKA the one you participated in. The American populace doesn't decide the president. Your vote is not you deciding who wins; it's you expressing your opinion in the hope that the electors your state party officials hand-picked will actually listen to the interests of their constituents.

What are the issues I have with Mozilla? They're floundering with little direction and seemingly incompetent management.

They laid off a bunch of their key engineers while they continue to increase the CEO's compensation. They keep making half-baked decisions on features and marketing that don't seem conducive to their core offering, like the Pocket integration. They completely killed PWA integration, which now only works with an extension and third-party software. They retired BrowserID. They orphaned Thunderbird. There's probably more I'm forgetting.

1 more...

A 2-line SQL TRIGGER removal takes minutes to fix.

Then go fix it and open a PR

2 more...

Do you mean like this?

https://radio.zetaphor.com

;)

Edit: To more directly answer your question, this is using the "Public Pages" feature that is already built into AzuraCast, along with a bunch of custom CSS to make it look nicer

4 more...

Not sure what you're on about; the PR with the fix is merged and ready for the next release.

0.18 will also remove websockets, which will improve performance and error handling, but it will likely have its own complications.

They're building the plane as we're flying and simultaneously boarding more passengers.

Nutomic has said they're open to restoring captchas, but it will require a fair amount of work to bring the 0.17 implementation into 0.18, which they currently don't have the bandwidth to do.

They've also said they're open to PRs, so if someone really wants this feature they can open a PR for inclusion in the 0.18 release

NO it is not the best we can do, we need to be applying some pressure to the developers here and that requires EVERYONE to do their part.

I sure hope you're supporting them financially considering the demands you're making that require their time and labor.

There's a reason why a warning appears when you go to "private mentions" on Mastodon.

Lemmy carries the same warning.

Assuming you're referring to lab-grown meat, I think that's also a great alternative. We should be exploring any and all options that can get us to stop relying on cows for protein.

Quoting this comment from the HN thread:

On information and belief, the reason ChatGPT can accurately summarize a certain copyrighted book is because that book was copied by OpenAI and ingested by the underlying OpenAI Language Model (either GPT-3.5 or GPT-4) as part of its training data.

While it strikes me as perfectly plausible that the Books2 dataset contains Silverman's book, this quote from the complaint seems obviously false.

First, even if the model never saw a single word of the book's text during training, it could still learn to summarize it from reading other summaries which are publicly available, such as the book's Wikipedia page.

Second, it's not even clear to me that a model which saw only the text of a book during training, but no descriptions or summaries of it, would even be particularly good at producing a summary.

We can test this by asking for a summary of a book which is available through Project Gutenberg (which the complaint asserts is Books1 and therefore part of ChatGPT's training data) but for which there is little discussion online. If the ability to summarize comes from having the book itself during training, the model should be just as able to summarize the rare book as it is Silverman's book.

I chose "The Ruby of Kishmoor" at random. It was added to PG in 2003. ChatGPT with GPT-3.5 hallucinates a summary that doesn't even identify the correct main characters. The GPT-4 model refuses to even try, saying it doesn't know anything about the story and it isn't part of its training data.

If ChatGPT's ability to summarize Silverman's book comes from the book itself being part of the training data, why can it not do the same for other books?

As the commenter points out, I could recreate this result using a smaller offline model and an excerpt from the Wikipedia page for the book.
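
Something along these lines is all it would take; the model choice and placeholder excerpt here are my own assumptions, just to illustrate summarizing from a public description rather than from the book's text:

```python
# A rough sketch: summarize from a publicly available description (e.g. the
# book's Wikipedia article), never from the book's own text. The model choice
# and placeholder excerpt are illustrative assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Paste the book's Wikipedia article text here.
wikipedia_excerpt = "<text of the book's Wikipedia article>"

result = summarizer(wikipedia_excerpt, max_length=80, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```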

6 more...

A lawsuit requires them to be breaking a law. Doing a shit or even malicious job at something you volunteered for is not against the law. Mods are not employees of reddit. If the argument is that they're somehow harming the product, that same argument could be extended to the protestors and shitposters. It wouldn't hold any water in an actual court.

1 more...

Also all of South America. Everything from ordering pizza to scheduling a doctor's appointment. Not having WhatsApp means you are not able to participate in society

2 more...

Most people from the US think of it as the default :/

Currently living in Argentina: if you want to make an appointment with the doctor, plumber, or barber, you use WhatsApp. Want to order a pizza without using one of the gig economy ordering apps? You use WhatsApp. Communicating with anyone and everything in this region involves having a WhatsApp account.

If all I experience is being one place one moment and another place the next, then it’s me

If I make an exact molecular copy of you and set that copy free into the world thinking it had just successfully transported, but then take the original you that entered the transporter and lock them up in a basement somewhere, how is that any different? From the perspective of the conscious being that came out the other end, their continuity is uninterrupted. They will think they are the only version of themselves to have ever existed and that they simply moved from one place to another, rather than being a duplicate of an original entity that may now be dead or, in this case, locked in a basement.

2 more...

Sure, but "the past" as you refer to it was not even a week ago, and the situation is still actively unfolding.

Communities are growing, OC is being posted, and the lemmyverse is starting to get its legs underneath it. If you look past the reddit related posts you will see this happening all around you.

However, most of us haven't even been here a week; communities aren't built overnight. Reddit has 15 years of people creating content and building out subreddits.

Just because it's not using your personal containerization preference doesn't make it "hacked together". Docker is a perfectly acceptable solution for what Lemmy is.

1 more...

This is not obvious to anyone who doesn't have some understanding of how networking and federation work, which is most people. Especially if we're talking about users who have only ever experienced centralized platforms.

It should be called "Known Network" or something more transparent that doesn't require an explanation of indexing

3 more...

I think it's a bit silly to have megathreads just because some users can't scroll past posts that don't interest them.

The problem is there are so goddamn many, to the extent that I'm working on a userscript that lets me entirely hide posts that contain keywords. Checking my frontpage using Subscribed/Active, 5 of the first 20 posts are about this "news". And that's a full day after it happened; yesterday was far worse.

Edit: The userscript is ready!

2 more...

Same here after 11 years; they forget that the users create the value, they're only the middlemen.

10 more...

We're the minority; if this gets implemented, it's endgame. Try convincing the billions of people who already don't care enough to use Firefox to protect their privacy to now stop using Chrome because it's killing the open web. Now tell them to stop using services they care about because DRM is bad.

At this point our only real hope is that the EU decides to forcibly stop this, but I'm not holding my breath.

9 more...

No for-profit is nice, but they are the lesser shit of the two choices we have. Remember that the Mozilla Corporation is a for-profit while the Mozilla Foundation is a non-profit. There is a clear conflict of interest between those two entities.

I do and will continue to use their browser because it's the only choice I have if I want to stand by my principle of supporting a free and open web.

Additionally it's going to cause you headaches if your server is low spec. The federation queue is not well optimized for GIGANTIC subscription counts like this. There is an active draft PR working on it, but using that script is still a bad idea.

Setting aside the obvious answer of "because capitalism", there are a lot of obstacles to democratizing this technology. Training of these models is done on clusters of A100 GPUs, which are priced at around $10,000 USD each. Then there's also the fact that a lot of the progress being made is being done by highly specialized academics, often with the resources of large corporations like Microsoft behind them.

Additionally, the curation of datasets is another massive obstacle. We've mostly reached the point of diminishing returns from just throwing all the data at the training of models; it's quickly becoming apparent that the quality of data is far more important than the quantity (see TinyStories as an example). This means a lot of work and research needs to go into qualitative analysis when preparing a dataset. You need a large corpus of inputs, each of which is above a quality threshold, but which as a whole also represent a wide enough variety of circumstances for you to reach emergence in the domain(s) you're trying to train for.
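
As a toy illustration of the kind of threshold-based filtering I mean (the heuristic here is entirely made up; real pipelines use classifiers, perplexity filters, deduplication, and so on):

```python
# A toy sketch of threshold-based dataset curation. The quality heuristic
# (length plus lexical diversity) is purely illustrative, not a real pipeline.
def quality_score(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    lexical_diversity = len(set(words)) / len(words)
    length_ok = 1.0 if 50 <= len(words) <= 2000 else 0.5
    return lexical_diversity * length_ok

corpus = [
    "a long, varied candidate document ...",
    "spam spam spam spam",
    "another candidate with enough length and variety ...",
]
curated = [doc for doc in corpus if quality_score(doc) >= 0.5]
print(f"kept {len(curated)} of {len(corpus)} documents")
```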

There is a large and growing body of open source model development, but even that only exists because of Meta "leaking" the original Llama models, and now more recently releasing Llama 2 with a commercial license. Practically overnight an entire ecosystem was born creating higher-quality fine-tunes and specialized datasets, but all of that was only possible because Meta invested the resources and made the models available to the public.

Actually in hindsight it looks like the answer is still "because capitalism" despite everything I've just said.

4 more...

I had years' worth of posts and comments that I deleted via the interface a while ago. Then, as part of the reddit exodus, I decided to run a removal tool that used the API, and it turns out 11 years' worth of "deleted posts" were all still sitting out there; they were just hidden from me.

I did find it strange when I received a reply to a years-old comment that my profile page said was deleted, but I just assumed it was a caching issue. Turns out all of that content was still out there with my name attached; I was the only one who couldn't see it.
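
For anyone wondering what these removal tools actually do, it's roughly this (a sketch assuming PRAW and a registered "script" app; overwriting before deleting is a common approach, and the credentials are placeholders):

```python
# A rough sketch of an API-based removal tool (assuming PRAW and a registered
# "script" app). A common approach is to overwrite each item before deleting
# it. Credentials are placeholders.
import praw

reddit = praw.Reddit(
    client_id="...",
    client_secret="...",
    username="...",
    password="...",
    user_agent="removal-sketch/0.1",
)

me = reddit.user.me()

# Walk everything the API still returns, even items hidden from the profile UI.
for comment in me.comments.new(limit=None):
    comment.edit(".")    # overwrite the body first
    comment.delete()

for submission in me.submissions.new(limit=None):
    submission.delete()
```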

1 more...

I've gone ages without using Spotify and found the list still regularly updates regardless of whether or not I'm actually there listening. This is also why I threw in the Last.FM recommendations though, so I can have something more dynamic based on my current listening.

Just FYI, my subscribe requests to Lemmy.ml are stuck at pending as well.

Everyone's subscribe requests to Lemmy.ml are stuck, including mine from my one-person server running on dedicated hardware. Lemmy.ml has been experiencing major growing pains as a result of the reddit tidal wave, and the Lemmy devs (who are also the lemmy.ml admins) are focusing all of their energy on performance upgrades to the core software.