LittleLordLimerick

@LittleLordLimerick@lemm.ee
0 Post – 44 Comments
Joined 1 year ago

I pay for Hulu, Max, Disney+, Prime, and get Netflix for free. Half the time, I still can't find the show or movie I want to watch. But 99.99% of the time I can find a torrent for it within 20 seconds of searching. Not only is pirating cheaper, but it's more convenient and more user-friendly these days.

So I hate Elon Musk and I think Tesla is way overhyped, but I do want to point out that singular anecdotes like this don't mean anything.

Human drivers run red lights and crash cars all the time. It's not a question of whether a self-driving car runs a light or gets in a crash, it's whether they do it more often than a human driver. What are the statistics for red lights run per mile driven?

I’m still calling it Twitter because it will piss off Elon Musk if everyone keeps calling it Twitter and I think that’s funny.

The anecdote proves nothing because the model could have known of the McGonagall character without ever being trained on the books, since that character appears in a lot of fan fiction. So their point is invalid.

Did anyone read the article? It's not saying what you guys think it's saying. The DC Democrats are saying that in two predominantly black areas, having voters pick two choices on ballots has already led to confusion and that ranked choice will lead to even worse confusion.

They’re not speculating here, they’re describing what’s already known.

I wouldn't say that pirating is user-friendly, only that it can be more user-friendly than logging into and searching through multiple streaming services. That of course presupposes that you have some knowledge of how to torrent and where to find torrent files.

Just want to say that this is a fantastic answer. Pay attention to the parts about printing/downloading stuff. There are huge parts of America where you won't get a reliable cell signal sometimes for hours.

Plex is just a media server. You have to acquire content through some other means, then you can host it on a Plex server and stream it to any device that has the Plex app installed.

I have a Plex server that my entire family has logins for, and I add whatever movies or shows people are interested in. Basically you can make your own mini Netflix.
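
If you want to spin one up yourself, here's a rough sketch of one common way to do it with Docker. It's not the only way (there's also a plain desktop installer), and the host paths, timezone, and claim token below are placeholders you'd swap for your own:

```
# Minimal sketch: Plex Media Server via the official Docker image.
# /srv/plex/* and /srv/media are example host paths; get a claim token from https://plex.tv/claim
docker run -d \
  --name plex \
  -p 32400:32400 \
  -e TZ="America/New_York" \
  -e PLEX_CLAIM="claim-XXXXXXXX" \
  -v /srv/plex/config:/config \
  -v /srv/plex/transcode:/transcode \
  -v /srv/media:/data \
  plexinc/pms-docker
```

After that you open the web UI on port 32400, point a library at the /data folder, and invite your family's Plex accounts so they can stream from their own apps.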

I have a Netflix subscription for "free" through my phone plan; that's the only reason I didn't cancel. I'm still sharing with everyone I know because I have a Plex server and just add whatever Netflix shows people want to see onto it.

Honestly, I think yes, it’s inevitable. The reason why is that keeping up with constantly changing technologies requires constantly learning how to do everything over again, and again, and again. It will get tiring eventually, and people will feel that learning the ins and outs of yet another social media app just isn’t worth it when they can already get by.

I say this as a software developer who sees a new tool or framework or language come out every year that's bigger and better than the last, and I see the writing on the wall for myself. I'll be outdated and just some old geezer who works on legacy tech stacks in 10-20 years, just like the guys working in COBOL or whatever now.

I mean, they need to ramp up improvements way faster. It's pretty garbage right now and there's really no excuse. Compare it to Lemmy and it's very obvious. Lemmy still has problems, but it's much easier to use and has way fewer bugs and glitches. If you're used to Reddit, then switching to Lemmy is pretty easy to do, and I can see average users making that jump. But Mastodon isn't even close to the user experience that Twitter/X offered, and I cannot see the average Twitter user sticking around and waiting for all the issues to be fixed.

When we're talking about public safety, it should be entirely about statistics. Basing public safety policy on feelings and emotions is how you get 3 hour long TSA checkpoints at airports to prevent exactly zero attempted hijackings in the last 22 years.

Not quite, I think those just stream over Wi-Fi like Chromecast?

Plex has a full media server application that you can run on a computer, and you can stream your media from it over the internet to any device that has the Plex client app.

I have a Plex server set up and running on an old laptop in my bedroom with a huge library of movies and TV shows. It acts like basically any streaming service such as Netflix, but it’s my own stuff. I can watch from anywhere on any device that has a Plex client app. I can also send invitations to friends and family so they can access it, too.

It won't last unless Mastodon gets some serious improvements. It's buggy, glitchy, feature-poor, and confusing to use. There's no way in its current state it's going to compete with the big guys for the average person's attention.

Well in my defense I have no idea what I’m talking about.

You guys are exhausting.

No one is saying black people are too dumb to understand ranked choice. They're saying that people from low-income, predominantly black areas under-voted when required to choose two candidates on past ballots. That means those people were effectively disenfranchised. If the evidence shows that ranked choice could potentially disenfranchise people from low-income minority areas, that is something to be concerned about.

I had a Pi 4, but I found it struggled with transcoding and multiple streams; I ended up with too much buffering.

That was true 20 years ago. Things evolve. No one wants to download and install ten million individual apps for every single thing they do on the internet.

I'm not saying that they're right. I'm saying that calling them racists who think black people are dumb is a complete mischaracterization.

It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

That is a gross oversimplification. LLMs operate on much more than just statistical probabilities. It's true that they predict the next word based on probabilities learned from training datasets, but they also have layers of transformers that process the context provided by a prompt to tease out meaningful relationships between words and phrases.

For example: Imagine you give an LLM the prompt, "Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream." Now, if you ask the model, "who got chocolate ice cream from the store?" it doesn't just blindly rely on statistical likelihood. There's no way you could argue that "Dumbledore" is a statistically likely word to follow the text "who got chocolate ice cream from the store?" Instead, it uses its understanding of the specific context to determine that "Dumbledore" is the one who got chocolate ice cream from the store.

So, it's not just statistical probabilities; the models have an ability to comprehend context and generate meaningful responses based on that context.
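
If you want to see that context-following behavior for yourself, here's a rough illustrative sketch (not from the article or thread; it assumes you have the Hugging Face transformers package installed and uses a small extractive question-answering checkpoint rather than a full chat LLM, but the point about attending to the prompt's context is the same):

```python
# Illustrative sketch: the answer is read out of the supplied context,
# not picked because "Dumbledore" is a statistically common next word.
from transformers import pipeline

# distilbert-base-cased-distilled-squad is just a commonly available small QA checkpoint.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Dumbledore went to the store to get ice cream and passed his friend Sam "
    "along the way. At the store, he got chocolate ice cream."
)

result = qa(question="Who got chocolate ice cream from the store?", context=context)
print(result["answer"])  # should print something like "Dumbledore"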

That's not the argument for self-driving cars at all. The argument for self-driving cars is that people hate driving because it's a huge stressful time sink. An additional benefit of self-driving cars is that computers have better reaction times than humans and don't stare at a phone screen while flying down the freeway at 70 mph.

If we find that self-driving cars get into, say, 50% fewer serious accidents per 100 miles than human drivers, that would mean tens of thousands fewer deaths and hundreds of thousands fewer injuries. Your objection to that is that it's not good enough because you demand zero serious accidents? That's preposterous.

Waterproofing isn't important so I can take it swimming; it's important in case I drop it in a puddle.

The battery on an iPhone is good for about 1000 charge cycles (will maintain at least 85% capacity), which is about 3 years of normal use. After that, it costs like $80 to have Apple replace the battery. That's absolutely worth it to me for the improved water resistance.

You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

It's not an assumption. There are academic researchers at universities working on developing these kinds of models as we speak.

Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

I'm not wasting time responding to straw men.

Your concern seems to be for the pilot of the car that causes the accident. What about the victims? They don't care if the car was being driven by a person or a computer, only that they were struck by it.

A car is a giant metal death machine, and by choosing to drive one, you are responsible not only for yourself, but also the people around you. If self-driving cars can substantially reduce the number of victims, then as a potential victim, I don't care if you feel safer as the driver. I want to feel less threatened by the cars around me.

I'll be honest here, I hate cars and the car-centered culture of the USA. I care way more about the victims of bad/careless/drunk/distracted drivers than I do about the bad/careless/drunk/distracted drivers themselves.

If me being in a self-driving car means other people around me are more safe, then it's not even a question.

So then you've just circled back around to what I originally said: is it actually true that you're at more risk near a Tesla than you are near a human driver? Do you have any evidence for this assertion? Random anecdotes about a Tesla running a light don't mean anything because humans also run red lights all the time. Human drivers are a constant unknown. I have never and will never trust a human driver.

I believe the manufacturer should be liable for damage caused by their product due to manufacturing defects and faulty software. This incentivizes manufacturers to make the safest product possible to reduce their liability. If it turns out that it's not possible for manufacturers to make these cars safe enough to be profitable, then so be it.

I don't think we should prosecute the programmer, but I do think the manufacturing company should be liable.

I am not saying we should exempt autonomous vehicle manufacturers from regulation. I'm actually saying the opposite: that we need to base any decision on a rigorous analysis of safety data for these vehicles, which means the manufacturers should be required to provide said data to regulatory agencies.

This same concept is why you can't make a 100% safe self-driving car. Driving safety is a function of everyone on the road. You could drive as safely as possible, but you're still at the mercy of everyone else's decisions. Introducing a system that people aren't familiar with will create a disruption, and disruptions cause accidents.

Again, we don't need a 100% safe self-driving car, we just need a self-driving car that's at least as safe as a human driver.

I disagree with the premise that humans are entirely predictable on the road, and I also disagree that self-driving cars are less predictable. Computers are pretty much the very definition of predictable: they follow the rules and don't ever make last minute decisions (unless their programming is faulty), and they can be trained to always err on the side of caution.

Their assumptions about what the car can or will do without the need for human intervention make them an insane risk to everyone around them.

Do you have statistics to back this up? Are Teslas actually more likely to get into accidents and cause damage/injury compared to a human driver?

I mean, maybe they are. My point is not that Teslas are safer, only that you can't determine that based on a few videos. People like to post these videos of Teslas running a light, or getting into an accident, but it doesn't prove anything. The criteria for self-driving cars to be allowed on the road shouldn't be that they are 100% safe, only that they are as safe or safer than human drivers. Because human drivers are really, really bad, and get into accidents all the time.

If that's the problem, then I would just use something like goimports to auto-fix the imports every time I hit save. I never even see those errors so they don't bother me.
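
For reference, a minimal sketch of that setup (assuming the standard Go tooling is installed; the on-save part is whatever your editor's format-on-save hook happens to be):

```
# Install goimports, then wire your editor to run it on save,
# or run it by hand across the current module.
go install golang.org/x/tools/cmd/goimports@latest
goimports -w .
```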

Voters were able to select two candidates and only voted for one

Clarification: low-income voters from predominantly black areas did this, which is effectively disenfranchisement. That's the concern: that low-income minorities may be disenfranchised by more complex/confusing ballots. The concern is real because it already happened.

You still use TP if you have a bidet though

There's nothing that says AI has to exist in a form created from harvesting massive user data in a way that can't be reversed or retracted. It's not technically impossible to do that at all; we just haven't done it because it's inconvenient and more work.

What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can't train it at all. There's simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?

How is "don't rely on content you have no right to use" literally impossible?

At the time they used the data, they had a right to use it. The participants later revoked their consent for their data to be used, after the model was already trained at an enormous cost.

ok i guess you don’t get to use private data in your models too bad so sad

You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn't like that their data was used to train the model?

If 100% safety is your criteria, then humans shouldn't be allowed to drive. Humans suck at it. We're really, really bad at driving. We get in accidents all the time. Tens of thousands of people die every year, and hundreds of thousands are seriously injured. You are holding self-driving cars to standards that human drivers could never hope to meet.

If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo

A) this article isn't about a big tech company, it's about an academic researcher. B) he had consent to use the data when he trained the model. The participants later revoked their consent to have their data used.