ClamDrinker

@ClamDrinker@lemmy.world
0 Posts – 84 Comments
Joined 1 year ago

On the off chance you aren't completely trolling: Anne Frank was a girl going through puberty. She had a crush on her friend and, like any normal young person, had to deal with the scary, unknown, but very normal human feelings and desires of intimacy and love. It's her own fucking diary, she didn't censor herself for prudes in 2024. She had war and death hanging over her head at any moment.

And if you actually go look at the book, there is nothing graphic about it. To these prudes, having these normal feelings and describing them in a diary is what counts as graphic. Here's a Dutch talk show host absolutely clowning on these people just by showing the passage the controversy is actually about (with English subtitles).

There is no such sexual material in the book. An innocent teenage girl asked a raunchy question of a friend she had a crush on, because that's the kind of behavior teens display while they grow up and develop themselves. And she got shut down. Nothing graphic was ever shown. It shows only what was written by that same normal girl, disconnected and hidden from the world as they hid from murderous, tyrannical Nazis. Raw and unfiltered thoughts and feelings of a normal developing teen, as the girl wrote it for herself, not us. As the Anne Frank Foundation said in the video: "A book written by a 12 year old can be read by 12 year olds."

"You know you don't need to bring a dead horse every time you want catering right, Jim?"

If you're here because of the AI headline, this is important to read.

We’re looking at how we can use local, on-device AI models -- i.e., more private -- to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities.

They are implementing AI the way it should be done. Don't let all the shitty companies blind you to the fact that what we call AI has positive sides.

Finally. A human readable format. And pretty too.

That's easier said than done. DDoS mitigation requires a large number of servers that are only really useful for weathering an active DDoS attack. It's why everyone uses Cloudflare: because of the number of customers they serve, there's pretty much always an active attack to fend off. Decentralization wouldn't work great for it because you would have to trust every decentralized node not to perform man-in-the-middle attacks. But if you know of any such solution I'd love to hear it.

It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too - all the time. It's just that our approximations are usually correct, so we don't call them hallucinations anymore. Realistically, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create the experience. It's also why we don't notice our blinks, or why we don't see the blind spot our eyes have.

AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

Hallucinations shouldn't be treated like a bug. They are a feature - just not one the big tech companies wanted.

When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.

You can't mitigate a man-in-the-middle attack on a technical level... because the mitigation provider is the man in the middle... that's the whole point of using DDoS mitigation. Nothing's stopping them from just sending incoming traffic to a phishing site if a bad actor were in control of it.

A mass exodus doesn't really happen in the traditional sense unless shit really hits the fan. For that to happen, a large majority or even everyone has to be displaced at once and there can be no way to salvage the situation. In this case, there were a lot of short-term ways out for users not directly affected.

But the whole situation is more akin to a war of attrition. The ones not convinced by the big things will be convinced by the smaller things that accumulate over time. Goodwill for reddit is at an all-time low, which hampers their ability to grow since word of mouth is effectively dead. People who provided effective labour for reddit in the form of moderation or content aggregation lost their morale to continue. Not all of them, for sure, but it might very well be a critical mass (even if they didn't move to lemmy).

It's like a line of dominoes increasing in size: if the ones that fell now were big enough to topple the next, eventually there will be a ripple effect. Eventually the quality of content goes down, the discourse turns stale and antagonistic, and communities fall apart. Only once the users who took the easy way out now realize that will they finally start the process of moving. And if reddit was doing so badly they had to make this move, I can only assume their future will be very grim indeed. The seed of destruction has been planted. (And if you want an example of that future, look at Twitter.)

Whether or not that all actually happens, I'm not sure. I'd like to believe it will, but some people revel in their unreasonableness, and they're often the easiest to exploit for financial gain. I think the best thing is to stop looking back and focus on what we have here and now. I think what lemmy has achieved so far is already more valuable than what reddit had.

Hexbear's opinions don't seem to be the problem, and given the ideological overlap with users here, that should be quite obvious in my opinion. You seem to have focused on the wrong part of the OP.

The problem is that they are presenting themselves as an ideological army. And especially that the admins of Hexbear seem to support this position, rather than it just being some rogue users.

Imagine if a Lemmy instance opened up for a specific religion whose whole point was to inject themselves into as many discussions as possible to push information favorable to their religion. The problem isn't that they believe in their religion, or even that they want to make the best case possible for it. It's the fact that they are trying to wield open discussions as a sword to convert people regardless of relevance or appropriateness.

Well, by this logic, hurricanes and tornadoes must be targeting Republican states. What's the message being sent there? 🤔 At least you can somewhat design and build architecture against earthquakes...

This is just OpenAI covering their ass by attempting to block the most egregious and obvious outputs in legal gray areas, something they've been doing for a while, which is why their AI models are known to be massively censored. I wouldn't call that 'hiding'. It's kind of hard to hide that it was trained on copyrighted material, since that's common knowledge, really.

Get one of those pillows where you can remove or add stuffing - Be your own Walter White.

P2P exposes your IP to those you need to connect to. So if you're a streamer or something - share a file and you dox yourself. It also means if you're offline you can't send the file.

It's just not practical compared to remote hosting for it to be the default. There are other apps you can download if you still want to use P2P.

If you watch the video, it's a damn miracle this didn't just kill everyone in that rather small room. What the hell.

It's funny how something like this gets posted every few days and people keep falling for it like it's somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

It's totally fine for AI models to train on AI generated content that is of high enough quality. Part of the research to train models is building data sets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a 'bad' example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn't originally in the training data. There's no reason that can't be good training data itself.

That's an eventual goal, which would be an artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist, it's just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data their text prediction also started giving somewhat believable and sometimes factual answers (mixed in with plenty of believable bullshit). Other data requires different training data, different models, and different finetuning, which is why it takes time.

It's highly likely for a company of OpenAI's size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) that they already have multiple AI models for different kinds of data, whether in research, training, or finetuning.

But even with all the individual pieces of an AGI existing, the technology to cross reference the different models doesn't exist yet. Because they are different models, and so they store and express their data in different ways. And it's not like training data exists for it either. And unlike physical beings like humans, it doesn't have any kind of way to "interact" and "experiment" with the data it knows to really form concrete connections backed up by factual evidence.

I'm not an expert in AI, I will admit. But I'm not a layman either. We're all anonymous on here anyways. Why not leave a comment explaining what you disagree with?

I used a 3rd party app for maybe a quarter of my time on reddit, so I could've technically gone without it. But I think anyone keeping up with the current situation can see old.reddit.com is on the chopping block soon, if not next. When reddit's admin team said half a year ago that the APIs weren't going anywhere for at least the next couple of years, and now has to deal with being called out on that deception, and is even adding new lies to the pile like the whole Apollo debacle, you know they can't be trusted in any capacity to keep promises.

If keeping an API accessible that does the exact same thing as their own app is too much of an expense to keep open, imagine how much maintaining a completely separate front end for their entire website must 'cost'. And that's in part why I've made this switch now and not when old.reddit.com gets killed eventually.

Completely true. But we cannot reasonably push the responsibility of the entire internet onto someone when they did their due diligence.

Like, some people post CoD footage to youtube because it looks cool, and someone else either mistakenly or maliciously takes that and recontextualizes it as combat footage from active warzones to shock people. Then people start reposting that footage with a fake explanation text on top of it, furthering the misinformation cycle. Do we now blame the people sharing their CoD footage for what other people did with it? Misinformation and propaganda are something society must work together to combat.

If it really matters, people will be out there warning others that the pictures being posted are fake. In fact, even before AI, that's what happened after a tragedy. People would post images claiming to be of what happened, only for them to later be confirmed as being from some other tragedy. Or how some video games have fake leaks because someone rebranded fanmade content as a leak.

Eventually it becomes common knowledge or easy to prove as being fake. Take this picture for instance:

It's been well documented that the bottom image is fake, and as such anyone can now find out what was covered up. It's up to society to speak up when the damage is too great.

Of course you are. There's nothing wrong with defending your beliefs, or advocating for them in the right context. Especially if they have sound arguments to back them up. (Also, I don't see any indication why that wouldn't be allowed based on this post, or the rules of conduct)

But pushing your beliefs is different. It's about foregoing actually convincing people and instead using underhanded tactics such as propaganda, brigading, or botting to make an opinion seem more sound than it really is. (Not saying your opinion necessarily is, by the way.)

Might want to post this on c/reddit - It'll probably be removed here, check the mod post for c/lemmyworld: Community moderation policy

That's incorrect. Sure, it has no comprehension of what the words it generates actually mean, but it does understand the patterns that can be found in the words. Ask an AI to talk like a pirate, and suddenly it knows how to transform words to sound pirate-like. It can also combine data from different texts about similar topics to generate new responses that never existed in the first place.

Your analogy is a little flawed too: if you mixed all the elements in a transformative way and didn't re-use any materials as-is, then even if you called it Mazefecootviltale, as long as the original material was transformed sufficiently, you haven't infringed on anything. LLMs don't get trained to recreate existing works (which would make them only capable of producing infringing works), but to predict the best next word (or even parts of a word) based on the input information. It's definitely possible to guide an AI towards specific source materials based on keywords that only exist in source material that could be infringing, but in general its output is so generalized that it's inherently transformative.

I mostly see psychological benefits:

  • Building confidence in writing and (when roleplaying) in interacting with other people. LLMs don't shame or get needlessly hostile. And since they follow your own style, it can feel like talking to a friend.
  • Related to that, the ability to help in processing traumatic events by writing about them.

For me personally, interacting with AI has helped me conquer some fears and shame that I buried long ago.

And that's something that's easy to forget once you've made the change. Uprooting something you use daily, to move to a new platform which feels new and different, takes quite a bit of mental effort and requires you to accept some anxiety, as you wean yourself off your habits. But when the power users go, and the new place becomes more familiar and understood, the rest will follow eventually as every step becomes easier to accept.

I agree, and I get it's a funny way to put it, but in this case they started the video with a massive disclaimer that they were not Carlin and that it was AI. So it's hard to argue they were putting words in his mouth. If anything, it sets a praiseworthy standard for disclosing when AI was involved, considering the hate mob such a disclosure attracts.

That's disingenuous. It's not like Meta is some unknown party here with a clean reputation. They have a history, one that repeatedly shows they couldn't care less for the fundamental freedoms of the fediverse. Just like in society, for us to build free platforms where everyone is welcome, we must paradoxically not tolerate those that wish to wield the freedom of the platform against itself.

If you use Stable Diffusion through a web UI (similar tools might exist for other models as well), you might have access to a feature called 'interrogate', which allows you to find an approximate prompt for an image. Can be useful if you need it for future images.

It can also be done online: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator
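For the curious, here's a minimal sketch of roughly what that looks like with the clip-interrogator Python package behind that page (the model name and file name below are just example values, check the package docs for what's currently supported):

```python
# Rough sketch: get an approximate prompt back out of an image with
# clip-interrogator (pip install clip-interrogator).
from PIL import Image
from clip_interrogator import Config, Interrogator

image = Image.open("my_render.png").convert("RGB")  # hypothetical input file
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))  # example model
print(ci.interrogate(image))  # prints a best-guess prompt for the image
```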

And this is a strawman. If this argument is being made, it's most likely because of their own misunderstanding of the subject. They are most likely trying to make the argument that the way biological neural networks and artificial neural networks 'learn' is similar, which is true to a certain extent since one is derived from the other. There's a legitimate argument to be made that this inherently provides transformation, and it's exceptionally easy to see that in most unguided prompts.

I haven't seen your version of this argument being spoken around here at all. In fact it feels like a personal interpretation of someone who did not understand what someone else was trying to communicate to them. A shame to imply that's an argument people are regularly making.

You can train AI models on AI-generated content though. Model collapse only occurs if you train on bad AI-generated content. Bots and people talking gibberish are just as bad for training an AI model. But there are ways to filter that from the training data, such as language analysis. They will also most likely filter out any lowly upvoted comments, or those edited a long time after their original post date.

And if you start posting now, any sufficiently good AI generated material, which other humans will like and upvote, will not be bad for the model.
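To illustrate the kind of filtering I mean (not any lab's actual pipeline, just a rough Python sketch with made-up field names and thresholds):

```python
# Made-up example: filter scraped comments before using them as training
# data by dropping low-scored posts and posts edited long after creation.
from datetime import datetime, timedelta

MIN_SCORE = 5                         # hypothetical quality threshold
MAX_EDIT_DELAY = timedelta(hours=24)  # edits long after posting are suspect

def keep_for_training(comment: dict) -> bool:
    if comment["score"] < MIN_SCORE:
        return False
    edited_at = comment.get("edited_at")
    if edited_at and edited_at - comment["created_at"] > MAX_EDIT_DELAY:
        return False
    return True

comments = [
    {"body": "Helpful, well-written answer", "score": 42,
     "created_at": datetime(2023, 7, 1), "edited_at": None},
    {"body": "asdfgh gibberish", "score": 0,
     "created_at": datetime(2023, 7, 1), "edited_at": None},
    {"body": "Quietly rewritten months later", "score": 12,
     "created_at": datetime(2023, 1, 1), "edited_at": datetime(2023, 6, 1)},
]

training_data = [c["body"] for c in comments if keep_for_training(c)]
print(training_data)  # ['Helpful, well-written answer']
```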

You seem to misunderstand what an LLM does. It doesn't generate "right" text. It generates "probable" text. There's no right or wrong since it only generates a single word ahead of where it currently is. Hence why it can generate information that's complete bullshit. I don't know the details about this Go AI you're talking about, but it's pretty safe to say it's not an LLM and doesn't use a similar technique, as Go is a game and not a creative work. There are many techniques for creating algorithms that fall under the "AI" umbrella.
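As a toy illustration of "probable, not right" (the vocabulary and scores below are completely made up, not from any real model):

```python
# Toy next-token step: the model assigns a score to every candidate token
# and samples from the resulting probability distribution, one token at a
# time. Nothing here checks whether the chosen continuation is true.
import math
import random

vocab = ["blue", "green", "made of cheese", "falling"]
logits = [4.0, 2.5, 0.5, 1.0]  # pretend scores for continuing "The sky is ..."

# Softmax: turn raw scores into probabilities
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
```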

Your second point is a whole different topic. I was referring to a "derivative work", which is not the same as "fair use". Derivative works are quite literally everywhere: https://en.wikipedia.org/wiki/Derivative_work A derivative work doesn't require fair use, as it no longer falls under the same copyright as the original, while fair use is an exception under which copyrighted work can be used without infringing.

And also, those projects most of the time do not get shut down because they are actually illegal; they get shut down because companies with tons of money can send threatening letters all day and have a team of high-quality lawyers to send them. A cease and desist isn't legal enforcement from a judge, it's a "recommendation to comply so that we don't (attempt to) sue you". And that works on most small projects. It very rarely goes to court over these things. And sometimes it's totally warranted: especially for fan projects it's extremely hard to completely erase all protected copyrighted work, since they are specifically made to at least imitate or expand upon what they're a fan project of.

EDIT: Minor clarification

I'm not sure where you think I'm giving it too much credit, because as far as I read it we already totally agree lol. You're right, methods exist to diminish the effect of hallucinations. That's what the scientific method is. Current AI has no physical body and can't run experiments to verify objective reality. It can't fact-check itself other than being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them in with something probable - which is likely going to be bullshit.

My whole point was that truly fixing it would basically mean creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.

I had YT Premium for a while, and then I just wanted to download some videos (you know, like they advertise you can) and they just didn't allow it. I had to either watch them in the YT app or on youtube.com on my PC. That's not downloading - that's just streaming with less computation for youtube, which helps youtube but not me. What a great 'premium benefit'!

Cancelled my premium right then and there. If they can't provide a feature as simple as being able to download videos to mp4 or something, that's just misleading. It literally takes seconds to find a third-party site or app (NewPipe) that does it.

Halfway-North American

Hence why it's important that open source models keep the support of the public. If they have to close up shop, what remains will be the models made by tech giants. They will be censored, they will be neutered, unless you pay enough for them. The power should remain with the artists, and not with those who have the money.

The inverse is also true. Disney will make their own AI regardless of being able to use anyone else's data for training, because they have a ton of data already. The only ones that will be shafted if that freedom is restricted are those without a library of data to train on, who will have no access to AI.

Yup. It used to be quite easy to find the games that were worthwhile to play, since there were very few for-profit games and not too much choice. Nowadays I only get excited about a game if I hear about it from people I trust to share my taste. It's just easier to go back to the classics because you know you're going to have a better time than with most things you buy new.

Always on the lookout though; gems are still being produced, they've just become a lot harder to find.

First of all, your second point is very sad to hear, but also a non-factor. You are aware people stole artwork before the advent of AI, right? This has always been a problem with capitalism. It's very hard to get accountability unless you are some big shot with easy access to a lawyer at your disposal. It's always been shafting artists and those who do a lot of hard work.

I agree that artists deserve better and should gain more protections, but the unfortunate truth is that the wrong kind of response to AI could shaft them even more. Let's say inspiration could in some cases be ruled to be copyright infringement if the source of the inspiration could be reasonably traced back to another work. This could give big companies like Disney an easier pathway to sue people for copyright infringement; after all, your mind is forever tainted with their IP after you've watched a single Disney movie. Banning open source models from existing could also create a situation where the same big companies could build internal AI models from the art in their possession, but anyone without enough material could not. Which would mean that no one but the people already taking advantage of artists would benefit from the existence of the technology.

I get that you want to speak up for your friends and family, and perhaps they do different work than I imagine, but do you actually talk to them about what they do in their work? Because digital artists also use non-AI algorithms to generate meshes and images. (And yes, that could be summed up as 'type shit in and a 3D model appears'.) They also use building blocks, prefabs, and reference assets to create new unique assets. And like all artists they do take (sometimes direct) inspiration from the ideas of others, as does the rest of humanity. Some of the digital artists I know have embraced the technology and combined it with the rest of their skills to create new works more efficiently and reduce their workload, either by being able to produce more, or by being able to spend more time refining works. It's just a tool that has made their life easier.

Awesome, and a great explanation for a layperson. Because the industry has been faking lighting for so long and lighting is quite important, the industry has become incredibly good at it. But it also takes a lot of development time that could be spent on adding more content or features. There's a reason the opinion about ray tracing is extremely positive within the game development industry. But nobody's expecting it to become the norm overnight either, and the period with hybrid support for both ray tracing and legacy lighting is only just starting.

NovelAI - They even train their own models specifically for storytelling (and to avoid undue censorship from an outside model provider like OpenAI)