diffuselight

@diffuselight@lemmy.world
0 Posts – 63 Comments
Joined 12 months ago

It’s a bullshit article by a bullshit website. The law in question is a decade old. Japan hasn’t decided anything - they are slow to decide new things. It’s just this site clickbaiting.

Just fascists destroying faith in everything that’s not them. Normal modus operandi. These people need sheep who are so confused about what to believe that they don’t trust anyone anymore and instead substitute blind faith for trust.

Hexbear neonazi tankies on one side, overbearing blocking on the other.

That’s a fundamental misunderstanding of how diffusion models work. These models extract concepts and can effortlessly combine them into new images.

If it learns woman + crown = queen

and queen - woman + man = king

it is able to combine any such concepts.
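A toy sketch of that vector arithmetic, assuming made-up 4-dimensional embeddings (real models learn hundreds of dimensions from data):

```python
# Toy sketch of concept arithmetic on word vectors.
# The 4-d vectors here are invented for illustration only.
import numpy as np

emb = {
    "woman": np.array([1.0, 0.0, 0.2, 0.1]),
    "man":   np.array([0.0, 1.0, 0.2, 0.1]),
    "crown": np.array([0.0, 0.0, 1.0, 0.0]),
}
emb["queen"] = emb["woman"] + emb["crown"]       # woman + crown = queen
king = emb["queen"] - emb["woman"] + emb["man"]  # queen - woman + man

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'king' lands exactly on man + crown, a combination never provided directly.
print(cosine(king, emb["man"] + emb["crown"]))  # ~1.0
```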

As Stability has noted, any model that has the concept of naked and the concept of child in it can be used like this. They tried to remove naked from Stable Diffusion 2 and nobody used it.

Nobody trained these models on CSAM, and the problem is a dilemma in the same way a knife is a dilemma. We all know a malicious person can use a knife for murder, including of children. Yet society has decided that knives have sufficient other uses that we still allow their sale pretty much everywhere.

Oh they are downright into extortion. Try asking them to delete your data and see what happens.

it’s Facebook….

You don’t show full posts because then your team gets to count an ‘active’ user when people click to expand.

Metrics becoming the goal, 101. Active user growth is important to get investors to hold the bag for your VCs. Every action right now, while VC money is getting scarce, is aimed at making Reddit look like a profitable target for retail investors so the VCs can cash out. It doesn’t matter if what you do isn’t sustainable, because you are the VCs’ bitch now and they want their payout before you crash and burn.

None of that really works anymore in the age of AI inpainting. Hash matching / perceptual hashing worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don’t need it to be authentic to do that.

It’s a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.

The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything - basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.
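For context, here is roughly what perceptual-hash matching looks like with the Python imagehash library (file names and the distance threshold are placeholders, not a standard); inpainting changes enough pixels to push the distance past any usable threshold:

```python
# Minimal sketch of perceptual-hash matching with the `imagehash` library.
# Small edits keep the hash close; AI inpainting does not.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
candidate = imagehash.phash(Image.open("reupload.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
distance = original - candidate
if distance <= 8:  # threshold is a tuning choice, not a standard
    print("likely the same known image")
else:
    print("no match - inpainted variants usually land here")
```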

Sacha Baron Cohen got it right the second time

You cannot detect AI-generated content. Not with any real-world accuracy, and it will only get worse.

Also, because Google relies on growth for everything from compensation structure to business model, they are in a bind - ads are not growing anymore, it’s done.

And while they managed to create an illusion of growth this earnings round by juicing subscription fees 20% and increasing ad load everywhere, it’s not a sustainable tactic. We are already seeing a tech sell-off as people grow less and less confident.

So they rely on the AI narrative to keep investors invested. Google needs AI to work, or the investors will move their money somewhere that may offer higher returns than a squeezed-out ads model.

Worse even, they are being attacked by AI on the quality front (junk content) and in the marketplace (OpenAI), so they don’t have a choice but to take a pro-AI stance.

Exactly, how ever would we keep our own tribal community safe from people who want to learn /s

There’s actually a setting, at least on Facebook, to specifically disable alcohol and gambling ads, based on some settlement a few years ago, but they hid it really well.

She’s an amazing engineer and inventor and was one of the best Chinese voices in the global tech community, consistently bringing her own perspectives without holding back or letting herself be bullied. What a massive loss.

Whisper is perfectly fine as a subtitle generator; you can run it at home and subtitle a movie in minutes.
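A minimal sketch of that workflow with the open-source openai-whisper package (model size and file names are placeholders):

```python
# Generate SRT subtitles locally with openai-whisper.
import whisper

def srt_time(t: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:23,456."""
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{int((s % 1) * 1000):03}"

model = whisper.load_model("small")   # runs on a consumer GPU or even CPU
result = model.transcribe("movie.mkv")

with open("movie.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```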

I just trained an LLM on the comment you put on the public internet. Do you feel violated enough to equate it to physical violation?

You are answering a question with a different question. LLMs don’t make pictures of your mom. And this particular question? It has existed roughly since Photoshop has.

It just gets easier every year. It was already easy. You could already pay someone 15 bucks on Fiverr to do all of that, for years now.

Nothing really new here.

The technology is also easy. Matrix math. About as easy to ban as mp3 downloads. Never stopped anyone. It’s progress. You are a medieval knight asking to put gunpowder back into the box, but it’s clear it cannot be put back - it is already illegal to make non-consensual imagery, just as it is illegal to copy books. And yet printers exist and photocopiers exist.

Let me be very clear - accepting the reality that the technology is out there, that it’s basic, easy to replicate, and on a million computers now, is not disrespectful to victims of non-consensual imagery.

You may not want to hear it, but just like with encryption, the only other choice society has is full surveillance of every computer to prevent people from doing “bad things”. Everything you complain about is already illegal and has already been possible - it just gets cheaper every year. What you want protection from is technological progress, because society sucks at dealing with the consequences of it.

To be perfectly blunt, you don’t need to train any generative AI model for powerful deepfakes. You can use technology like Roop and ControlNet to synthesize any face onto any image from a single photograph. Training not necessary.
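To be clear, this is not Roop’s API - just a hedged sketch of the same “no training needed” point, using a pretrained ControlNet through the diffusers library (the model IDs are the public reference checkpoints, the file name is a placeholder):

```python
# Training-free, image-conditioned generation with pretrained checkpoints.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# One conditioning image derived from a single photo steers the whole
# generation - no fine-tuning or training run involved anywhere.
condition = load_image("edges_from_single_photo.png")
image = pipe("a portrait photo", image=condition).images[0]
image.save("out.png")
```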

When you look at it that way, what point is there in trying to legislate training with these arguments? None.

Or common clients like Voyager need it. Should be doable. Would sell me on a client instantly.

That’s because Repuglicans are self-selecting. People need to stop seeing them as half of the population. They are not.

I really tried to even understand this app … I mean really tried. I just don’t get it.

Waaay too convoluted UX, and even suffering through the YouTube videos does not help.

Oh, it’s worse. They paid Rupert Murdoch, who was the one who forced the government to pass the law. He owns all their balls down under. So the bad guys won, however you look at it.

It’s already covered under those laws. So what are you doing that’s different from ChatGPT hallucinating here?

Those laws don’t spell out the tools (Photoshop); they hinge on reproducing likeness.

Imagine being forced to make a better product.

That said, after the Threads position, the actual story here is that Meta no longer thinks the benefit of news outweighs the hassle. If they did, they’d pay.

And they are likely right - AI + US election cycle news isn’t gonna be a net positive for them

Wait, wait - after decades of satanic panic they are now D&D fans?

It’s not possible to tell AI-generated text from human writing at any level of real-world accuracy. Just accept that.

That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines up to 30B parameter size, and by the end of the year will run at better than GPT-3.5 ability on an iPhone.

Local LLMs like Airoboros, WizardLM, StableVicuna, or StableCode are real alternatives in many domains.
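A sketch of how little it takes to run one of these locally, using llama-cpp-python (the model file and prompt format are placeholders for whatever quantized weights you actually have):

```python
# Local inference with llama-cpp-python; no server, no API key.
from llama_cpp import Llama

# Hypothetical quantized model file - substitute your own download.
llm = Llama(model_path="wizardlm-13b.q4_0.bin", n_ctx=2048)

out = llm(
    "### Instruction: Summarize why local LLMs matter.\n### Response:",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```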

Game industry professional here: we know Riccitiello. He presided over EA at critical transition periods and failed them. Under his tenure, Steam won total supremacy because he was trying to shift people to pay-per-install, slide-your-credit-card-to-reload-your-gun monetization. Yes, his predecessor jumped the shark by publishing the Orange Box, but Riccitiello’s greed sealed the total failure of the largest company to deal with digital distribution, by ignoring that gamers loved collecting boxes (something Valve understood and eventually turned into the massive Sale business, where people buy many more games than they consume).

He presided over EA earlier than that too, and failed.

Both times, he ended up getting sacked after the stock reached a record low. But personally he made out like a bandit, selling EA his own investment in VG Holdings (BioWare/Pandemic) after becoming their CEO.

He’s the kind of CEO a board of directors would appoint to loot a company.

At Unity, he invested heavily in ads and gambled on being able to become another landlord. He also probably paid good money for reputation management (search for Riccitiello or even his full name on Google and marvel at the results) after certain accusations were made.

Why would the state need to make a deal?

These things never work in the real world. We’ve seen this over and over. It’s snakeoil. Latent space mappings may survive compression but don’t work across encoders.

Oh no, anyway, back to talking about rich people’s submarine suicide /s

Given the Chinchilla scaling laws, nobody in their right mind trains models by shotgun-ingesting all data anymore. Gains are made with quality of data at this point, more than volume.
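Back-of-the-envelope version, assuming the roughly 20-training-tokens-per-parameter rule of thumb from the Chinchilla paper (Hoffmann et al. 2022):

```python
# Chinchilla rule of thumb: ~20 training tokens per parameter.
# The point: the compute-optimal data volume is bounded, so data
# quality becomes the differentiator, not raw volume.
TOKENS_PER_PARAM = 20

for params in (7e9, 13e9, 70e9):
    tokens = params * TOKENS_PER_PARAM
    print(f"{params / 1e9:.0f}B params -> ~{tokens / 1e12:.2f}T tokens")
# 7B  -> ~0.14T tokens
# 13B -> ~0.26T tokens
# 70B -> ~1.40T tokens
```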

Please, the internet was great 10 years ago.

Proxmox with Alpine containers or Unraid are both painless.

Lol:

Content industry: It can reproduce our stuff
OpenAI:
Content industry: They are hiding that it can reproduce us

The trajectory is such that current Llama 2 70B models are easily beating GPT-3.5 and are approaching GPT-4 performance - an A6000 can run them comfortably, and this is only a few months after release.

Nah, the trajectory is not in favor of proprietary, especially since they will have to dumb their models down more and more due to alignment.

https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper?trk=feed_main-feed-card_feed-article-content

Go Cloudflare, renew for 10 years. No-brainer.

I definitely suggest buying business class if you hate having your legs broken in economy.

They will do it, and then their constitutional court will declare it unconstitutional, and then they will do it again. Basically the game of the last two decades. Eventually they’ll manage to stuff the court.

Clearly not :)

The entropy in text is not high enough to provide space for watermarking. No, it does not get better in longer text, because you have control over chunking. You have control over top-k, temperature, and the prompt, which creates an infinite output space. Open text-generation-webui, go to the parameters page, and count the number of parameters you can adjust to guide the outcome. In the future you can add WASM-encoded grammars to that list too.
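To make that concrete, here is a sketch with the Hugging Face transformers pipeline showing how just two of those knobs fan out the output space (gpt2 is an arbitrary small stand-in):

```python
# Every sampling knob changes the text; the output space explodes.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

for temperature, top_k in [(0.7, 40), (1.0, 100), (1.3, 200)]:
    out = generate(
        "The essay argues that",
        max_new_tokens=40,
        do_sample=True,          # sample instead of greedy decoding
        temperature=temperature,  # flattens/sharpens the distribution
        top_k=top_k,              # widens/narrows the candidate pool
    )[0]["generated_text"]
    print(f"T={temperature}, top_k={top_k}: {out!r}\n")
```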

Server-side hashing / watermarking can be trivially defeated via transformations / emoji injection. Latent space positional watermarking breaks easily with post-processing. It would also kill any company trying to sell it (Apple be like… you want all your chats at OpenAI, or in the privacy of your phone?) and ultimately be massively dystopian.
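A toy demonstration of the transformation problem: one invisible character and a server-side fingerprint no longer matches, while the text reads identically to a human.

```python
# One zero-width space defeats any exact server-side fingerprint.
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

original = "This essay was generated by a model."
tampered = original.replace(" ", "\u200b ", 1)  # inject a zero-width space

print(fingerprint(original))  # what the server logged
print(fingerprint(tampered))  # no longer matches; text looks identical
```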

Unlike plagiarism checks, you can’t compare to a ground truth.

Prompt guidance can box in the output space to a point where you could not possibly tell it’s not human. The technology has moved from central servers to the edge; even if you could build something for one LLM, another one not in your control, like a local LLaMA (which is open source), would not carry it (see how quickly the Stable Diffusion 2 VAE watermarking was removed after release).

In a year your iPhone will have a built-in LLM. Everything will have LLMs, some highly purpose-bound with only a few million parameters. Fine-tuning like LoRA is accessible to a large number of people with consumer GPUs today and will be commoditized in a year. Since it can shape the output, it again increases the possibility space of outputs and will scramble patterns.

Finally, the bar is not “better than a flip of a coin”. If you are going to accuse people or ruin their academic career, you need triple-nine accuracy or you’ll wrongfully accuse hundreds of essays a semester.
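The arithmetic behind that, for a hypothetical semester of 10,000 human-written essays:

```python
# False-accusation count at different detector accuracies, applied to
# essays that are in fact human-written (a hypothetical 10,000 per semester).
essays = 10_000

for accuracy in (0.90, 0.99, 0.999):
    false_positives = essays * (1 - accuracy)
    print(f"{accuracy:.1%} accuracy -> ~{false_positives:.0f} students wrongly accused")
# 90.0% -> ~1000
# 99.0% -> ~100
# 99.9% -> ~10
```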

The most likely detection would be if someone found a remarkably stable signature that magically works for all the models out there (hundreds by now), doesn’t break with updates (lol - see ChatGPT presumably getting worse), survives quantisation, and somehow can be kept secret from everyone, including AI, which can trivially spot patterns in massive data sets. Not Going To Happen.

Even if it were possible to detect, it would be model- or technology-specific and lag the technology - we are moving at 2,000 miles an hour, and in a year it may not be transformers. There’ll be GAN or RNN elements fused into it, or something completely new.

The entire point of the technology is to approximate humanity - plus we are moving at it from the other direction: more and more conventional tools embed AI (from your camera not being able to take non-AI-touched pictures anymore, to Photoshop infill, to word autocomplete, to new spellchecking and grammar models).

People latch onto the idea that you can detect it because it provides an escapist fantasy and copium, so they don’t have to face the change that is happening. If you can detect it, you can keep it out. You can’t. Not against anyone who has even the slightest idea of how to use this stuff.

It’s like when gunpowder was invented and samurai threw themselves into the machine guns, because it rendered decades of training and perfection, of knowledge about fortification, war, and survival, moot.

On video, detection will remain viable for a long time due to the available entropy. Text? It’s always been snakeoil, and everyone peddling it should be shot.

What is convenience worth to you? What if you can’t have all the things?

Counterpoint: you could launder all incoming content through an LLM running locally, with the task of detoxifying content. The technology cuts both ways.
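A sketch of that idea with llama-cpp-python (the model path and prompt wording are placeholders, not a recommendation of a specific model):

```python
# Launder incoming comments through a local model before reading them.
from llama_cpp import Llama

# Hypothetical local instruct model - substitute your own weights.
llm = Llama(model_path="local-instruct-model.q4_0.bin", n_ctx=2048)

def detoxify(comment: str) -> str:
    prompt = (
        "Rewrite the following comment to keep its factual content "
        "but remove insults and hostility:\n\n"
        f"{comment}\n\nRewritten comment:"
    )
    out = llm(prompt, max_tokens=256, temperature=0.3)
    return out["choices"][0]["text"].strip()

print(detoxify("Only a complete idiot would configure the server like that."))
```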