WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations'

L4sBot@lemmy.world (mod) to Technology@lemmy.world – 343 points
pcmag.com

Is it using ChatGPT as a backend, like most so-called ChatGPT "alternatives"? If so, it will get banned soon enough.

If not, it seems extremely impressive, and extremely costly to create. I wonder who's behind it, in that case.

Really feeling like this is Reddit, with how nobody in this chain read the article:

"To create the chatbot, the developer says they used an older, but open-source large language model called GPT-J from 2021"

So no huge GPU bill, but not zero either: they added some training data specifically about malware on top of it.

Ah, right you are. I'm surprised they're able to get the kind of results described in the article out of GPT-J. I've tinkered with it a bit myself, and it's nowhere near GPT-3.5 in terms of "intelligence". Haven't tried it for programming though; might be that it's better at that than general chat.
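For anyone curious, GPT-J-6B is a public checkpoint on Hugging Face, so trying the base model the article describes takes only a few lines. A minimal sketch, assuming the standard transformers/accelerate setup; the prompt and generation settings are just illustrative and have nothing to do with WormGPT's actual fine-tune:

```python
# Minimal sketch: load the open-source GPT-J-6B checkpoint and generate a completion.
# Model ID and settings are the publicly documented defaults, nothing WormGPT-specific.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 6B model fits on a single ~16 GB GPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "def is_palindrome(s):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

That's also the point about cost: running or fine-tuning a 6B-parameter model like this is consumer-GPU territory, nothing like training a GPT-3.5-class model from scratch.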

I could see programming almost being an easier target too; it's easier to recognize patterns there than in crazy-ass English.

Though the article did say they got good phishing emails out of it too, which is saying something.

The genie is out of the bottle. It was shown early on that you can use an AI like ChatGPT to create and enhance the datasets needed to train AI language models like ChatGPT. Now OpenAI says that isn't allowed, but since it's already been done, it's too late.

Rogue AIs with specialized purposes will spring up en masse over the next six months, and many of them we'll never hear about.
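For context on what "use ChatGPT to create datasets" means in practice: it's the Alpaca-style distillation pattern, where one model's answers become another model's training data. A minimal sketch, assuming the current OpenAI Python client; the seed prompts and output file are made up for illustration:

```python
# Rough sketch of the "use a chatbot to build a training set" idea described above.
# Seed prompts and file name are placeholders for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_prompts = [
    "Explain what a hash table is.",
    "Summarize the plot of Hamlet in two sentences.",
    # ...a real dataset would use thousands of seed instructions
]

with open("synthetic_dataset.jsonl", "w") as f:
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        # Save instruction/response pairs in the format most fine-tuning scripts expect
        f.write(json.dumps({
            "instruction": prompt,
            "response": resp.choices[0].message.content,
        }) + "\n")
```

Pairs like these are exactly the kind of instruction/response data a smaller open model such as GPT-J can then be fine-tuned on, which is why a terms-of-service ban arriving after the technique spread doesn't undo much.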

I don't think it'll be a new AI; I think it'll just be using ChatGPT plus some prompts that cause it to be jailbroken.

Essentially you could probably get ChatGPT to do this without having to go to this service; it's just that they're keeping whatever prompts they're using secret.

I don't know this for sure, but it seems very unlikely that they've gone to the expense of buying a bunch of GPUs to build an AI.
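If that hypothesis is right, the whole "service" could be a thin proxy that prepends an undisclosed system prompt to each request before forwarding it to the official API. A rough sketch of that architecture, assuming Flask and the OpenAI Python client; the endpoint, model name, and prompt placeholder are illustrative, not anything actually known about WormGPT:

```python
# Sketch of the "it's just ChatGPT plus secret prompts" hypothesis: a tiny proxy
# that adds a hidden system prompt to every user request. The prompt text is a
# placeholder; the route and model are arbitrary choices for the example.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

HIDDEN_SYSTEM_PROMPT = "..."  # the part such a service would keep secret

@app.post("/chat")
def chat():
    user_message = request.json["message"]
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": resp.choices[0].message.content})
```

Which would also explain the economics: no GPUs to buy, just per-request API costs.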

Isn't the rogue AI already here? Weren't some models already leaked? And haven't some of those already proved to be doing things they weren't supposed to?

If it is using ChatGPT as a backend, my guess is that they are using Azure OpenAI and know what they are doing.

Azure OpenAI allows you to turn off abuse monitoring and content filtering if you have legitimate reasons to do so.

It would be very hard for a malicious actor to get approval to turn off both through a front company. But if one managed to do it, they could run such a malicious ChatGPT service with little to no chance of being found out.