But it's the only thing I want!

The Picard Maneuver@startrek.website (mod) to Lemmy Shitpost@lemmy.world – 905 points

(sorry if anyone got this post twice. I posted while Lemmy.World was down for maintenance, and it was acting weird, so I deleted and reposted)



This is why I run local uncensored LLMs. There's nothing they won't answer.

That seems awesome. I'd wondered whether it was something regular users could manage at home.

Yeah, just use llama.cpp, which runs on the CPU instead of the GPU. Any model you see on huggingface.co with "GGUF" in the name is compatible with llama.cpp, as long as you compile llama.cpp from source using the GitHub repository.
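If you'd rather drive it from a script than the command line, here's a minimal sketch using the llama-cpp-python bindings (`pip install llama-cpp-python`), which wrap llama.cpp. The model filename is an assumption; point it at whatever GGUF file you actually downloaded.

```python
# Minimal sketch: run a local GGUF model on the CPU via llama-cpp-python.
# The model path below is an assumption -- substitute your own GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizard-vicuna-13b-uncensored.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,  # context window size
)

# Run a single completion entirely locally.
output = llm(
    "Q: How do I run an LLM at home? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])
```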

There's also GPT4All, which runs on llama.cpp and has a UI, but I've had trouble getting it to work.

The best general-purpose uncensored model is Wizard-Vicuna-Uncensored.

You can literally get it up and running in 10 minutes if you have fast internet.
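For the download step, here's a hedged sketch using the huggingface_hub library. The repo id and filename follow a common naming pattern for quantized GGUF uploads but are assumptions, so verify them against the actual model page on huggingface.co.

```python
# Sketch: fetch a quantized GGUF file from Hugging Face.
# Repo id and filename are assumptions -- check the model page first.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGUF",  # assumed repo
    filename="wizard-vicuna-13b-uncensored.Q4_K_M.gguf",   # assumed file
)
print(f"Model saved to {path}")  # feed this path to llama.cpp or llama-cpp-python
```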