abcdqfr

@abcdqfr@lemmy.world
2 Post – 26 Comments
Joined 2 months ago

Small typo in the headline *gestapo

Wake me up when it works offline "The Llama 3.1 models are available for download through Meta's own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time."

What even is this stock photo with the kid getting held up by the dick?

Are we going to war or is the author bad at writing?

Yes an ad, and a free one at that.

The boots lick themselves

Because some smart TVs will up and brick themselves by irreparably filling their storage with various updates, to the point of no longer being able to install or even update anything on the TV whatsoever (THANK YOU, Samsung)

The preferred alternative is a healthy relationship after enough therapy, the latter being a [pay]wall for some

I'm putting ten in the bathtub and naming it Steve.

We are not alone then. Thanks for your input!

I'll definitely be keeping my Nvidia card for AI/ML/CUDA purposes. It'll live in a dual-boot box for Windows gaming when necessary (Bigscreen Beyond, for now). I am curious to see what 16GB of AMD VRAM will let me get up to anyway.

SMB/NFS
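For reference, the two can be mounted like this; the host, share, and mountpoint names here are placeholders, not anything from the thread:

```shell
# SMB/CIFS mount (needs the cifs-utils package installed)
sudo mount -t cifs //nas.local/media /mnt/media -o username=me,vers=3.0

# NFS mount (needs nfs-common on Debian-likes)
sudo mount -t nfs nas.local:/export/media /mnt/media
```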

So what was the conspiracy theory around tpm requirements, bitlocker and copilot? Some new privacy nightmare?

Is that perhaps the setting of Letterkenny?

Well, I finally got the Nvidia card working to some extent. On the recommended driver it only works in lowvram; medvram maxes out VRAM too easily on this driver/CUDA version for whatever reason. Does anyone know the current best Nvidia driver for SD on Linux? Perhaps 470, the other one provided by the LM driver manager...?
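Before swapping drivers, a quick sanity check is what driver/CUDA pairing you're actually on. `nvidia-smi` ships with the driver; the one-liner at the end assumes a venv with PyTorch installed:

```shell
# Report the installed driver version and GPU name
nvidia-smi --query-gpu=driver_version,name --format=csv

# What CUDA version PyTorch was built against, and whether it sees the GPU
# (run inside the SD venv; assumes torch is installed there)
python3 -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```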

How bad are your crashes? Mine will either freeze the system entirely or crash the current LightDM session, sometimes recovering, sometimes freezing anyway; it needs a power cycle to recover. What is the DE you speak of? Openbox?

What a treat! I just got done setting up a second venv within the SD folder, one called amd-venv and the other nvidia-venv. I copied the webui.sh and webui-user.sh scripts and made separate flavors of those as well to point to the respective venvs. Now if I just had my Nvidia drivers working, I could probably set my power supply on fire running them in parallel.
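A minimal sketch of that layout, assuming the stock A1111 folder structure; `venv_dir` is a variable webui-user.sh already supports, but the -amd/-nvidia script names are my own invention:

```shell
cd ~/stable-diffusion-webui

# One venv per GPU vendor
python3 -m venv amd-venv
python3 -m venv nvidia-venv

# Separate flavors of the launcher config, each pointing at its own venv
cp webui-user.sh webui-user-amd.sh
cp webui-user.sh webui-user-nvidia.sh
echo 'venv_dir="amd-venv"'    >> webui-user-amd.sh
echo 'venv_dir="nvidia-venv"' >> webui-user-nvidia.sh
```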

Since only one of us is feeling helpful, here is a 6-minute video for the rest of us to enjoy: https://www.youtube.com/watch?v=lRBsmnBE9ZA

I might take the docker route for the ease of troubleshooting if nothing else. So very sick of hard system freezes/crashes while kludging through the troubleshooting process. Any words of wisdom?
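A hedged sketch of that docker route, so a crash takes down the container instead of the whole session. The image name is a placeholder, and `--gpus all` assumes the NVIDIA Container Toolkit is installed:

```shell
# Placeholder image name -- substitute whatever SD webui image you use
docker run -d --name sd-webui \
  --gpus all \
  -p 7860:7860 \
  -v "$HOME/sd-models:/models" \
  --restart unless-stopped \
  some/stable-diffusion-webui:latest

# If it wedges, restart just the container instead of power-cycling the box
docker restart sd-webui
```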

Ah, thanks. In my experience it's my AMD card causing the crashes with SD. CUDA is native to Nvidia, hence the stability.

Intriguing. Is that an 8GB card? Might have to try this after all.

What the heck is Whisper? I've been fooling around with HASS for ages and haven't heard of it, even after at least two minutes of searching. Is it OpenAI-affiliated hardware?

Do you need to be hazardously close to a tower for good stability? Fascinating for the future of wireless power!

I had that concern as well, with it being a new card. It performs fine in gaming as well as in every glmark benchmark so far. I have it chalked up to AMD support being in experimental status on Linux/SD. Any other stress tests you recommend while I'm in the return window!? lol

Thank you!! I may rely on this heavily; too many different drivers to try willy-nilly. I am in the process of attempting with this guide/driver for now. Will report back with my luck or misfortunes: https://hub.tcno.co/ai/stable-diffusion/automatic1111-fast/

I started reading into the ONNX business here: https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html Didn't take long to see that was beyond me. Has anyone distilled an easy-to-use model converter/conversion process? One I saw required an HF token for the process, yeesh.
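One route I've seen that skips hand-rolling the conversion is Hugging Face Optimum's ONNX exporter. A sketch, assuming `optimum[onnxruntime]` is installed; the model path and output directory are examples, and pointing `--model` at a local checkout avoids needing an HF token for gated repos:

```shell
# Install the exporter (the onnxruntime extra pulls in what's needed)
pip install "optimum[onnxruntime]"

# Export a local Stable Diffusion checkout to ONNX; paths are placeholders
optimum-cli export onnx --model ./stable-diffusion-v1-5 sd15_onnx/
```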
