It would have cost you nothing to just not post this
Small typo in the headline *gestapo
Wake me up when it works offline. "The Llama 3.1 models are available for download through Meta's own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time."
Yeah! Jobs that bros can reciprocate! There should be a word for that...
What even is this stock photo with the kid getting held up by the dick?
Are we going to war or is the author bad at writing?
Yes, an ad, and a free one at that.
The boots lick themselves
Because some smart TVs will up and brick themselves by irreparably filling their storage with various updates, to the point of no longer being able to install or even update anything on the TV whatsoever (THANK YOU, Samsung)
*checks notes* Shareholders?!
The preferred alternative is a healthy relationship after enough therapy, the latter being a [pay]wall for some
And tape the outlet anyway, because the painters are hooligans. - electrician
I'm putting ten in the bathtub and naming it Steve.
See bullet 4. It takes a non-zero amount of effort. Keep showing up with only 1 effort, just out of spite if you have to. But don't let it be zero.
We are not alone then. Thanks for your input!
Burger patty press still in the cardboard
SMB/NFS
I'll definitely be keeping my nvidia card for AI/ML/CUDA purposes. It'll live in a dual-boot box for windows gaming when necessary (Bigscreen Beyond, for now). I am curious to see what 16GB of AMD VRAM will let me get up to anyway.
Since only one of us is feeling helpful, here is a 6 minute video for the rest of us to enjoy https://www.youtube.com/watch?v=lRBsmnBE9ZA
So what was the conspiracy theory around tpm requirements, bitlocker and copilot? Some new privacy nightmare?
Well I finally got the nvidia card working to some extent. On the recommended driver it only works in lowvram; medvram maxes out VRAM too easily on this driver/CUDA version for whatever reason. Does anyone know the current best nvidia driver for SD on Linux? Perhaps 470, the other one provided by the LM driver manager...?
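In case it helps the next person, here's roughly what I'm toggling in webui-user.sh (assuming the stock AUTOMATIC1111 launcher; the flags are the webui's own, the comments are just my understanding of them):

```bash
# webui-user.sh (AUTOMATIC1111 stable-diffusion-webui)
# --medvram splits the pipeline between VRAM and system RAM;
# --lowvram is more aggressive still, and slower.
export COMMANDLINE_ARGS="--medvram"
# fallback when medvram still OOMs on this driver/CUDA combo:
# export COMMANDLINE_ARGS="--lowvram"
```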
How bad are your crashes? Mine will either freeze the system entirely or crash the current lightdm session, sometimes recovering, sometimes freezing anyway. It needs a power cycle to recover. What is the DE you speak of? Openbox?
I might take the docker route for the ease of troubleshooting if nothing else. So very sick of hard system freezes/crashes while kludging through the troubleshooting process. Any words of wisdom?
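If I do go that way, my starting point would be a sanity check that the container runtime can see the GPU at all (this assumes the NVIDIA Container Toolkit is installed; the image tag is just an example, not gospel):

```bash
# should print the same table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```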
What a treat! I just got done setting up a second venv within the SD folder: one called amd-venv, the other nvidia-venv. Copied the webui.sh and webui-user.sh scripts and made separate flavors of those as well, pointing to the respective venvs. Now if I just had my nvidia drivers working, I could probably set my power supply on fire running them in parallel.
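For anyone wanting to replicate it, the gist was something like this (the venv names are just what I picked; venv_dir is a real knob in webui-user.sh):

```bash
# two isolated venvs inside the stable-diffusion-webui checkout
python3 -m venv amd-venv
python3 -m venv nvidia-venv

# separate launcher flavors, each pinned to its own venv
cp webui-user.sh webui-user-amd.sh
cp webui-user.sh webui-user-nvidia.sh
# then in webui-user-amd.sh set:  venv_dir="amd-venv"
# and in webui-user-nvidia.sh:    venv_dir="nvidia-venv"
```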
Is that perhaps the setting of Letterkenny?
Meanwhile, in the engineering dungeon
Thank you!! I may rely on this heavily. Too many different drivers to try willy-nilly. I am attempting it with this guide/driver for now. Will report back with my luck or misfortunes: https://hub.tcno.co/ai/stable-diffusion/automatic1111-fast/
I had that concern as well, it being a new card. It performs fine in gaming as well as in every glmark benchmark so far. I have it chalked up to AMD support being in experimental status on Linux/SD. Any other stress tests you recommend while I'm in the return window!? lol
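For what it's worth, the dirtiest compute soak I've tried is just allocating big tensors and hammering them with matmuls (assumes a working PyTorch install for the card, ROCm or CUDA; the sizes are arbitrary):

```bash
# fills ~4 GiB of VRAM, then keeps the GPU busy with matrix multiplies
python3 - <<'EOF'
import torch
dev = "cuda"  # ROCm builds of PyTorch also expose the device as "cuda"
xs = [torch.randn(4096, 4096, device=dev) for _ in range(64)]
print(f"{torch.cuda.memory_allocated()/2**30:.1f} GiB allocated")
for a in xs:
    (a @ a).sum().item()  # force the kernels to actually run
print("survived")
EOF
```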
Do you need to be hazardously close to a tower for good stability? Fascinating for the future of wireless power!
Intriguing. Is that an 8GB card? Might have to try this after all
Ah, thanks. In my experience it's my AMD card causing the crashes with SD. NVIDIA runs CUDA natively, hence the stability.
What the heck is Whisper? I've been fooling around with hass for ages and haven't heard of it, even after at least two minutes of searching. Is it OpenAI-affiliated hardware?
I started reading into the ONNX business here: https://rocm.blogs.amd.com/artificial-intelligence/stable-diffusion-onnx-runtime/README.html It didn't take long to see that was beyond me. Has anyone distilled an easy-to-use model converter/conversion process? One I saw required an HF token for the process, yeesh
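For anyone else poking at it, the closest thing I found to a one-liner is Hugging Face Optimum's exporter (the model ID here is just the usual SD 1.5 example; gated repos are where the HF token comes in):

```bash
pip install "optimum[exporters]" diffusers
# exports the whole pipeline (text encoder, UNet, VAE) to ONNX
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd15_onnx/
```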
All right I'll bite. I crave more information.