Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard

hexual@lemmy.world to Technology@lemmy.world – 195 points –
abacusai/Smaug-72B-v0.1 · Hugging Face
huggingface.co

Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to achieve an average score of more than 80.


Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need about 130 GB of VRAM across your GPUs.

So can I run it on my Radeon RX 5700? I overclocked it some and am running it as a 5700 XT, if that helps.

To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
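The VRAM figures in this thread can be sanity-checked with a back-of-the-envelope calculation: weights dominate memory use at roughly parameters times bytes per parameter, plus some overhead for the KV cache and activations. A minimal sketch (the `vram_gb` helper and the 20% overhead factor are my assumptions, not from the thread):

```python
def vram_gb(params_b: float, bits_per_param: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for a model with params_b billion parameters.

    overhead=1.2 is an assumed ~20% margin for KV cache and activations.
    """
    weight_bytes = params_b * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# 72B parameters at fp16 vs. 4-bit quantization:
print(round(vram_gb(72, 16)))  # ~173 GB: well beyond consumer GPUs
print(round(vram_gb(72, 4)))   # ~43 GB: roughly fits 2 x 24 GB cards
```

This is why the thread points at 2 x 3090 (48 GB total): a 72B model only fits consumer hardware once quantized down to around 4 bits per weight.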