Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard

hexual@lemmy.world to Technology@lemmy.world – 195 points –
abacusai/Smaug-72B-v0.1 · Hugging Face
huggingface.co

Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to have an average score of more than 80.

Every billion parameters needs about 2 GB of VRAM if using the bfloat16 representation: 16 bits per parameter, 8 bits per byte -> 2 bytes per parameter.

1 billion parameters ~ 2 billion bytes ~ 2 GB.

From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
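
That back-of-the-envelope math as a quick Python sketch (weights only; the KV cache and activations need VRAM on top of this):

```python
# Rough weight-memory estimate. Billions of params times bytes per
# param gives GB directly, since both use the same 1e9 factor.
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

print(weight_vram_gb(72, 16))  # bfloat16: 144.0 GB
```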

Ok but will this run on my TI-83? It's a + model.

Only if it’s silver.

Dang. So close.

My 83 was ganked by some kid I knew so my folks bought me a silver. He denied it. I learned that day to write my name in secret spots.

That kid you knew was a dick. At least he taught you a valuable lesson, I guess.

He absolutely was a dick. I stopped being mates with him after that. My school was like “yeah the cameras didn’t work that day actually”

Leads me to believe that the cameras never actually worked.

I believe that. Or they just didn’t want to be responsible for dealing with theft. Both ways make perfect sense to me.

Very true. Saying the cameras don't work is enough to keep the cops out of it. After all, hearsay between kids is never taken seriously. No footage, means no proof, means no cops. Then they can keep the facade of the school being crime-free.

Absolutely my dude! My school pretended the cams didn’t work when my calculator was stolen… they worked fantastically the next week.

That is some BS! Honestly, I probably would have dealt with that kid schoolyard-style (though, I'm not proud of that as an adult). My school never pretended the cameras were out of service. They just got rid of them after a teacher was accused of inappropriate behavior with a senior. Claimed they never worked. He "left for another state" later that week. I'm sure it was unrelated...

Hahaha he was a lil dude and he came from a rough family. Plus I was like 115-120lbs at the time, I ain’t no fightgurl. I knew my folks would get me a better one (it wasn’t even a plus model…) so I was easy on him.

Still stopped talking to him.

Also, about your last couple of sentences… oh dear me. That’s… not good.

It's been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. Just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it's fine. We'll see if that works out in practice, I guess.
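
For scale, here's the same arithmetic at those bit widths (weights only; quantized formats also store per-group scales, which add a few percent on top):

```python
# Approximate weight size of a 72B-parameter model at different bit widths.
N_PARAMS = 72e9

for bits in (16, 8, 5, 4, 2.5):
    gb = N_PARAMS * bits / 8 / 1e9
    print(f"{bits:>4} bits/param -> ~{gb:.1f} GB")
# 16 -> 144.0, 8 -> 72.0, 5 -> 45.0, 4 -> 36.0, 2.5 -> 22.5
```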

I'm more experienced with graphics than ML, but wouldn't that cause a significant increase in computation time, since those aren't native types for arithmetic? Maybe that's not a big problem?

If you have a link for the paper I'd like to check it out.

My understanding is that the bottleneck for the GPU is moving data into and out of it, not the processing of the data once it's in there. So if you can get the whole model crammed into VRAM it's still faster even if you have to do some extra work unpacking and repacking it during processing time.
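
Here's a toy sketch of that unpack step, assuming a naive symmetric 4-bit scheme with a single per-tensor scale (real formats like the ones in llama.cpp use per-group scales, but the bit-twiddling is similar):

```python
import numpy as np

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack pairs of 4-bit codes (0..15) into single bytes."""
    return (q[0::2] | (q[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Undo pack_int4: recover the interleaved 4-bit codes."""
    lo = packed & 0x0F
    hi = packed >> 4
    return np.stack([lo, hi], axis=1).reshape(-1)

# Quantize: map float weights to 4-bit codes with one scale.
w = np.random.randn(1024).astype(np.float32)
scale = np.abs(w).max() / 7
q = np.clip(np.round(w / scale) + 8, 0, 15).astype(np.uint8)

packed = pack_int4(q)
# Dequantize on the fly at compute time.
w_hat = (unpack_int4(packed).astype(np.float32) - 8) * scale
print(packed.nbytes, w.nbytes)  # 512 vs 4096 bytes: 8x smaller than float32
```

The unpacking is just a couple of bitwise ops per weight, which is cheap compared with the memory traffic it saves.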

The paper was posted on /r/localLLaMA.

You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.

Though with quantisation you can get it down to like 30 GB of VRAM or less.

Llama 2 70B with 8-bit quantization takes around 80 GB of VRAM if I remember correctly (70 billion parameters at one byte each is ~70 GB for the weights alone, plus the KV cache and activations on top). I tested it a while ago.
