A 1-bit LLM performs comparably to full-precision Transformer LLMs of the same model size and training token count, while being far more efficient in latency, memory, throughput, and energy consumption.

☆ Yσɠƚԋσʂ ☆@lemmy.ml to Technology@lemmy.ml – 15 points

Says 1-bit then goes on to describe inputs as -1, 0, or 1. That's 2-bit. Am I missing something here?

It’s actually 1.58 bits, weirdly: with three possible values {-1, 0, 1}, each weight carries log2(3) ≈ 1.58 bits of information. The addition of 0 was the significant change/improvement in this experiment. The paper isn’t too dense and has some decent tables that explain things fairly accessibly.
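
For anyone curious where the 1.58 comes from and how weights end up as -1, 0, or 1: here's a minimal sketch in Python/NumPy of the "absmean" quantization the paper describes. The helper name and example values are mine, not from the paper.

```python
import numpy as np

# Each weight takes one of three values {-1, 0, 1},
# so its information content is log2(3) ≈ 1.58 bits.
print(f"{np.log2(3):.2f} bits per ternary weight")  # -> 1.58

def absmean_quantize(W: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Map a weight matrix to {-1, 0, 1}: scale by the mean absolute
    value, then round and clip (a sketch of the paper's absmean scheme;
    the function name is hypothetical)."""
    gamma = np.abs(W).mean()      # average magnitude of the weights
    W_scaled = W / (gamma + eps)  # normalize so typical values sit near ±1
    return np.clip(np.round(W_scaled), -1, 1)

# Example: small weights collapse to 0, the rest snap to ±1.
W = np.array([[0.9, -0.04, -1.3],
              [0.02, 0.7, -0.6]])
print(absmean_quantize(W))  # [[ 1.  0. -1.]
                            #  [ 0.  1. -1.]]
```

That zero state is what lets the model effectively prune connections for free, which is part of why it matters beyond just the bit count.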