I wonder if this is actually comparable to the way our brains store long-term memory?
Interesting. I'm just thinking aloud to understand this.
In this case, the models look at a short sequence of bytes in their context and are able to predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So do you mean our brain uses the concept to predict a token, which then gets decoded into a memory?
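As an aside on the "good prediction allows efficient encoding" part: under an entropy coder, a symbol predicted with probability p costs roughly -log2(p) bits, so better predictions mean fewer bits. A toy illustration with made-up probabilities:

    import math

    # Hypothetical probabilities a model might assign to the byte that actually
    # comes next at five positions (made-up numbers, just for illustration).
    predicted_probs = [0.9, 0.95, 0.6, 0.99, 0.85]

    # An entropy coder needs about -log2(p) bits for a symbol predicted with
    # probability p, so better predictions translate directly into fewer bits.
    bits_per_byte = sum(-math.log2(p) for p in predicted_probs) / len(predicted_probs)
    print(bits_per_byte)  # ~0.24 bits per byte, vs. 8 bits per byte uncompressed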
Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.
But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.
This seems likely to me. The common saying is that "you hear what you want to hear", but I think more accurately it's "you remember what has meaning to you". Recently there was a study suggesting that even visual memory is tightly integrated with spoken language: https://www.science.org/doi/10.1126/sciadv.adh0064
However, there's a lot of variation in memory among humans. See: The Mind of a Mnemonist.
Yes, that makes much more sense.
No, because our brains also use hierarchical activation for association, which is why, if we're talking about bugs and I say "I got a B", you assume it's a stinging insect, not a passing grade.
If it were simple word2vec, we wouldn't have that additional means of noise suppression.
Does anyone know whether these results were obtained while taking the size of the dictionary into account?
Do you mean the number of tokens in the LLM's tokenizer, or the dictionary size of the compression algorithm?
The vocab size of the pretrained models is not mentioned anywhere in the paper. They did, however, conduct an experiment measuring compression performance with tokenizers of different vocabulary sizes.
If you meant the dictionary size of the compression algorithm, then there was no dictionary: they only used arithmetic coding to do the compression, which doesn't use dictionaries.
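For intuition, here's a stripped-down sketch of arithmetic coding's core idea: all it needs is a probability model for the next symbol (a fixed, made-up distribution below, standing in for the LLM's context-dependent predictions), not a dictionary. A real coder emits bits incrementally and renormalizes, which this toy skips:

    from math import log2

    # Made-up, fixed next-symbol distribution standing in for the LLM's
    # context-dependent predictions (a real setup would query the model each step).
    probs = {"a": 0.7, "b": 0.2, "c": 0.1}

    def encode_interval(message):
        # Narrow [low, high) to each symbol's slice of the current interval.
        low, high = 0.0, 1.0
        for sym in message:
            span = high - low
            cum = 0.0
            for s, p in probs.items():
                if s == sym:
                    low, high = low + cum * span, low + (cum + p) * span
                    break
                cum += p
        return low, high

    low, high = encode_interval("aaab")
    # Any number inside [low, high) identifies the message. The interval's width
    # is the product of the predicted probabilities, so transmitting such a
    # number takes about -log2(width) bits, with no dictionary anywhere.
    print(-log2(high - low))  # ~3.9 bits for "aaab"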
It looks like they did it both ways (“raw rate” vs “adjusted rate”):
In the case of the adjusted compression rate, the model's size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric shows the impact of the model's parameters on compression performance: a very large model might compress the data better than a smaller one, but once its own size is taken into account, the smaller model might come out ahead.
Yes. They also mention that using such large models for compression is not practical because their size dwarfs any amount of data you might want to compress. But the result gives a good picture of how well such large models generalize, and how accurately they can predict the next tokens even for image/audio data.
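A quick back-of-the-envelope example of that impracticality, using the adjusted-rate formula from above (all numbers hypothetical, and assuming the model's parameters are charged at 2 bytes each):

    # Hypothetical numbers only; assumes the model's parameters are stored at
    # 2 bytes each (e.g. float16) when charging the model against the output.
    def rates(raw_bytes, compressed_bytes, n_params, bytes_per_param=2):
        raw_rate = compressed_bytes / raw_bytes
        adjusted_rate = (compressed_bytes + n_params * bytes_per_param) / raw_bytes
        return raw_rate, adjusted_rate

    GB = 10**9
    # A big (7B-parameter) model with a great raw rate...
    print(rates(raw_bytes=1 * GB, compressed_bytes=150_000_000, n_params=7 * 10**9))
    # -> (0.15, 14.15): the "compressed" output plus the model is 14x the raw data.

    # A tiny (1M-parameter) model with a worse raw rate...
    print(rates(raw_bytes=1 * GB, compressed_bytes=300_000_000, n_params=10**6))
    # -> (0.3, 0.302): far better once model size is counted.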