LLMs are surprisingly great at compressing images and audio, DeepMind researchers find
pavnilschanda@lemmy.world to Technology@lemmy.world – 96 points – 10 months ago – venturebeat.com
Do you need the dataset to do the compression? Is the trained model not effective on its own?

Well, from the article, a dataset is required, though not always the heavier one. That doesn't solve the speed issue, though: the LLM takes far longer to do the compression. gzip can compress 1 GB of text in less than a minute on a CPU, while an LLM with 3.2 million parameters requires an hour to compress 1 …
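For reference, the gzip side of that comparison is trivial to check yourself. A minimal sketch (the 10 MB sample size, compression level, and timings here are illustrative assumptions, not figures from the article):

```python
import gzip
import time

# Build a ~10 MB repetitive text sample (a stand-in for "1 GB of text";
# scale the timing up roughly linearly for larger inputs).
data = (b"the quick brown fox jumps over the lazy dog\n" * 250_000)[:10_000_000]

start = time.perf_counter()
compressed = gzip.compress(data, compresslevel=6)  # default-ish level
elapsed = time.perf_counter() - start

ratio = len(compressed) / len(data)
print(f"compressed {len(data)} bytes to {len(compressed)} bytes "
      f"(ratio {ratio:.3f}) in {elapsed:.2f}s")
```

On any recent CPU this finishes in well under a second, which is what makes the hour-per-megabyte figure for a small LLM compressor so striking.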