PlexSheep@feddit.de 9 points 1 year ago

So like, mp3, gzip and zstd? Why would you use an LLM for compression??

rubikcuber@programming.dev 32 points 1 year ago

The research specifically looked at lossless algorithms, so gzip is the relevant comparison (mp3 is lossy).

"For example, the 70-billion parameter Chinchilla model impressively compressed data to 8.3% of its original size, significantly outperforming gzip and LZMA2, which managed 32.3% and 23% respectively."

However, they do say that it's not especially practical at the moment, given that gzip is a tiny executable compared to the many gigabytes of the LLM's dataset.
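
For anyone wondering what "using an LLM for compression" even means mechanically: the paper's setup pairs a model's next-token probabilities with an arithmetic coder, so the compressed size is essentially the model's cross-entropy on the data. Below is a rough, self-contained Python sketch of that idea; the adaptive byte-frequency model, sample text, and printed numbers are illustrative stand-ins, not anything from the paper.

```python
# A minimal sketch (not the paper's code) of why a predictive model can act as
# a lossless compressor: a next-symbol predictor driven through an arithmetic
# coder needs roughly sum(-log2 p(symbol | context)) bits for the whole input.
# A tiny adaptive byte-frequency model stands in for the LLM here; a large
# model simply predicts much better, which is where the 8.3% figure comes from.
import gzip
import math
from collections import Counter

def predictive_coder_bits(data: bytes) -> float:
    """Bits an ideal arithmetic coder would emit using an adaptive order-0 model."""
    counts = Counter()                         # byte frequencies seen so far
    bits = 0.0
    for i, symbol in enumerate(data):
        p = (counts[symbol] + 1) / (i + 256)   # Laplace-smoothed prediction
        bits += -math.log2(p)                  # cost of encoding this byte
        counts[symbol] += 1                    # update the model afterwards
    return bits

if __name__ == "__main__":
    sample = ("Language modeling is compression: better prediction means fewer bits. " * 2000).encode()
    model_size = predictive_coder_bits(sample) / 8
    gzip_size = len(gzip.compress(sample))
    print(f"original: {len(sample)} bytes")
    print(f"toy predictive coder: {model_size:.0f} bytes "
          f"({100 * model_size / len(sample):.1f}% of original)")
    print(f"gzip: {gzip_size} bytes ({100 * gzip_size / len(sample):.1f}% of original)")
```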

NaibofTabr@infosec.pub 9 points 1 year ago

Do you need the dataset to do the compression? Is the trained model not effective on its own?

Tibert@compuverse.uk 13 points 1 year ago

Well, from the article a dataset is required, but not always the heavier one.

Though it doesn't solve the speed issue: the LLM will take a lot more time to do the compression.

While gzip can compress 1GB of text in less than a minute on a CPU, an LLM with 3.2 million parameters requires an hour to compress the same data.
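
If you want to sanity-check the gzip half of that claim on your own hardware, something like the sketch below works; the sample data, size, and compression level are arbitrary choices, and throughput depends heavily on the CPU and how repetitive the text is.

```python
# Rough benchmark of gzip throughput on synthetic text; scale the MB/s figure
# to estimate how long 1 GB would take on your machine.
import gzip
import time

chunk = ("lorem ipsum dolor sit amet, consectetur adipiscing elit " * 20).encode()
data = chunk * (100_000_000 // len(chunk))           # roughly 100 MB of sample text

start = time.perf_counter()
compressed = gzip.compress(data, compresslevel=6)    # level 6 = gzip CLI default
elapsed = time.perf_counter() - start

mb = len(data) / 1e6
print(f"compressed {mb:.0f} MB in {elapsed:.1f} s "
      f"(~{mb / elapsed:.0f} MB/s, ratio {len(compressed) / len(data):.1%})")
```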
