1598

It's all made from our data, anyway, so it should be ours to use as we want

you are viewing a single comment's thread
view the rest of the comments
[-] 31337@sh.itjust.works 2 points 4 days ago

Last time I looked it up and calculated it, these large models are trained on something like only 7x the tokens as the number of parameters they have. If you thought of it like compression, a 1:7 ratio for lossless text compression is perfectly possible.

I think the models can still output a lot of stuff verbatim if you try to get them to, you just hit the guardrails they put in place. Seems to work fine for public domain stuff. E.g. "Give me the first 50 lines from Romeo and Juliette." (albeit with a TOS warning, lol). "Give me the first few paragraphs of Dune." seems to hit a guardrail, or maybe just forced through reinforcement learning.

A preprint paper was released recently that detailed how to get around RL by controlling the first few tokens of a model's output, showing the "unsafe" data is still in there.

[-] FaceDeer@fedia.io 4 points 4 days ago

I've been working with local LLMs for over a year now. No guardrails, and many of them fine-tuned against censorship. They can't output arbitrary training material verbatim.

Llama 3 was trained on 15 trillion tokens, both the 8B and 70B parameter versions.. So around 1:1000, not 1:7.

this post was submitted on 22 Dec 2024
1598 points (97.4% liked)

Technology

60105 readers
2004 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS