[-] Even_Adder@lemmy.dbzer0.com 0 points 1 month ago

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[-] clb92@feddit.dk 2 points 1 month ago

Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊

[-] erenkoylu@lemmy.ml 0 points 1 month ago* (last edited 1 month ago)

Quite the opposite: LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
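
To illustrate the mechanism behind this claim, here is a minimal PyTorch sketch of a LoRA layer. The `LoRALinear` wrapper, the rank `r`, and the `alpha` scaling shown here are illustrative choices, not something from this thread: the point is only that the pretrained weights stay frozen and just the small low-rank matrices train, which is why the base model's existing knowledge is largely preserved.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen Linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)). Only A and B receive gradients,
    so the original pretrained weights W are never modified."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.01)
        nn.init.zeros_(self.B.weight)        # update starts at exactly zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

# Usage: wrap one projection of a frozen network (768 is a typical
# attention width in SD-style models, used here only as an example).
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 trainable vs 590,592 frozen
```

By contrast, full fine-tuning updates every weight in `base` directly, which is where the risk of overwriting (forgetting) the original behaviour comes from.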
