polystruct@lemmy.world · 1 point · 1 year ago

Nice answer! Is there a number of concepts, in your knowledge/experience, beyond which a LoRA no longer works well? For instance, if I want to make a model that understands all car brands and types (assuming the base model doesn't, of course), would a LoRA still be sensible here?

Most LoRAs I find have a narrow, specialized focus, and I don't know whether I should start with multiple LoRAs, each covering an individual concept (a LoRA for a "1931 Minerva 8 AL Rollston Convertible Sedan", a LoRA for a "Maybach SW 42, 1939", etc.), or try to train one LoRA to ~~rule~~ know them all...
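To make the multi-LoRA route concrete, this is roughly what I picture on the inference side. It's only a sketch with diffusers: the model id, file names and adapter names are placeholders I made up, and stacking adapters like this needs a recent diffusers with the PEFT integration.

```python
# Rough sketch: stacking several narrow car LoRAs at inference time with diffusers.
# The model id, LoRA file names and adapter names below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One LoRA per concept...
pipe.load_lora_weights("loras", weight_name="minerva_8al_rollston.safetensors", adapter_name="minerva")
pipe.load_lora_weights("loras", weight_name="maybach_sw42_1939.safetensors", adapter_name="maybach")

# ...activated together, each with its own strength.
pipe.set_adapters(["minerva", "maybach"], adapter_weights=[0.8, 0.8])

image = pipe("a 1931 Minerva 8 AL Rollston convertible sedan next to a 1939 Maybach SW 42").images[0]
image.save("cars.png")
```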


I understand that, when we generate images, the prompt is first split into tokens, and those tokens are then used by the model to nudge the image generation in a certain direction. My impression is that some tokens have a higher impact on the model than others (although I don't know if I can call that a weight). I mean internally, not as part of the prompt, where we can also force a higher weight on a token.
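For what it's worth, this is what I mean by the token split; a quick sketch, assuming a Stable Diffusion 1.x style model, which uses the CLIP tokenizer (the model id below is that assumption):

```python
# Quick sketch: see how a prompt is split into CLIP tokens
# (assuming an SD 1.x style model with openai/clip-vit-large-patch14 as its tokenizer).
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a 1931 Minerva 8 AL Rollston convertible sedan, studio lighting"
ids = tokenizer(prompt).input_ids
tokens = tokenizer.convert_ids_to_tokens(ids)

# Prints one token per line, including the start/end markers the tokenizer adds itself.
for token_id, token in zip(ids, tokens):
    print(token_id, token)
```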

Is it possible to know how much a certain token was 'used' in a generation? I could deduce it empirically by taking a generation, sticking to the same prompt, seed, sampling method, etc., and removing words one by one to see what the impact is, but perhaps there is a way to just ask the model? Or to adjust the python code a bit and retrieve it there?
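The empirical route I have in mind would look roughly like this; just a sketch, with the model id, prompt and seed as arbitrary placeholders:

```python
# Sketch of the "remove words one by one" experiment: same seed and settings each run,
# one word dropped at a time, so the resulting images can be compared side by side.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a red 1939 Maybach SW 42 on a cobblestone street, golden hour, film grain"
words = prompt.split()
SEED = 1234

def generate(p):
    # A fresh generator with the same seed keeps the starting noise identical between runs.
    g = torch.Generator("cuda").manual_seed(SEED)
    return pipe(p, generator=g, num_inference_steps=30).images[0]

generate(prompt).save("baseline.png")
for i, w in enumerate(words):
    ablated = " ".join(words[:i] + words[i + 1:])
    generate(ablated).save(f"without_{i:02d}_{w}.png")
```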

I'd like to know which parts of my prompt hardly impact the image (or don't at all).
