Make illegally trained LLMs public domain as punishment
(www.theregister.com)
In the case of Stable Diffusion, they used 5 billion images to train a model 1.83 gigabytes in size. So if you reduce a copyrighted image to 3 bits (not bytes - bits), then yeah, I think you're probably pretty safe.
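The arithmetic behind that figure, as a quick sanity check (taking the comment's numbers at face value — a 1.83 GB checkpoint and 5 billion training images):

```python
# Back-of-the-envelope: average information the model could retain
# per training image, using the two figures from the comment above.
model_size_bytes = 1.83e9   # ~1.83 GB checkpoint
num_images = 5e9            # ~5 billion training images

bits_per_image = model_size_bytes * 8 / num_images
print(f"{bits_per_image:.2f} bits per image")  # ≈ 2.93 bits
```

So "3 bits per image" is just model size in bits divided by image count.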
Your calculation assumes that the input images are statistically independent, which is certainly not the case (otherwise the model would be useless for generating new images).
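A toy way to see why independence matters: highly redundant data can be stored at a tiny per-item cost while every item stays exactly reconstructible. A minimal sketch, with zlib standing in for the model (the byte strings are made-up placeholders):

```python
import zlib

# 1000 identical "images": maximally correlated inputs.
item = b"stand-in bytes for one image ..." * 3
items = [item] * 1000

blob = zlib.compress(b"".join(items))
per_item = len(blob) / len(items)
print(f"{per_item:.2f} bytes stored per item")  # tiny, because of redundancy

# Yet every item is perfectly recoverable from the compressed blob:
assert zlib.decompress(blob) == b"".join(items)
```

So a small bits-per-item average rules nothing out on its own; it depends entirely on how correlated the inputs are.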
Of course it's silly. Of course the images are not statistically independent — that's the point. There are still people to this day who claim that Stable Diffusion and its ilk are producing "collages" of their training images. Please tell this to them.
The way these models work is by learning patterns from their training material: styles, shapes, meanings. None of those things is covered by copyright.