These Mini AI Models Match OpenAI With 1,000 Times Less Data.
(singularityhub.com)
I'm pretty sure that would be three orders of magnitude.
They're not talking about the same thing.
"Mini" refers to the size of the model itself.
"1,000 times less data" refers to the size of the training set used to train the model.
Minimizing both of those things is useful, but for different reasons: a smaller training set makes the model cheaper to train, and a smaller model makes it cheaper to run.
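To put rough numbers on that: here's a minimal back-of-envelope sketch, assuming the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token for inference (N = parameter count, D = training tokens). All of the model and dataset sizes below are hypothetical illustrations, not figures from the article.

```python
# Back-of-envelope compute costs using the standard approximations:
#   training FLOPs  ~= 6 * N * D   (N = parameters, D = training tokens)
#   inference FLOPs ~= 2 * N       per generated token
# All concrete numbers below are made-up illustrations.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate generation compute: ~2 FLOPs per parameter per token."""
    return 2 * n_params

big_n, big_d = 70e9, 1e12    # hypothetical 70B-param model, 1T training tokens
small_n, small_d = 7e9, 1e9  # hypothetical 7B-param model, 1,000x less data

print(f"train big:   {training_flops(big_n, big_d):.1e} FLOPs")
print(f"train small: {training_flops(small_n, small_d):.1e} FLOPs")
print(f"run big:     {inference_flops_per_token(big_n):.1e} FLOPs/token")
print(f"run small:   {inference_flops_per_token(small_n):.1e} FLOPs/token")

# Cutting D only lowers the one-time training cost; cutting N lowers
# both the training cost and the per-token cost paid on every request.
```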
After a quick skim, it seems like the article has a lot of errors. Molmo is trained on top of Qwen, and the smallest ones are trained on a model from the same company that made Molmo.