But what makes this AI model unique is that it’s lightweight enough to work efficiently on a CPU, with TechCrunch saying an Apple M2 chip can run it.
An Apple M2 can run bigger, higher-precision models than this, FWIW. The more interesting question is whether older CPUs can run it with acceptable performance.
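Some rough arithmetic backs this up. Here is a back-of-envelope sketch (figures are approximations, not measurements) of weight memory for a 2B-parameter model at different precisions; the ~1.58 bits per weight comes from the ternary encoding, log2(3) ≈ 1.58:

    # Approximate weight memory for a 2B-parameter model at various precisions.
    # Back-of-envelope estimates only; ignores activations and KV cache.
    PARAMS = 2e9
    for name, bits in [("fp16", 16), ("int8", 8), ("ternary ~1.58-bit", 1.58)]:
        gib = PARAMS * bits / 8 / 2**30
        print(f"{name:>18}: ~{gib:.2f} GiB")
    # fp16 ~3.73 GiB, int8 ~1.86 GiB, ternary ~0.37 GiB of weights,
    # all comfortably within an M2's unified memory.

So the weights alone are no challenge for an M2; throughput on older, narrower CPUs is the real open question.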
AI models are often criticized for taking too much energy to train and operate. But lightweight LLMs, such as BitNet b1.58 2B4T, could help us run AI models locally on less powerful hardware. This could reduce our dependence on massive data centers and even let people without access to the latest processors with built-in NPUs and the most powerful GPUs use artificial intelligence.
This is definitely relevant to my interests, especially with NPU support for such models coming. Dirt-cheap ARM-based PCs based on e.g. the RK3588 are shipping with small NPUs.
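For anyone wanting to try CPU-only inference, here is a minimal sketch, assuming the checkpoint is published on Hugging Face under an id like "microsoft/bitnet-b1.58-2B-4T" (the id is my assumption; check the actual listing) and loads through the stock transformers API. Note that vanilla transformers won't exploit the ternary weights; Microsoft's dedicated bitnet.cpp runtime is what delivers the real 1-bit efficiency gains.

    # Minimal sketch: CPU-only text generation via Hugging Face transformers.
    # Model id is an assumption; stock transformers runs the model at ordinary
    # precision rather than exploiting the ternary weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/bitnet-b1.58-2B-4T"  # assumed id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    model.to("cpu")  # the whole point: no GPU or NPU required

    prompt = "Explain in one sentence why 1.58-bit weights cut memory use."
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))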