this post was submitted on 19 Mar 2025
12 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! This is a community to discuss local large language models such as LLaMA, DeepSeek, Mistral, and Qwen.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

top 5 comments
[–] Smokeydope@lemmy.world 2 points 1 week ago (1 children)

Looks promising, hope this ends up in an open-source project that improves RAG-type tasks.

[–] thickertoofan@lemm.ee 1 points 1 week ago

It will; they have released a repo with the code.

[–] MonsterBug@sh.itjust.works 1 points 1 week ago
[–] autonomoususer@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (1 children)

How might this impact VRAM requirements? ~~I would also like to see a libre software implementation.~~

[–] thickertoofan@lemm.ee 2 points 1 week ago

There is a repo they released.