this post was submitted on 21 Mar 2025
22 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! This is a community to discuss local large language models such as LLaMA, DeepSeek, Mistral, and Qwen.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

[–] Karkitoo@lemmy.ml 6 points 1 day ago (1 children)

Looks impressive and it's truly open-source.

However, I see it requires CUDA. Could it still run:

  1. Without CUDA at all (see the sketch below)?
  2. On AMD hardware?
  3. On mobile (since the model is only 1B)?
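
For 1 and 2, assuming the project is PyTorch-based (just a guess on my part), this device-selection pattern is the kind of thing I'm hoping for. ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API, so one code path can cover both NVIDIA and AMD:

```python
import torch
import torch.nn as nn

# Pick whatever backend is available. ROCm builds of PyTorch report
# AMD GPUs via torch.cuda, so "cuda" covers NVIDIA and AMD alike.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple Silicon fallback
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Stand-in model; the project's actual 1B model would go here.
model = nn.Linear(16, 16).to(device)
x = torch.randn(1, 16, device=device)
with torch.inference_mode():
    y = model(x)
print(device, tuple(y.shape))
```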
[โ€“] thickertoofan@lemm.ee 2 points 1 day ago

I think the bigger bottleneck is SLAM. Running it is compute-intensive, and the model won't run directly on video, so SLAM is the tough part, I guess. Reading the repo doesn't give any clues about whether it can run with CPU inference.
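
One quick smoke test, hypothetically assuming the repo ships ordinary PyTorch checkpoints, would be to remap the weights onto the CPU and time a forward pass. The file name and module below are placeholders, not from the repo:

```python
import time
import torch
import torch.nn as nn

# Stand-in module; swap in the real model and checkpoint instead.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
# state = torch.load("checkpoint.pt", map_location="cpu")  # remaps CUDA tensors to CPU
# model.load_state_dict(state)
model.eval()

x = torch.randn(1, 512)
start = time.perf_counter()
with torch.inference_mode():
    model(x)
print(f"CPU forward pass took {time.perf_counter() - start:.4f}s")
```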