The dream (lemmy.world)
[-] Xanaus@lemmy.ml 4 points 1 year ago

Could you please share your process for us mortals ?

[-] CeeBee@lemmy.world 6 points 1 year ago

Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

Ollama with Ollama-webui for an LLM. I like the solar:10.7b model. It's lightweight, fast, and gives really good results.
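Ollama also exposes a local REST API, so you can script against it without the web UI. A minimal sketch, assuming Ollama's default endpoint at localhost:11434 and its /api/generate route (the model tag is whatever you've pulled, e.g. solar:10.7b):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for Ollama's REST API."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("solar:10.7b", "...")` requires the Ollama server to be running and the model already pulled; everything stays on local hardware.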

I have some beefy hardware that I run it on, but it's not necessary to have.

[-] Ookami38@sh.itjust.works 2 points 1 year ago

Depends on what AI you're looking for. I don't know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven't really looked. For art generation, though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, until I upgraded), it's a simple enough process to get started; there's tons of info online about it, and it all runs on local hardware.
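The install the comment refers to boils down to a few commands. A sketch for Linux/macOS, assuming git and Python are already installed (the repo URL is the official AUTOMATIC1111 project; the low-VRAM flags are the web UI's own options):

```shell
# Clone the Automatic1111 web UI and launch it.
# The first run downloads dependencies and a default Stable Diffusion checkpoint.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
./webui.sh   # on older cards like a GTX 1060, try adding --medvram or --lowvram
```

Once it starts, the UI is served locally in your browser (by default at http://127.0.0.1:7860).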

[-] CeeBee@lemmy.world 2 points 1 year ago

> I don't know of an LLM that works decently on personal hardware

Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. solar-10.7b should run well on a card with 8 GB of VRAM.
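A back-of-envelope check of that 8 GB claim, assuming the model is served 4-bit quantized (~0.5 bytes per parameter, Ollama's usual default) plus a rough flat allowance for KV cache and activations (the overhead figure is an assumption, not a measurement):

```python
def approx_vram_gb(n_params_billion: float, bytes_per_param: float = 0.5,
                   overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: quantized weights plus a flat overhead
    for KV cache and activations (both figures are assumptions)."""
    weights_gb = n_params_billion * 1e9 * bytes_per_param / 1024**3
    return weights_gb + overhead_gb

# solar-10.7b at 4-bit: roughly 6 GB, which fits in an 8 GB card
print(round(approx_vram_gb(10.7), 1))  # → 6.0
```

At fp16 (2 bytes per parameter) the same model would need over 20 GB, which is why quantization is what makes these models viable on consumer cards.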

[-] ParetoOptimalDev@lemmy.today 1 points 11 months ago

If you have really low specs, use the recently open-sourced Microsoft Phi model.
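The same footprint arithmetic shows why a small model suits low-spec machines. Assuming the commenter means Phi-2 (~2.7B parameters, released by Microsoft in December 2023) served 4-bit quantized:

```python
# Back-of-envelope weight footprint for a small model like Phi-2
# (~2.7B parameters, 4-bit quantized at ~0.5 bytes per parameter).
params = 2.7e9
bytes_per_param = 0.5
weights_gb = params * bytes_per_param / 1024**3
print(f"{weights_gb:.2f} GB")  # → 1.26 GB
```

Well under 2 GB for the weights, so it can run on machines with no dedicated GPU at all.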

this post was submitted on 25 Dec 2023
1924 points (97.9% liked)

People Twitter

5390 readers
1081 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a tweet or similar.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago