Would be cool to be AI Horde compatible and just ditch the GPU requirement entirely.
I don't think everyone has a GPU that can run Stable Diffusion easily, especially on laptops.
You don't have to run the AI stuff on the same computer that's running Krita. At home I have my gaming PC set up for that for the whole family. And if I recall correctly, the plugin also promotes a specific cloud service, but you can enter the URL of any compatible server.
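For anyone wanting to try that, here's a minimal sketch of checking that a remote server is reachable before pointing the plugin at it. It assumes a ComfyUI-style backend (which exposes a `/system_stats` endpoint) started with `--listen` so it accepts LAN connections; the address is hypothetical:

```python
import json
from urllib.request import urlopen

# Hypothetical LAN address of the machine with the GPU.
SERVER = "http://192.168.1.50:8188"

# A 200 response with JSON here means the plugin should be able
# to connect to the same URL.
with urlopen(f"{SERVER}/system_stats", timeout=5) as resp:
    stats = json.loads(resp.read())

for dev in stats.get("devices", []):
    print(dev.get("name"), "-", dev.get("vram_total"), "bytes VRAM")
```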
They were planning it a long time ago, but the AI Horde devs don't have time for it right now.
You can see the discussion about it, and the old Krita horde integration, in this Discord.
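For context, this is roughly what "Horde compatible" looks like from the client side, sketched against the public AI Horde HTTP API (endpoint names as documented for stablehorde.net; treat the details as an assumption, not the plugin's actual code):

```python
import json
import time
from urllib.request import Request, urlopen

API = "https://stablehorde.net/api/v2"
HEADERS = {"apikey": "0000000000",  # anonymous key, lowest queue priority
           "Content-Type": "application/json"}

# Queue an image generation on donated hardware.
payload = json.dumps({"prompt": "a lighthouse at dusk, oil painting"}).encode()
with urlopen(Request(f"{API}/generate/async", payload, HEADERS)) as resp:
    job_id = json.loads(resp.read())["id"]

# Poll until a volunteer worker picks the job up and finishes it.
while True:
    with urlopen(f"{API}/generate/check/{job_id}") as resp:
        if json.loads(resp.read()).get("done"):
            break
    time.sleep(5)

# Fetch the results (each generation carries the finished image).
with urlopen(f"{API}/generate/status/{job_id}") as resp:
    for gen in json.loads(resp.read())["generations"]:
        print(gen["img"])
```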
The thing is that AI Horde relies on donated hardware, and there are only so many people willing to donate relative to the number who want to use it.
Vast.ai lets people rent hardware, but not on a per-operation basis. That's cheaper than buying hardware and leaving it idle a lot of the time, but there's still going to be idle time.
I think what would be better is some kind of service that sells compute time on a per-invocation basis. Most of the "AI generation services" do this, but they also require that you use their software.
So, it's expensive to upload a model to a card, and you don't want to have to re-upload it for each run. But hash the model and remember what was last run on the card. If someone queues a run with the same model again, just reuse the already-uploaded copy (see the sketch below).
Don't run the whole Stable Diffusion or whatever package on the cloud machine.
That makes the service agnostic to the software involved. Like, you can run whatever version of whatever LLM software you want and use whatever models. It keeps the admin-side work relatively light, and it makes sure that costs get covered without people having to pay for hardware that sits idle a lot of the time.
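A minimal sketch of that caching idea, with a stand-in model class since the actual loader would depend on whatever software the user brings:

```python
import hashlib
from pathlib import Path

class LoadedModel:
    """Stand-in for a model resident in GPU memory (hypothetical)."""
    def __init__(self, path: Path):
        self.path = path  # real code would copy the weights to VRAM here
    def run(self, job_input):
        return f"ran {job_input!r} on {self.path.name}"  # placeholder

# Remember the last model uploaded to the card, keyed by its hash.
_cache = {"digest": None, "model": None}

def file_digest(path: Path) -> str:
    # Hash the model file so identical weights are recognized across jobs.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_job(model_path: Path, job_input):
    digest = file_digest(model_path)
    if digest != _cache["digest"]:
        # Cache miss: pay the expensive upload once, then keep it resident.
        _cache["model"] = LoadedModel(model_path)
        _cache["digest"] = digest
    return _cache["model"].run(job_input)
```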
Might be that some service like that already exists, but if so, I'm not aware of it.