[-] abcdqfr@lemmy.world 19 points 3 months ago

Wake me up when it works offline: "The Llama 3.1 models are available for download through Meta's own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time."

[-] admin@lemmy.my-box.dev 33 points 3 months ago* (last edited 3 months ago)

WAKE UP!

It works offline. When you use it with ollama, you don't have to register or agree to anything.

Once you have downloaded it, it will keep on working; Meta can't shut it down.

[-] MonkderVierte@lemmy.ml 1 points 3 months ago* (last edited 3 months ago)

Well, yes and no. See the other comment: 64 GB of VRAM at the lowest setting.

[-] admin@lemmy.my-box.dev 9 points 3 months ago

Oh, sure. The 405B model is absolutely infeasible to host yourself. But the smaller models (70B and 8B) can work.

I was mostly replying to the part where they claimed Meta can take it away from you at any point, which is simply not true.
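To put rough numbers on why 405B is infeasible to self-host while 8B fits a consumer card, here's a back-of-the-envelope sketch (my own illustrative figures, not official requirements; this counts weights only, and real usage adds KV-cache and activation overhead):

```python
# Rough VRAM estimate for hosting an LLM locally: model weights only.
# In practice, add headroom (often ~10-20%) for KV cache and activations.

def weights_gib(params_billions: float, bits_per_param: int) -> float:
    """Approximate size of the model weights in GiB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for size in (8, 70, 405):
    for bits in (16, 8, 4):
        print(f"{size:>3}B @ {bits:>2}-bit: ~{weights_gib(size, bits):.1f} GiB")
```

At 4-bit quantization that works out to roughly 3.7 GiB for 8B, ~33 GiB for 70B, and ~189 GiB for 405B, which matches the pattern in this thread: a quantized 8B fits on an 8 GB card, while 405B doesn't fit anything consumer-grade.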

[-] RandomLegend@lemmy.dbzer0.com 14 points 3 months ago* (last edited 3 months ago)

It's available through ollama already. I'm running the 8B model on my little server with its 3070 as of right now.

It's really impressive for an 8B model.

[-] abcdqfr@lemmy.world 1 points 3 months ago

Intriguing. Is that an 8 GB card? Might have to try this after all.

[-] RandomLegend@lemmy.dbzer0.com 1 points 3 months ago

Yup, 8GB card

It's my old one from the gaming PC after switching to AMD.

It now serves as my little AI hub and Whisper server for Home Assistant.

[-] abcdqfr@lemmy.world 1 points 3 months ago

What the heck is Whisper? I've been fooling around with hass for ages, haven't heard of it even after at least two minutes of searching. Is it OpenAI-affiliated hardware?

[-] RandomLegend@lemmy.dbzer0.com 4 points 3 months ago

Whisper is a speech-to-text (STT) application that stems from OpenAI afaik, but it's open source at this point.

I wrote a little guide on how to install it on a server with an Nvidia GPU and hardware acceleration, and integrate it into your Home Assistant afterwards. https://a.lemmy.dbzer0.com/lemmy.dbzer0.com/comment/5330316

It's super fast with a GPU available, and I use those little M5 ATOM Echo microphones for this.

[-] Kuvwert@lemm.ee 12 points 3 months ago* (last edited 3 months ago)

I'm running 3.1 8B via ollama as we speak, totally offline, and gave my info to nobody.

https://ollama.com/library/llama3.1

[-] sunzu@kbin.run 4 points 3 months ago

I was able to set up a small one via Open WebUI.

It did ask me to make an account, but I didn't see any pinging home when I did.

What am I missing here?

[-] Fiivemacs@lemmy.ca 1 points 3 months ago

Through meta...

That's where I stop caring

this post was submitted on 24 Jul 2024
198 points (92.7% liked)
