[-] mellowheat@suppo.fi 27 points 10 months ago* (last edited 10 months ago)

An exclusively locally running, open-source LLM might be a good thing, though. At least in my amazing dreams, that's what they're planning to do.

[-] venoft@lemmy.world 12 points 10 months ago* (last edited 10 months ago)

What if you don't have a decent graphics card? Wait 5 minutes for your URL completion to finish?

[-] zwaetschgeraeuber@lemmy.world 2 points 10 months ago

You can run a 7B model on CPU really fast, even on a phone.
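For concreteness, a minimal sketch of what CPU-only inference looks like with llama-cpp-python, assuming a 4-bit-quantized 7B model in GGUF format is already on disk (the file name below is just a placeholder, not anything a browser would ship):

```python
# CPU-only inference with a quantized 7B model via llama-cpp-python.
# The model path is a placeholder for any GGUF file you have locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads to use
)

out = llm("Summarize this page title: 'Mozilla wants to add AI to Firefox'", max_tokens=64)
print(out["choices"][0]["text"])
```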

[-] gentooer@programming.dev -3 points 10 months ago

Using an LLM is quite fast, especially if it's optimised to run on normal hardware.

[-] cley_faye@lemmy.world 5 points 10 months ago

Decent models are huge; an average one requires 8 GB to be kept in memory (better models require something like 40 to 70 GB), and most currently available engines are extremely slow on a CPU and require dedicated hardware (even a relatively powerful GPU needs a few seconds of "thinking" time). It is unlikely that these requirements will easily be squeezed into current computers; more likely, dedicated hardware will be required.
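Back-of-the-envelope, those memory figures line up with simple weight-size arithmetic (a sketch only; real engines also need room for the KV cache and activation buffers):

```python
# Rough RAM needed just to hold a model's weights at a given precision.
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weight_memory_gb(7, 16))   # ~14 GB  -> 7B model in fp16
print(weight_memory_gb(7, 4))    # ~3.5 GB -> same model, 4-bit quantized
print(weight_memory_gb(70, 4))   # ~35 GB  -> 70B model, even quantized
```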

[-] barsoap@lemm.ee 3 points 10 months ago

I don't think any inference engines have actually been optimised to run on CPUs. You're stuck with 32-bit floats, but OTOH that just means you can do gigantic Winograd transformations with the excess precision, needing far fewer fmuladds in total, and CPUs are better at dealing with the memory access patterns that come with transforming the convolution. Most people have at least around 1 TFLOP of compute in their CPU (e.g. a Ryzen 3600 has that much) that never sees the light of day. That's about a fifth of what an RX 570 has: a difference, but not an order of magnitude, and you can run SDXL on that class of card (maybe not the 570 itself, dunno about software support, but a 5500 works, despite AMD's best efforts to cripple ROCm).
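The ~1 TFLOP figure roughly checks out for a Ryzen 3600 (Zen 2: 6 cores, two 256-bit FMA pipes per core, ~4.2 GHz boost), as a quick sanity check:

```python
# Theoretical fp32 peak for a Ryzen 3600-class CPU.
cores = 6
clock_hz = 4.2e9
fma_units = 2        # 256-bit FMA pipes per core
fp32_lanes = 8       # 256 bits / 32-bit floats
flops_per_fma = 2    # a fused multiply-add counts as two FLOPs

peak = cores * clock_hz * fma_units * fp32_lanes * flops_per_fma
print(f"{peak / 1e12:.2f} TFLOPS fp32 peak")   # ~0.81 TFLOPS
```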

Also, from what I gather, they're more or less doing a summary bot for your browsing history; that's not a ChatGPT- or Llama-style giant model you can talk with.

Also, to all the people complaining: there's already AI in Firefox; the translation models are about 17 MB per language pair, gzipped.

[-] model_tar_gz@lemmy.world 1 points 10 months ago* (last edited 10 months ago)

ONNX Runtime is actually decently well optimized to run on CPUs, even with large models. However, the simple truth is that there's really no escaping that billion-plus-parameter models need to be quantized and even pruned heavily to fit in memory and not saturate the CPU cache, so that inferences/generations don't take forever. That's a reduction in accuracy, so the quality of the generations isn't great.
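As a hedged sketch of that path: this is roughly what dynamic int8 quantization plus CPU inference looks like with onnxruntime (file names and the input shape are placeholders for whatever model you've exported):

```python
# Dynamic int8 quantization of an exported ONNX model, then CPU inference.
import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to int8 (activations stay float and are quantized on the fly).
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)

session = ort.InferenceSession("model_int8.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 128), dtype=np.int64)   # e.g. a batch of token ids
outputs = session.run(None, {input_name: dummy})
```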

There is a lot of really interesting research and development being done right now on smart quantization and pruning. Model-serving technologies are improving rapidly too: paged attention is a really cool technique (for transformer-based models) for effectively leveraging tensor-core hardware. I don't think that's supported on CPU yet, but it's probably not that far off.
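For reference, paged attention is what vLLM implements for GPU serving; a minimal usage sketch, assuming vLLM is installed and a CUDA GPU is available (the model id is just a small demo choice, not something Firefox would use):

```python
# Paged attention in practice: vLLM manages the KV cache in pages.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")            # small demo model
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Summarize my last ten visited pages:"], params)
print(outputs[0].outputs[0].text)
```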

It's a really active field, and there's just as much interest in running huge models on huge hardware as there is in running big models on small hardware. I recently heard of layerwise inference for CPUs: load each layer of the network into the CPU cache on demand. That's typically a bottleneck operation on GPUs, but CPU memory is so bloody fast that it might actually work fine. I haven't played with it myself, or read the paper all that deeply, so I can't really comment beyond saying it's an interesting idea.
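The layerwise idea, sketched as hypothetical code (load_layer and the on-disk layout are made up for illustration, not a real library API):

```python
# Layer-at-a-time inference: stream one layer's weights from disk, apply it,
# then drop it, so peak RAM is one layer rather than the whole model.
import numpy as np

def load_layer(path: str, index: int) -> np.ndarray:
    """Placeholder: memory-map a single layer's weight matrix from disk."""
    return np.load(f"{path}/layer_{index}.npy", mmap_mode="r")

def layerwise_forward(path: str, n_layers: int, x: np.ndarray) -> np.ndarray:
    for i in range(n_layers):
        w = load_layer(path, i)      # only this layer resident at once
        x = np.maximum(x @ w, 0.0)   # stand-in for the real block (attention, MLP, ...)
        del w                        # release the mapping before the next layer
    return x
```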

[-] __matthew__@lemmy.world 0 points 10 months ago

Sorry, but has anyone in this thread actually tried running local LLMs on a CPU? You can easily run a 7B model at varying levels of quantization (e.g. 5-bit quantization) and get a generalized, promptable LLM. Yeah, of course it's going to take ~4 GB of RAM (which is mem-mapped and paged into memory), but you can easily fine-tune smaller, more specific models (like the translation one mentioned above) and get surprising intelligence at a fraction of the resources.

Take, for example, phi-2, which performs as well as 13B-parameter models with only 2.7B params. Yeah, that's still going to take ~1.5 GB of RAM, which Firefox wouldn't reasonably ship, but many lighter-weight specialized tasks could easily use something like a fine-tuned 0.3B model with quantization.
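As an illustration, loading phi-2 for CPU generation with Hugging Face transformers might look like the sketch below (unquantized it needs several GB of RAM, so the ~1.5 GB figure above assumes quantization):

```python
# CPU generation with phi-2 via transformers (fp32 by default on CPU).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

inputs = tok("Instruct: Summarize this page in one line.\nOutput:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```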

[-] cley_faye@lemmy.world 1 points 10 months ago

Yes, I did. And yes, it is possible. It's terribly slow in comparison, making it less useful. It very quickly devolves into random mumbling or gets stuck in weird loops. It also hogs resources that the other tasks you're doing actually need.

I mainly test dev AI solutions, and moving from 1B to 7B models made them vastly more pertinent. Moving from a CPU implementation (Ryzen 7 3700X) to a GPU (RTX 3080 Ti) made them fast enough to be used for quick completion and immediate suggestions without breaking my workflow, in addition to freeing resources for the IDE, build tools and the actual software being run. Running on the CPU had multi-second delays, which made this use case completely useless.

[-] raspberriesareyummy@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

yeah, CPU vendors will love the increased sales thanks to an even more resource-hogging shitty web browser
