sith@lemmy.zip 1 point 5 hours ago

That's what I thought. Thanks. The specification says that only 24 of the 28 lanes are available. Do you know why?

9 points · submitted 7 hours ago (last edited 7 hours ago) by sith@lemmy.zip to c/amd@lemmy.zip

Hello!

I'm looking into buying a system for running inference with small to medium-sized LLMs. Is there any AM5 CPU + chipset combination that supports 2× PCIe x16 with all lanes connected directly to the CPU? From what I've gathered, no such configuration exists, because Ryzen 7000/9000 CPUs expose at most 24 usable PCIe lanes. That would mean going for a Threadripper configuration, which is much more expensive. (The ROCm mGPU documentation states that all lanes shall be connected directly to the CPU.)

I could probably manage running two GPUs at x8 each, but that's certainly not optimal...

But the thing is, I find it quite hard to navigate the AMD website and the sites of the various motherboard manufacturers, so I might very well be wrong.

So again: is there any AM5 CPU + chipset combination that supports 2× PCIe x16 with all lanes connected directly to the CPU?
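
(For anyone who later wants to verify what a build actually negotiated: on Linux, the link width each device trained at is readable straight from sysfs. A minimal sketch, assuming the standard kernel PCI attributes; filtering on PCI class 0x03 to find display controllers is my assumption about how the GPUs enumerate.)

```python
# Minimal sketch: compare negotiated vs. maximum PCIe link width for each
# display-class PCI device, using the stock Linux sysfs PCI attributes.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        dev_class = (dev / "class").read_text().strip()
        if not dev_class.startswith("0x03"):  # 0x03xxxx = display controller
            continue
        cur = (dev / "current_link_width").read_text().strip()
        mx = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # attribute missing on some devices
    print(f"{dev.name}: running x{cur} of a possible x{mx}")
```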

sith@lemmy.zip 2 points 3 days ago (last edited 3 days ago)

I can relate to this. And off the record (I know this isn't always a popular opinion in the Fediverse): for this kind of problem, I find that LLMs help a lot.

sith@lemmy.zip 3 points 3 days ago

Thanks for sharing!

sith@lemmy.zip 24 points 3 days ago

That kind of behavior can also be a sign that the documentation is hard to find or hard to comprehend. Or that something isn't documented at all, but the seniors imagine it is, because the answer is obvious to them.

sith@lemmy.zip 8 points 3 days ago (last edited 3 days ago)

If someone actually wants help searching Lemmy or the Fediverse, I recommend this site: https://fedi-search.com/

Very simple, but it does the job. It's also a good way to learn advanced Google queries.
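
Under the hood, this kind of search is mostly advanced site: queries. A rough sketch of the sort of query string such a front end might build; the instance list below is just an example of mine, not fedi-search's actual list:

```python
# Rough sketch: build a Google query that ORs site: filters over a few
# Fediverse instances. The instance list is illustrative only.
from urllib.parse import quote_plus

def fedi_query(terms: str, instances: list[str]) -> str:
    sites = " OR ".join(f"site:{host}" for host in instances)
    return "https://www.google.com/search?q=" + quote_plus(f"{terms} ({sites})")

print(fedi_query("AM5 PCIe lanes", ["lemmy.world", "lemmy.zip", "programming.dev"]))
```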

sith@lemmy.zip 38 points 4 days ago

Remember that most people don't even know there are such things as "rankings" or "indexers" in this context.

781 points · submitted 4 days ago (last edited 4 days ago) by sith@lemmy.zip to c/fediverse@lemmy.world

It is clear that the signal-to-noise ratio of the WWW is getting worse. It's much harder to find good content with a good old search engine, and when you do find something good, it's usually hosted on Reddit or Stack Exchange.

So remember: even if it's easy to Google something (well, it isn't nowadays), we want to create a fediverse of good content that helps people (I hope). So it's always better to write a real answer if you have the time and energy. Please help boost the SNR and reverse the AI-fueled information degradation loop.

sith@lemmy.zip 8 points 4 days ago

Actually, I did. No thanks to you, though.

sith@lemmy.zip 5 points 4 days ago

Probably good, but I want to stay away from anything related to Kubernetes. My experience is that it's overkill and a black hole of constant debugging. Unfortunately. Thanks though!

sith@lemmy.zip 2 points 4 days ago

Looks good. Thanks!

58 points · submitted 4 days ago by sith@lemmy.zip to c/selfhosted@lemmy.world

Good FOSS software and reliable service providers? Etc.


Howdy!

(I moved this comment from the noob question thread because it got no replies.)

I'm not a total noob when it comes to general compute and AI. I've been using online models for some time, but I've never tried to run one locally.

I'm thinking about buying a new computer for gaming and for running/testing/developing LLMs (not training, only inference and in-context learning). My understanding is that ROCm is becoming decent (and I also hate Nvidia), so I'm thinking a Radeon RX 7900 XTX might be a good start. If I buy the right motherboard, I should be able to put another XTX in there later as well, if I use watercooling.

So first, what do you think about this? Are the 24 gigs of VRAM worth the extra bucks, or should I just go for a mid-range GPU like the Arc B580?
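
For context, here's the back-of-envelope I've been using for whether a model fits: weight bytes ≈ parameters × quantization bits ÷ 8, plus some headroom for the KV cache and runtime. The 1.2 overhead factor below is a guess on my part, not a measured number:

```python
# Back-of-envelope VRAM estimate: params (billions) * bits / 8 gives the
# weight size in GB; multiply by a rough overhead factor for KV cache
# and runtime allocations. The 1.2 factor is a guess, not measured.
def vram_gb(params_b: float, quant_bits: float, overhead: float = 1.2) -> float:
    return params_b * quant_bits / 8 * overhead

for params in (7, 13, 32, 70):
    need = vram_gb(params, 4)  # assume 4-bit quantization
    print(f"{params}B @ 4-bit ≈ {need:.0f} GB "
          f"({'fits' if need <= 24 else 'too big'} in 24 GB, "
          f"{'fits' if need <= 12 else 'too big'} in 12 GB)")
```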

I'm also curious about experimenting with a no-GPU setup, i.e. CPU + lots of RAM. What kind of models do you think I'd be able to run with decent performance on something like a Ryzen 7 9800X3D with 128/256 GB of DDR5? How does that compare to the Radeon RX 7900 XTX? And is it possible to utilize both CPU and GPU when running inference with a single model, or is it either/or?
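
On the either/or part: as far as I understand, llama.cpp-style runtimes can split a model, offloading some transformer layers to VRAM and running the rest on the CPU from system RAM. And since token generation is mostly memory-bandwidth-bound, a rough ceiling is tokens/s ≈ bandwidth ÷ model bytes: dual-channel DDR5-6000 gives about 96 GB/s versus roughly 960 GB/s on the 7900 XTX, so pure CPU inference lands around 10× slower per token. A minimal sketch with the llama-cpp-python bindings; the model path and layer count are placeholders:

```python
# Minimal sketch of hybrid CPU+GPU inference with llama-cpp-python:
# n_gpu_layers sets how many layers are offloaded to VRAM; the rest run
# on the CPU from system RAM. Path and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=30,  # offload as many layers as fit in VRAM
    n_ctx=4096,       # context window
)

out = llm("Q: What is in-context learning? A:", max_tokens=128)
print(out["choices"][0]["text"])
```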

Also... wouldn't it be better if noobs posted questions in the main thread? Questions would probably reach more people that way. It's not like there's a huge amount of activity...

A corny Emacs? (lemmy.zip)
22 points · submitted 1 week ago (last edited 1 week ago) by sith@lemmy.zip to c/emacs@lemmy.ml

I just got myself a Corne 3x6 keyboard. This probably means that I will drop evil-mode and instead solve ergonomics through home row mods. I will also try out Colemak. But one step at a time.

I'm curious whether any of my fellow Lemmies also use Emacs with a Corne, and whether you'd like to share your keymaps? Or hard-learned lessons?

10 points · submitted 1 week ago (last edited 1 week ago) by sith@lemmy.zip to c/librewolf@lemmy.ml

Howdy!

I recently started using LibreWolf (because there is no good Firefox package for Guix ATM). However, when I tried to install my standard extensions, I was redirected to the Mozzarella web page. I thought "fine, I'm a good GNU citizen" and installed a few extensions. But I soon realized that they were seriously outdated and also dysfunctional. For example, the Bitwarden extension is from July 2023.

Shouldn't LibreWolf stop sending users to Mozzarella if it's a dead project hosting outdated extensions, considering all the security issues that implies?
