I went all out and got the 192 GB model, and I've been using it to run local machine-learning models successfully. Llama 2 70B runs fairly well after quantizing it to 16-bit instead of the original 32-bit, which ate all 192 GB of memory plus 40 GB of swap before running out. Smaller models like Llama 2 7B are wicked fast.
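For illustration, here's a minimal sketch of what a 16-bit load looks like if you go the Hugging Face transformers route on PyTorch's MPS backend; the checkpoint name and generation settings are just placeholders, and other toolchains (llama.cpp, MLX, etc.) work too, so treat this as a sketch rather than my exact setup.

```python
# Minimal sketch, assumptions: PyTorch with MPS support, Hugging Face
# transformers, and access to the Llama 2 70B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # load weights in 16-bit instead of 32-bit
    low_cpu_mem_usage=True,      # avoid materializing an extra full-size copy while loading
).to("mps")                      # run on the Apple GPU via Metal

prompt = "Explain unified memory in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("mps")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The 16-bit load is what makes the difference: 70B parameters at 2 bytes each is roughly 140 GB of weights, versus roughly 280 GB at 4 bytes, which is why it fits in 192 GB of unified memory while the 32-bit version spilled into swap.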
Performance for normal development is simply divine: I can have basically every project I ever work on open across my dual 4K monitors without any slowdown, while simultaneously compiling and running models in the background without a stutter.
My biggest complaint so far is that my Thunderbolt 4 dock doesn't support the 144 Hz my monitors can crank out.
I have had one system crash so far (not sure of the cause), but overall stability has been impeccable.
I'm used to x86 machines, and one flaw with the Apple Silicon switch in general is that some of my React Native libraries were compiled in a way that makes them difficult to build without Rosetta. That's obviously not Apple's problem, nor is it specifically a Studio issue.
Spending 9k was incredibly painful, but I'm happy to have a machine that outperforms most retail machines on the market for VRAM and machine learning without spending even more.