That was my question... How much on-chip memory do they have? And what are the applications for that amount of memory? I think an image generator needs something like 4-5GB and an LLM that's smart enough to be a general-purpose chatbot needs like 8-10GB. More would be better. And at that point, wouldn't you be better off making it unified memory like with the M-series Macs or other APUs? Or maybe this isn't targeted at generative AI but at some other applications. Hence my question.
Depending on the chip, they have somewhere from 100 to 400 GB/s of memory bandwidth. I'm not sure about the numbers on Intel processors. I think the consumer processors have about 50 - 80 GB/s (~Alder Lake, dual-channel DDR5). Mine seems to have way less. And a recent GPU will be somewhere in the range of 400 to 1000 GB/s. But consumer graphics cards stop at 24GB of VRAM, and those flagship models are super expensive, even compared to Apple products.
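If you want a rough idea of what your own machine does, here's a minimal sketch (assuming Python with NumPy installed; a naive copy loop like this only gives a lower bound, a proper STREAM-style benchmark is more accurate):

```python
import time
import numpy as np

# Allocate ~512 MiB and copy it repeatedly; each copy reads src and writes dst.
src = np.ones(64 * 1024 * 1024, dtype=np.float64)  # 512 MiB
dst = np.empty_like(src)

runs = 10
start = time.perf_counter()
for _ in range(runs):
    np.copyto(dst, src)
elapsed = time.perf_counter() - start

bytes_moved = 2 * src.nbytes * runs  # read + write per run
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s effective copy bandwidth")
```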
The people from the llama.cpp project did some measurements, and I believe the Apple "Metal" framework seems to outperform the x86 computers by an order of magnitude or so. I'm not sure, it's been some time since I skimmed the discussions on their GitHub page.
I always wonder anyway why people are against helping. I mean, sure, dealing in weapons is reprehensible. And it's probably expensive, too. But the alternative also leads to consequences that we'll have to pay for tomorrow. And Putin, as we all know, wants to restore the Soviet Union to its former glory. With that decision we'd be giving up the Baltic states et cetera as well. Or we're only pushing the problem back a few years, to after Ukraine has been taken. In the end, though, it will have become a bigger problem by then.
Uh, is the EU petition portal safe? And do I want to enter all my data, address, etc. there?
Get a different hobby. Find some activity you're interested in. Then focus on that and slowly let go. And keep in mind what matters to you.
What kind of AI workloads are these NPUs actually good at? I mean it can't be most of generative AI like LLMs, since that's mainly limited by memory bandwidth, and at that point it doesn't really matter whether you have an NPU, GPU or CPU... You first need lots of fast RAM and a wide interface to it.
The Apple chips also have a wide interface to the RAM. That means you can run chatbots (LLMs) and other AI workloads that are memory-bound at crazy speeds compared to an Intel (or AMD) computer.
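To put rough numbers on the memory-bound argument: for single-user generation, every new token has to stream essentially all of the model's weights from RAM once, so bandwidth divided by model size gives a rough upper bound on tokens per second. A tiny illustration (the figures below are assumed for illustration, not measurements):

```python
# Back-of-envelope: tokens/s is roughly capped by bandwidth / model size,
# because each generated token reads (nearly) all weights once.
def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assumed example numbers, not benchmarks:
print(max_tokens_per_second(60, 4))    # dual-channel DDR5 desktop, ~4 GB quantized model -> ~15 tok/s
print(max_tokens_per_second(400, 4))   # wide unified memory (M-series class) -> ~100 tok/s
```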
Yes, English and German are both Germanic languages, so they are indeed more closely related. Though I think English is one of the easiest languages to learn anyway. You don't have to memorize a der/die/das with every noun, and the irregular verbs are, in my opinion, a joke compared to German and French (which I had in school at some point), where every rule has a dozen exceptions anyway … I looked around the internet a bit; people say you can learn German to B2 level on the side in roughly 1-2 years.
Software: https://github.com/awesome-selfhosted/awesome-selfhosted
Guide: https://github.com/mikeroyal/Self-Hosting-Guide
As a beginner you might want to start out with one of the all-in-one turnkey operating systems like yunohost.org, dietPi.com or unRaid, or one of a bunch of others (see the awesome-selfhosted list).
Has that been tried since 1790, when the French decided to behead all the rich people?
It's shiny, they advertise, they put money into spreading the word. And the onboarding process is probably way easier?! Also, back when Mastodon was in the media, it wasn't the right time yet. Now, especially with Musk, it is. And the attention is on Bluesky since that is newer and what's hyped right now.