[-] StubbornCassette8@feddit.nl 5 points 1 month ago

What are the hardware requirements on these larger LLMs? Is it worth quantizing them to run on lower-end hardware for self-hosting? I'm not sure how much doing so would impact their usefulness.
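For context, my rough mental model of what self-hosting a quantized model looks like, using llama-cpp-python (the model file, context size, and GPU offload values below are just placeholder assumptions):

```python
# Rough sketch: load and query a 4-bit quantized GGUF model locally.
# Model path, context size, and GPU layer count are placeholder values.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # 4-bit quantized weights
    n_ctx=4096,       # context window size
    n_gpu_layers=35,  # layers offloaded to the GPU; 0 means CPU-only
)

result = llm("Explain quantization in one paragraph:", max_tokens=256)
print(result["choices"][0]["text"])
```

From what I've read, 4-bit quants cut the weight memory to roughly a quarter of full 16-bit at a modest quality cost, but I'd love to hear real-world numbers.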

[-] StubbornCassette8@feddit.nl 20 points 9 months ago

Please conduct yourself.

[-] StubbornCassette8@feddit.nl 1 point 10 months ago

I've been getting around that mostly with Chocolatey and other PowerShell scripts on Windows. I'm sure the same can be done on Linux.
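As a hypothetical sketch of what I mean (the package names and the apt fallback are assumptions, not a tested script):

```python
# Hypothetical cross-platform bootstrap: Chocolatey on Windows,
# apt-get on Debian-based Linux. Package names are examples only.
import platform
import subprocess

PACKAGES = ["git", "vlc"]  # example packages

def install(packages):
    if platform.system() == "Windows":
        for pkg in packages:
            subprocess.run(["choco", "install", pkg, "-y"], check=True)
    else:
        subprocess.run(["sudo", "apt-get", "install", "-y", *packages], check=True)

if __name__ == "__main__":
    install(PACKAGES)
```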

Are you hosting the desktop or server version of Trilium?

[-] StubbornCassette8@feddit.nl 1 point 10 months ago

Also, not bashing Trilium. If it works for you, great. It's a self-hosted solution for keeping your notes. Can't complain about that!

[-] StubbornCassette8@feddit.nl 1 point 10 months ago

Have you considered using Obsidian paired with Syncthing?

I keep my Obsidian notebook in Syncthing folders and find it works well enough across Windows and Android devices. The plugins transfer too; you only have to trust the plugin authors once, when setting up Obsidian for the first time and pointing it at the right directory.

You will get conflicts on certain files if you open Obsidian on multiple devices at the same time, but the .md note files themselves should be preserved, which I think is what really matters.
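If conflicts do pile up, a quick way to find them is scanning for the sync-conflict copies Syncthing leaves behind; a minimal sketch, assuming the vault lives under ~/Sync/ObsidianVault:

```python
# Minimal sketch: list Syncthing conflict copies inside an Obsidian vault.
# Syncthing names them like "note.sync-conflict-20240101-120000-ABCDEFG.md".
from pathlib import Path

VAULT = Path.home() / "Sync" / "ObsidianVault"  # assumed vault location

for conflict in sorted(VAULT.rglob("*.sync-conflict-*")):
    print(conflict.relative_to(VAULT))
```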

[-] StubbornCassette8@feddit.nl 2 points 11 months ago

I'm not a biologist. The only context I have regarding rapid cellular reproduction in space is the 2017 movie "Life," where a fictional alien spells doom for humanity and Earth as we know it.

Are there any positives to this news? My understanding is that multi-celled organisms have a hard time repairing themselves in microgravity, with bone loss in particular among the affected processes. Hoping the research being conducted here helps advance medicine on that front.

I can only imagine what having the runs is like in space. Super-powered E. coli coursing through your gut sounds extra... you know 💩

[-] StubbornCassette8@feddit.nl 1 point 11 months ago* (last edited 11 months ago)

Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I have seen that be discussed before with varying results.

Last time I tried running my own model was in the early days of the Llama release, on an RTX 3060. Responses were much slower than OpenAI's API, and the output quality was way off.

It doesn't have to be perfect, but I'd like to make my own API calls from a remote device phoning home to my own server instead of to OpenAI's. Using my own documents as a reference would be a plus too, to keep my info private while still accessible to the LLM.
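What I'm picturing is something like the sketch below: the standard openai client pointed at a self-hosted, OpenAI-compatible server (the hostname, port, and model name are placeholders):

```python
# Sketch: same client code, but pointed at a self-hosted
# OpenAI-compatible server (e.g. llama.cpp's server or Ollama)
# instead of api.openai.com. URL and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://homeserver.local:8080/v1",  # my own box, not OpenAI
    api_key="not-needed-locally",                # local servers usually ignore this
)

resp = client.chat.completions.create(
    model="local-llama",  # whatever model the server exposes
    messages=[{"role": "user", "content": "Summarize my notes on Syncthing."}],
)
print(resp.choices[0].message.content)
```

Feeding my own documents in would then be a retrieval layer on top of this, but that's a separate project.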

Didn't know about ElevenLabs. Checking them out soon.

Edit because writing is hard.

[-] StubbornCassette8@feddit.nl 10 points 11 months ago

Can you share details? Been thinking of doing this with a new PC build. Curious what your performance and specs are.
