this post was submitted on 07 Jul 2025
3 points (71.4% liked)
ObsidianMD
4688 readers
13 users here now
Unofficial Lemmy community for https://obsidian.md/
founded 2 years ago
@emory @obsidianmd i will be honest, that is a question you might be more equipped to answer than i am, but here are the links if you wanna check it out. i see it has a folder named LLMProviders, but i am not sure if that is what you mean or not.
obsidian://show-plugin?id=copilot
https://github.com/logancyang/obsidian-copilot
https://github.com/logancyang/obsidian-copilot/tree/master/src/LLMProviders
@mitch @obsidianmd this extension is great and i wish others used it instead of reimplementing things https://github.com/pfrankov/obsidian-ai-providers
Any chance of something like this working with Le Chat/Mistral? I don't see it in the READMEs.
@INeedMana if they offer an OpenAI-ish compatible API (e.g.
https://blahblah/v1
) you can add it as an OpenAI service with a new endpoint and your own API creds inside AI Providers, but IDK about the various other extensions. the mesh-AI one uses fabric, though, and you can configure fabric to use mistral's API just fine, i reckon.
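to make the "OpenAI-ish compatible" idea concrete, here's a minimal sketch of the kind of request such a client sends when you swap in a different base URL. the endpoint (mistral's api.mistral.ai/v1), the model name, and the key are all assumptions for illustration, not something from this thread; the code only builds the request, it doesn't send it.

```python
import json
import urllib.request

# Assumed OpenAI-compatible base URL (Mistral's, for illustration only).
BASE_URL = "https://api.mistral.ai/v1"


def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-style chat completion request.

    Any service exposing the same /chat/completions shape can be targeted
    just by changing BASE_URL and the credentials.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder key
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("YOUR_API_KEY", "mistral-small-latest", "hello")
print(req.full_url)  # https://api.mistral.ai/v1/chat/completions
```

the point is just that "add it as an OpenAI service with a new endpoint" means exactly this: same request shape, different host and bearer token.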
@mitch @emory @obsidianmd Do you pay for it?
@gabek @obsidianmd @emory I do not. It seems to function with all the features when you use local inferencing via Ollama.
@gabek @mitch @obsidianmd i don't either, i have other ways of doing what the paid version supports. i use cloud foundation models and local; my backends for embeddings are always ollama, lmstudio, and/or anythingLLM.
#anythingLLM has an easily deployed docker release and a desktop application. it's not as capable at managing and cross-threading conversations as LM Studio (really, Msty does it best), but #aLLM has a nice setup for agents and RAG.
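since ollama keeps coming up as the embeddings backend here, a quick sketch of what an embedding call to a local ollama server looks like. the default port (11434), the /api/embeddings path, and the model name are assumptions based on common ollama setups, not details from this thread; again the code only builds the request rather than sending it.

```python
import json
import urllib.request

# Assumed default local Ollama endpoint for embeddings.
OLLAMA_URL = "http://localhost:11434/api/embeddings"


def build_embedding_request(model: str, text: str) -> urllib.request.Request:
    """Build (but don't send) an embedding request for a local Ollama server.

    An Obsidian plugin pointed at a local backend is doing essentially this
    for each chunk of vault text it indexes.
    """
    payload = {"model": model, "prompt": text}
    return urllib.request.Request(
        url=OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# "nomic-embed-text" is just an example embedding model name.
req = build_embedding_request("nomic-embed-text", "some vault note text")
print(req.full_url)  # http://localhost:11434/api/embeddings
```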
@gabek @mitch @obsidianmd some of the small models i like using with obsidian vaults locally are deepseek+llama distills, plus MoE models for every occasion: fiction and creative writing, classification, and vision. there's a few 8x merged models that are extremely fun for d&d.
i have a speech-operated adventure like #Zork that uses a 6x MoE that can be really surreal.
there's a phi2-ee model on hf that is small and fast at electrical engineering work; i use that for a radio and electronics project vault!