[-] Fingerthief@infosec.pub 4 points 4 months ago

Web search is definitely something I want to add, haven't quite figured out the route I want to take implementing it just yet though.

Hopefully I can get it added sooner rather than later!

43
submitted 4 months ago* (last edited 4 months ago) by Fingerthief@infosec.pub to c/opensource@lemmy.ml

cross-posted from: https://infosec.pub/post/13676291

I've been building MinimalChat for a while now, and based on the feedback I've received, it's in a pretty decent place for general use. I figured I'd share it here for anyone who might be interested!

Quick Features Overview:

  • Mobile PWA Support: Install the site like a normal app on any device.
  • Any OpenAI formatted API support: Works with LM Studio, OpenRouter, etc.
  • Local Storage: All data is stored locally in the browser with minimal setup; in Docker, just enter a port and go.
  • Experimental Conversational Mode (GPT Models for now)
  • Basic File Upload and Storage Support: Files are stored locally in the browser.
  • Vision Support with Maintained Context
  • Regen/Edit Previous User Messages
  • Swap Models Anytime: Maintain conversational context while switching models.
  • Set/Save System Prompts: Set a system prompt for the conversation; prompts are saved to a list so you can switch between them easily.

The idea is to make it essentially foolproof to deploy or set up while being generally full-featured and aesthetically pleasing. No additional databases or servers are needed, everything is contained and managed inside the web app itself locally.
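The "Set/Save System Prompts" feature above maps onto the OpenAI chat format in a simple way: a saved system prompt is just a leading message with role "system". A minimal sketch (the function name is illustrative, not MinimalChat's actual code):

```javascript
// Sketch: a "system prompt" in the OpenAI chat format is just the first
// message in the array, with role "system"; the conversation follows it.
function buildMessages(systemPrompt, history) {
  return [{ role: "system", content: systemPrompt }, ...history];
}

const messages = buildMessages("You are a concise assistant.", [
  { role: "user", content: "Hello!" },
]);
console.log(messages[0].role); // prints "system"
```

Swapping the saved prompt only changes that first entry, which is why the rest of the conversation context can be kept intact.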

It's another chat client in a sea of clients but it is unique in its own ways in my opinion. Enjoy! Feedback is always appreciated!

Self Hosting Wiki Section https://github.com/fingerthief/minimal-chat/wiki/Self-Hosting-With-Docker

I thought sharing here might be a good idea as well, some might find it useful!

I've added some updates since the initial post that give a huge improvement to message rendering speed, plus a plethora of new models you can load and run fully locally in your browser (Edge and Chrome) via WebGPU and WebLLM.

[-] Fingerthief@infosec.pub 10 points 4 months ago

I haven't personally tried it with Ollama yet, but it should work, since Ollama supports the OpenAI response format for its API: https://github.com/ollama/ollama/blob/main/docs/openai.md

I might give it a go here in a bit to test and confirm.
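For reference, Ollama exposes its OpenAI-compatible endpoint under /v1 on its default port 11434, so an OpenAI-formatted client only needs to point its base URL there. A hedged sketch of the request such a client sends (the helper name and "llama3" model are illustrative; use whatever model you've pulled locally):

```javascript
// Sketch of the request an OpenAI-formatted client sends to Ollama's
// OpenAI-compatible endpoint (default port 11434, path /v1/chat/completions).
function buildChatRequest(baseUrl, model, userText) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}

const req = buildChatRequest("http://localhost:11434", "llama3", "Hi there");
// fetch(req.url, req.options) would return an OpenAI-shaped response.
console.log(req.url); // prints http://localhost:11434/v1/chat/completions
```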

[-] Fingerthief@infosec.pub 15 points 4 months ago* (last edited 4 months ago)

Local models are indeed already supported! In fact, any API (local or otherwise) that uses the OpenAI response format (which is the standard) will work.

So you can use something like LM Studio to host a model locally and connect to it via the local API it spins up.

If you want to get crazy...fully local browser models are also supported in Chrome and Edge. The selected model is downloaded in full and run on your GPU via the browser's WebGPU support, letting you chat with it directly. It's more experimental and takes real hardware power, since you're hosting the model in the browser itself.

[-] Fingerthief@infosec.pub 11 points 4 months ago* (last edited 4 months ago)

This app is more of an interface for connecting to any number of LLMs that expose an API; the application itself ships with no model.

For example you can choose to use GPT-4 Omni by providing an API key from OpenAI.

But you can also connect to services like OpenRouter with an API key and select between the 20+ different models they provide access to.

It also supports connecting to fully local models via programs like LM Studio, which downloads models from Hugging Face to your machine and spins up a local API you can connect to and chat with.
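Because all of these services speak the same OpenAI response format, switching providers is just a matter of changing the base URL and API key; the endpoint paths and payloads stay identical. A sketch under that assumption (the LM Studio URL assumes its default local port, 1234):

```javascript
// Sketch: one client, many providers. Only the base URL and key differ;
// the chat endpoint path is the same OpenAI-format path everywhere.
const providers = {
  openai:     { baseUrl: "https://api.openai.com/v1",    needsKey: true  },
  openrouter: { baseUrl: "https://openrouter.ai/api/v1", needsKey: true  },
  lmstudio:   { baseUrl: "http://localhost:1234/v1",     needsKey: false },
};

function chatEndpoint(name) {
  return `${providers[name].baseUrl}/chat/completions`;
}

console.log(chatEndpoint("lmstudio")); // prints http://localhost:1234/v1/chat/completions
```

This is also what makes swapping models mid-conversation cheap: the message history is provider-agnostic, so it can be replayed against any of these endpoints.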

113
submitted 4 months ago* (last edited 4 months ago) by Fingerthief@infosec.pub to c/selfhosted@lemmy.world


[-] Fingerthief@infosec.pub 4 points 1 year ago

I used Apple for the last few years until recently and I can't say I've ever really noticed stuff like apps faking being another app. That's not to say it doesn't happen of course.

I do know the Apple app approval process is definitely more strict than what is required for the Play Store.

I'm not very experienced with Apple or Android development so I'd be curious to hear from devs that use both platforms as well.

[-] Fingerthief@infosec.pub 11 points 1 year ago

Seems like a friendly enough response was given to your comment and you automatically assumed they were only interested in saying you're wrong.

Having a discussion is not "proving everyone wrong".

1
submitted 1 year ago* (last edited 1 year ago) by Fingerthief@infosec.pub to c/minimalgpt@infosec.pub

Changes from release notes

  • Adjusted chat message bubbles' max width to take up nearly the entire width of the chat.

  • Increased the size of message label logos and font.

  • Adjusted message font size and line-height for a better reading experience.

  • Added a border to one side of message bubbles as part of some UI design changes.

1
submitted 1 year ago* (last edited 1 year ago) by Fingerthief@infosec.pub to c/minimalgpt@infosec.pub

I've created a fairly thorough overview of MinimalGPT with all the basic info to get started. Please feel free to take a look!

1
submitted 1 year ago* (last edited 1 year ago) by Fingerthief@infosec.pub to c/minimalgpt@infosec.pub

Link to a live version of MinimalGPT that I host; you can always spin up a local version yourself via the GitHub project.

201
submitted 1 year ago* (last edited 1 year ago) by Fingerthief@infosec.pub to c/cat@lemmy.world
[-] Fingerthief@infosec.pub 15 points 1 year ago

As a dev it’s nice to check all the official guideline boxes, as a user I’d much rather actually have features.

[-] Fingerthief@infosec.pub 15 points 1 year ago

They’re just going to source the allowed parts from Red Bull, basically exactly like they used to do with Toro Rosso.

To think that will equate to an RB19 is a bit insane in my opinion. They will likely improve, but still be a mid-midfield team like they used to be with Toro Rosso.

[-] Fingerthief@infosec.pub 7 points 1 year ago* (last edited 1 year ago)

Now it’s broken; I guess I don’t use it this way often enough. Interesting nonetheless!

Edit - it’s very particular; it matters whether I include an uppercase “S” or not. That’s amusing.

I wonder if the temperature settings adjustment would fix that or just make it even weirder.

[-] Fingerthief@infosec.pub 11 points 1 year ago

Idk what I’m doing wrong, thankfully it always seems to listen and work fine for me lmao

[-] Fingerthief@infosec.pub 4 points 1 year ago

You’ve never actually used them properly then.


Fingerthief

joined 1 year ago