submitted 5 months ago by NoTimeLeft@lemmy.world to c/privacy@lemmy.ml

As you may know, ChatGPT collects a lot of data on its users to improve their AI, but this poses risks of its own. I was wondering whether there are privacy-respecting alternatives to ChatGPT, perhaps on F-Droid, the Aurora/Play Store, or for Linux.

Are there any alternatives you know of? Or are there other ways to interact with ChatGPT without giving out personal information, such as a privacy-focused front-end?

[-] projectmoon@lemm.ee 18 points 5 months ago
[-] anamethatisnt@lemmy.world 13 points 5 months ago

!localllama@sh.itjust.works

[-] CommunityLinkFixer 9 points 5 months ago

Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn't work well for people on different instances. Try fixing it like this: !localllama@sh.itjust.works

[-] shootwhatsmyname@lemm.ee 17 points 5 months ago* (last edited 5 months ago)

Are you able to run LLMs on your own computer?

If so:

[-] mynamesnotrick@lemmy.zip 2 points 5 months ago* (last edited 5 months ago)

Yep, I use https://github.com/oobabooga/text-generation-webui and am currently using this model: mistralai/Mistral-7B-Instruct-v0.2.

[-] simple@lemm.ee 12 points 5 months ago

Be aware, OP, that local LLMs are quite a bit worse than what's available online. Llama 3 is (probably?) the best one available now, and even that has a habit of being very stupid sometimes compared to Claude or ChatGPT.

[-] NoTimeLeft@lemmy.world 1 points 5 months ago

Alright. Will keep that in mind, thanks!

[-] shootwhatsmyname@lemm.ee 1 points 5 months ago

Phi-3 is surprisingly good for its size and speed, too.

[-] aleph@lemm.ee 11 points 5 months ago

I use https://huggingface.co/chat/, which is an open source alternative to ChatGPT.

Their privacy policy is here.

[-] pineapplelover@lemm.ee 10 points 5 months ago
[-] RiQuY@lemm.ee 6 points 5 months ago* (last edited 5 months ago)

One of these is programming-oriented, but they let you choose the model, so I guess it's fine:

OpenAI API alternative:

[-] NoTimeLeft@lemmy.world 1 points 5 months ago

Good to have, I'm a bit of a programmer myself :)

[-] A1kmm@lemmy.amxl.com 4 points 5 months ago

The best option is to run the models locally. You'll need a good enough GPU - I have an RTX 3060 with 12 GB of VRAM, which is enough to do a lot of local AI work.

I use Ollama, and my favourite model to use with it is Mistral-7b-Instruct. It's a 7 billion parameter model optimised for instruction following, but usable with 4 bit quantisation, so the model takes about 4 GB of storage.

You can run it from the command line rather than a web interface - run the container for the server, and then something like docker exec -it ollama ollama run mistral, giving a command line interface. The model performs pretty well; not quite as well on some tasks as GPT-4, but also not brain-damaged from attempts to censor it.

By default it keeps a local history, but you can turn that off.
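For scripting, Ollama also exposes a small HTTP API on localhost (port 11434 by default). A minimal sketch, assuming the server described above is running and the mistral model has been pulled; the endpoint and payload fields follow Ollama's documented /api/generate interface:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt, model="mistral"):
    # stream=False asks the server for one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="mistral"):
    """POST the prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (requires the server to be running):
#   print(ask("Why is the sky blue?"))
```

Nothing here leaves your machine; the request only ever goes to localhost.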

[-] delirious_owl@discuss.online 1 points 5 months ago

Is that 12GB RAM just for the model or total?

[-] yonder@sh.itjust.works 2 points 5 months ago

I think the GPU has 12 GB of physical RAM.

[-] anamethatisnt@lemmy.world 4 points 5 months ago

There are private GPT solutions coming, e.g. https://www.fujitsu.com/global/products/data-transformation/data-driven/ai-test-drive/
They are aimed at companies that want to self-host for compliance reasons.

I wouldn't trust an LLM solution with sensitive information unless I host it myself.

[-] NoTimeLeft@lemmy.world 2 points 5 months ago

Do you happen to know how this self-hosting would work? Can I run it on my desktop/phone or even a Raspberry Pi? How is the quality of the generated results compared to ChatGPT?

[-] brianorca@lemmy.world 2 points 5 months ago

I can run 7B models on my laptop with its embedded GPU. Running on a phone or a Pi is possible with smaller models, but very slow. Expect good speed with a desktop Nvidia GPU. Later this year, there should be new computers with an NPU integrated into the CPU, which should speed things up for computers that don't have a dedicated GPU. (But a GPU will still outperform them by a lot.)

70B models will run very slowly on even the best consumer hardware due to memory limitations.
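The back-of-envelope arithmetic behind those limits: weight storage is roughly parameter count times bits per weight. A quick sketch (actual memory use runs higher, since the KV cache and runtime overhead come on top of the weights):

```python
def model_size_gb(params_billion, bits_per_weight):
    # parameters * (bits_per_weight / 8) bytes, expressed in GB
    return params_billion * bits_per_weight / 8

# A 4-bit-quantised 7B model fits in ~3.5 GB, comfortably inside 12 GB of VRAM;
# a 4-bit 70B model needs ~35 GB, more than any single consumer GPU offers.
print(model_size_gb(7, 4))   # 3.5
print(model_size_gb(70, 4))  # 35.0
```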

[-] a4ng3l@lemmy.world 1 points 5 months ago

Typically LLMs are rather resource-intensive - you need beefy hardware to run them at speed, especially if you intend to train them on your data to improve their relevance. I don't think mobile phones or run-of-the-mill laptops are going to be enough for any non-trivial implementation. I might be skewed by experience on non-personal projects, though.

[-] Ilandar@aussie.zone 3 points 5 months ago

You could try DuckDuckGo's implementation of GPT-3.5. According to their privacy policy, no personal data is sent back to OpenAI for training. The model is also offline (it cannot access the internet in real time to provide you with a more accurate answer).

[-] CrabAndBroom@lemmy.ml 2 points 5 months ago

I've been using GPT4All and quite liking it so far.

[-] Croquette@sh.itjust.works 2 points 5 months ago

I'm taking a chance asking, but which model would you use for learning a new programming language?

[-] CrabAndBroom@lemmy.ml 3 points 5 months ago

Hmm, not sure exactly. I've been using Llama3 because it seems to give decent results for most things quickly, but I haven't really done much coding with it outside of some simple bash scripts TBH.

[-] Croquette@sh.itjust.works 2 points 5 months ago

I am looking for a basic coding AI. I don't want to create complex software, just something to get me started by example, so bash-script level is good enough for me.

[-] AVincentInSpace@pawb.social 2 points 5 months ago

Your future coworkers will thank you if you do not use an AI for that at all

[-] Croquette@sh.itjust.works 3 points 5 months ago

I'm talking about really basic stuff. AI is great as an entry point to a new language.

For example, in Python, finding out the current folder in which the script is running, or in Preact, how to use simple hooks.

It's fast, and once I know the name of the functions used, I can look up the documentation I need to and find appropriate tutorials and examples.
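For the record, the Python example mentioned here is a stdlib one-liner. A sketch using pathlib, with a guard for interactive sessions where __file__ is undefined:

```python
from pathlib import Path

try:
    # Directory containing the currently running script.
    script_dir = Path(__file__).resolve().parent
except NameError:
    # Interactive session (REPL/notebook): no __file__, fall back to the cwd.
    script_dir = Path.cwd()

print(script_dir)
```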

[-] jjlinux@lemmy.ml 2 points 5 months ago

I use venice.ai because it's browser based and does not require an account. It is somewhat limited, but it works for my extremely limited purposes.

[-] aStonedSanta@lemm.ee 1 points 5 months ago

Anyone know if there is a way to feed a local AI some of the OneNote notebooks from my job so I can search like a lazy bastard?

[-] invisiblegorilla@sh.itjust.works 2 points 5 months ago

Yeah. You can use Ollama and Open WebUI. Open WebUI allows you to do RAG (search your own files) with little effort.
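The retrieval step in RAG boils down to: score each stored chunk of your notes against the query, then paste the best matches into the prompt. A toy illustration using word-count vectors and cosine similarity (Open WebUI uses proper embeddings, but the flow is the same; the note contents here are made up):

```python
import math
import re
from collections import Counter

# Hypothetical note chunks standing in for exported work notes.
notes = {
    "onboarding.txt": "New hires must request VPN access from IT before remote work.",
    "expenses.txt": "Submit travel expenses within 30 days with receipts attached.",
}

def bow(text):
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k note files most similar to the query."""
    q = bow(query)
    return sorted(notes, key=lambda name: cosine(q, bow(notes[name])), reverse=True)[:k]

best = retrieve("how do I get VPN access?")[0]
# The contents of `best` would then be prepended to the LLM prompt as context.
```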

[-] Zerush@lemmy.ml 0 points 5 months ago

I use Andi, which is enough for my needs, and for sure is the most private and trustworthy AI out there (locally hosted ones aside). It was the first AI chat/search engine ever, long before the ones from Google, Bing and Fakebook. Formerly LazyWeb.

[-] tufek@sopuli.xyz 0 points 5 months ago* (last edited 5 months ago)
this post was submitted on 23 May 2024
74 points (92.0% liked)
