this post was submitted on 14 Jul 2024
TechTakes
Proton, which I use for mail and various other services, has gone against the wishes of the majority of their userbase (as measured by their own survey) and implemented an LLM writing assistant in Protonmail, which is a real laugh given Proton's main hook is that its services are end-to-end encrypted
(supposedly this piece of shit will run locally if you meet some incredibly high system requirements, including a high-end GPU or a recent high-end Apple M-series chip, plus a privacy-violating Chromium-based browser. otherwise it breaks e2e by sending your emails unencrypted to Proton's servers, and they do a lot to try to talk over that fact)
Not to downplay what Proton Mail is doing, but they're saying you can run this locally with a 2-core, 4-thread CPU from 2017 (the i3-7100, a 7000-series processor) and an RTX 2060, a GPU that was never considered high end. Perhaps they changed the requirements while you weren't looking. Or am I reading this wrong?
only one of the 8 computers I own (and I'm not being cheeky here and counting embedded or retro systems, just laptops and desktops) is physically capable of meeting the model's minimum requirements, and that's only if I install Chromium on the Windows gaming VM my bigger GPU is dedicated to and access protonmail from there. nothing else I do needs a GPU that big, professional or otherwise; that hardware exists for games and nothing else. compared with the integrated GPUs most people have, a 2060's fucking massive.
do you see how these incredibly high system requirements (for a webmail client, of all things), alongside Proton already treating the local model as strictly optional, can act as a funnel redirecting people toward the insecure cloud version of the feature? "this feature only works securely on one of the computers where you write mail, at best" feels like a dark pattern to me.
Unfortunately, "extremely expensive" and "high-end" aren't really synonyms, thanks to, y'know, bitcoin. Of course, I don't disagree with your argument that having to buy a GPU just to ensure your webmail does what it's advertised to do is, well, dumb.
What I don't know is what the LLM even is. Did they just tack on Llama to their webmail app and call it a day? Did they train a model? Was it trained on emails? If so, whose emails? What an advertisement that would be: "Use Protonmail to encrypt your emails so that companies like Protonmail can't use them to train an LLM."
David’s article has some details on what the LLM is. I don’t think it’s trained on emails, but that doesn’t make me feel much better.