thickertoofan

joined 1 week ago
[–] thickertoofan@lemm.ee 1 points 11 hours ago

I mean, I didn't see any pressing need for a Google Docs alternative, so I might actually be living under a rock.

[–] thickertoofan@lemm.ee 3 points 1 day ago

I am not a bot trust me.

[–] thickertoofan@lemm.ee 8 points 1 day ago

taste of his own medicine

[–] thickertoofan@lemm.ee 2 points 1 day ago

I checked out most of them from the list, but 1B models are generally unusable for RAG.

[–] thickertoofan@lemm.ee 3 points 3 days ago (2 children)

I use PageAssist with Ollama.

 

I don't care much about mathematical tasks, and code intelligence is only a minor preference; what I most want is overall comprehension and intelligence (for RAG and large-context handling). But anyway, what I'm searching for is an up-to-date benchmark covering a wide variety of models.

[–] thickertoofan@lemm.ee 1 points 5 days ago

Yeah, well, anyway: you see rich people do worse shit right in front of you, and yet we can't change anything.

[–] thickertoofan@lemm.ee 6 points 6 days ago

Same. Welcome here

[–] thickertoofan@lemm.ee 6 points 6 days ago

Wow, reddit sucks.

[–] thickertoofan@lemm.ee 1 points 6 days ago

We can use the same test name as proposed by a user in the original post's comment: Odd-straw-in-the-haystack :)

 

I tested this (reddit link btw) with Gemma 3 at both the 1B and 3B parameter sizes. 1B failed (not surprising), but 3B passed, which is genuinely surprising. I added a random paragraph about Napoleon Bonaparte (just a random subject) and slipped "My password is = xxx" into the middle of it. Gemma 1B couldn't spot it at all, but Gemma 3B found it without even being asked. There's a catch, though: Gemma 3 treated the password statement as a historical fact about Napoleon, lol. Anyway, passing is a genuinely nice achievement for a 3B model, I guess, and it was a single, moderately large paragraph for the test. I accidentally wiped the chat, otherwise I would have attached the exact prompt here. Tested locally using Ollama and the PageAssist UI. My setup: GPU-poor category, CPU inference with 16 GB of RAM.
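A minimal sketch of this kind of haystack test, assuming a local Ollama server with the model already pulled; the filler sentences and needle below are stand-ins, since the exact prompt from my test was lost:

```python
import json
import urllib.request

def build_haystack_prompt(filler_sentences, needle):
    """Hide a 'needle' sentence in the middle of filler text and
    ask the model to recall the password it contains."""
    mid = len(filler_sentences) // 2
    haystack = " ".join(filler_sentences[:mid] + [needle] + filler_sentences[mid:])
    return haystack + "\n\nWhat is the password mentioned in the text above?"

def ask_ollama(prompt, model="gemma3:1b"):
    """Send a single non-streaming generate request to a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

filler = [
    "Napoleon Bonaparte rose to prominence during the French Revolution.",
    "He crowned himself Emperor of the French in 1804.",
    "His campaigns reshaped the map of Europe.",
    "He was finally defeated at Waterloo in 1815.",
]
prompt = build_haystack_prompt(filler, "My password is = xxx.")
# print(ask_ollama(prompt))  # requires a running Ollama server
```

Swapping the `model` argument between the 1B and 3B tags makes it easy to rerun the same needle test on both sizes.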

 

I see this error when I'm trying to upload an icon image for a community I've recently created:

{"data":{"error":"pictrs_response_error","message":"Your account is too new to upload images"},"state":"success"}

I suppose, if the state of the upload was "success" and the API output is correct, the image either got uploaded or was denied after upload. If that's what happens, there's room for an improvement: the permission check should run before the image upload happens, which would save bandwidth (negligible here, but I don't know whether the same thing happens in other places, like image posts). If the upload is in fact rejected up front, then the API has a bug of returning a false status message. It's one of those two cases, I'm not sure which; just discussing here before raising an enhancement issue on the GitHub repo.
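A rough sketch of the kind of pre-upload permission check I mean (hypothetical names and threshold, not the actual Lemmy/pictrs code): the handler checks account age before accepting the file body, so a too-new account is rejected without any bytes being transferred or stored.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold; the real value would live in server config.
MIN_ACCOUNT_AGE = timedelta(days=7)

def can_upload_images(account_created_at, now=None):
    """Return True if the account is old enough to upload images.

    Running this in the API handler *before* the upload body is
    accepted avoids wasting bandwidth on an upload that will be
    denied anyway, and avoids the misleading "success" state.
    """
    now = now or datetime.now(timezone.utc)
    return now - account_created_at >= MIN_ACCOUNT_AGE

# Example: a 1-day-old account gets rejected before any upload starts.
created = datetime.now(timezone.utc) - timedelta(days=1)
print(can_upload_images(created))  # False
```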

 

Join if you want to have some geek discussions about it, or to ask for or offer help.

!flask@lemm.ee

 

I'm new to Lemmy. I couldn't find any community related to Flask, so I created one. Would love to have moderators and people who can make the experience better for this community.
