seang96

joined 2 years ago
[–] seang96@spgrn.com 2 points 8 months ago (2 children)

I thought training a model on AI-generated data reduced its effectiveness? Wouldn't that mean they still did something crazy, since they got the opposite result?

[–] seang96@spgrn.com 4 points 8 months ago* (last edited 8 months ago) (3 children)

If an employer doesn't think I am worth it and won't pay me more unless I ask, I'd rather find one that appreciates what I do and leave. The last place I worked had every VP call me and ask if they could do anything or pay me more. Welp, should've asked me that a year ago mate. They also said misogynistic things, which made it a lot easier to leave lol.

[–] seang96@spgrn.com 2 points 8 months ago

Fines would have to be something crazy, like the TikTok ban's $5,000-per-user type of deal.

[–] seang96@spgrn.com 2 points 8 months ago (2 children)

Sounds similar to OpenID Connect for authentication: the service requests scopes, each of which pulls different info, and the user can be shown a consent screen listing what data is being requested so they can approve it.

I'd like a similar model for data sharing, though you'd still need privacy laws: you can revoke access under this model, but currently nothing would prevent a service from storing a copy of your data elsewhere in the meantime or sharing it.
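For illustration, here's a minimal sketch of what a scoped OIDC authorization request looks like; the endpoint, client ID, redirect URI, and state value are placeholders, not from any particular provider.

```python
# Minimal sketch of an OpenID Connect authorization request (hypothetical
# identity provider, client_id, and redirect_uri; real values come from
# your provider's registration).
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"  # placeholder IdP

params = {
    "response_type": "code",
    "client_id": "my-app",                               # hypothetical client
    "redirect_uri": "https://app.example.com/callback",
    # Each scope maps to a chunk of user data; the IdP renders these on
    # the consent screen so the user can approve or deny the request.
    "scope": "openid profile email",
    "state": "random-csrf-token",                        # anti-CSRF value in practice
}

print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")
```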

[–] seang96@spgrn.com 9 points 8 months ago (1 children)

NZBGeek and NZBPlanet are good. In my experience these two have very similar quality.

DrunkenSlug is a lot harder to get into, but it does occasionally find items the other two don't have. On the downside, it lacks the one-time subscription the other two offer. It does, however, have a free tier, so you can probably still keep it as a last resort for tracking down harder-to-get content.

[–] seang96@spgrn.com 2 points 8 months ago

Docker Hub is the bane of my existence lol. Yesterday I updated every image I use that has GitHub as an alternative, and now I'm giving my rate limit time to go back down. Unfortunately a few are still Docker Hub only, ironically including Lemmy!

[–] seang96@spgrn.com 2 points 8 months ago

Ah yeah, I haven't seen anything on that. That'll probably be next week's headlines lol

[–] seang96@spgrn.com 10 points 8 months ago (2 children)

They aren't necessarily smaller, from my understanding; they just use their parameters more efficiently. Say a model has 600B parameters: when you ask it a programming question, it pulls in the ~37B parameters most related to that and responds using those instead of processing all 600B.

Think of it like a model made of specialized submodels: whichever one is likely to give the best answer for a given question is the one that gets used.
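Roughly what I mean, as a toy sketch (not DeepSeek's actual implementation): a small gate scores every expert for the input and only the top-k experts actually run, so most parameters sit idle for any single query.

```python
# Toy mixture-of-experts routing: only the top-k scored "experts" run,
# so most parameters stay idle for any single input. Purely illustrative;
# real MoE layers do this per token inside the transformer.
import numpy as np

rng = np.random.default_rng(0)
num_experts, dim, top_k = 8, 16, 2

gate_w = rng.normal(size=(dim, num_experts))           # router weights
experts = [rng.normal(size=(dim, dim)) for _ in range(num_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                                 # score every expert
    chosen = np.argsort(scores)[-top_k:]                # keep only the top-k
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                            # normalize over the chosen experts
    # Only the chosen experts' parameters are touched here.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_forward(rng.normal(size=dim))
print(out.shape)  # (16,)
```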

[–] seang96@spgrn.com 7 points 8 months ago (1 children)

But the robot has to run Windows Vista

[–] seang96@spgrn.com 14 points 8 months ago (10 children)

I believe it would have lower operational costs, assuming the model is the only thing that's different and they target the same size. DeepSeek uses the "mixture of experts" approach, which has it run only a subset of its parameters, making it faster / less computationally expensive.

That said, I only have a basic understanding of AI, so maybe my understanding is flawed.
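Back-of-the-envelope with the numbers from above (600B total vs ~37B active), just to show why the active-parameter count is what drives per-query compute; the 2-FLOPs-per-parameter figure is a rough rule-of-thumb assumption, not a measured cost.

```python
# Rough cost comparison, assuming per-token compute scales with the number
# of parameters that actually run (~2 FLOPs per active parameter is a
# common rule of thumb, used here purely as an assumption).
TOTAL_PARAMS = 600e9    # dense model: all parameters run for every token
ACTIVE_PARAMS = 37e9    # MoE model: only the routed experts run

dense_flops_per_token = 2 * TOTAL_PARAMS
moe_flops_per_token = 2 * ACTIVE_PARAMS

print(f"active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")                   # ~6.2%
print(f"compute ratio:   {dense_flops_per_token / moe_flops_per_token:.1f}x")   # ~16.2x
```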

[–] seang96@spgrn.com 2 points 8 months ago* (last edited 8 months ago) (2 children)

I added Renovate to my project over the weekend. I got 26 PRs for updates I had missed, so it is working well for the most part!

The only issue I have with it is that a few Docker images come from Docker Hub, and I am getting 429 response codes when pinning digests. Do you have any tips for Renovate on this? Ideally I'd like it to just update and pin digests on the next update to avoid API hits.

I am using a regex datasource for most of them since my k8s resources are in YAML files, and I found that right now it strips `-alpine` and the like from the version tags... Haven't looked into this issue too much yet, though.
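To illustrate the tag-stripping problem, here's a made-up Python example of the named-capture-group style of regex a custom regex manager relies on; the patterns and image line are mine, not Renovate's built-ins. If the version capture group stops at the first `-`, a suffix like `-alpine` gets dropped, while a broader group keeps it.

```python
# Illustration of why a "-alpine" suffix can get stripped: if the version
# capture group stops at the first "-", the suffix is lost. Both patterns
# are made-up examples for a k8s image line, not Renovate's built-in managers.
import re

line = "image: nginx:1.25.4-alpine@sha256:0123456789abcdef"  # sample YAML line

too_narrow = re.compile(
    r"image:\s*(?P<depName>[\w./-]+):(?P<currentValue>[\d.]+)"   # stops at '-'
)
keeps_suffix = re.compile(
    r"image:\s*(?P<depName>[\w./-]+):(?P<currentValue>[^@\s]+)"  # runs up to the '@' digest
)

print(too_narrow.search(line).group("currentValue"))    # 1.25.4  (suffix lost)
print(keeps_suffix.search(line).group("currentValue"))  # 1.25.4-alpine
```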

[–] seang96@spgrn.com 2 points 8 months ago

If it's at the end forever, it might be because you're putting too much soap in.
