zbyte64

joined 1 year ago
[–] zbyte64@awful.systems 32 points 9 hours ago* (last edited 9 hours ago)

Don't bother with this guy; @ArbitraryValue@sh.itjust.works has done some mental gymnastics to justify starving the Palestinian population. So it really isn't surprising they would want to vote for the most ghoulish option.

[–] zbyte64@awful.systems 1 points 15 hours ago (1 children)

When requirements are "Whatever" then by all means use the "Whatever" machine: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.

[–] zbyte64@awful.systems 1 points 15 hours ago* (last edited 15 hours ago) (3 children)

Literally the opposite of my experience when I helped material scientists with their R&D. Breaking production would mean people who get paid 2x more than me were suddenly unable to do their jobs. But then again, our requirements made sense because we would literally sit down with the engineers and walk through the manual process we were automating. What you describe sounds like hell to me. There are greener pastures.

[–] zbyte64@awful.systems 2 points 16 hours ago

> The stock market makes no sense to me.

That's because "The market can remain irrational longer than you can remain solvent."

[–] zbyte64@awful.systems 2 points 17 hours ago* (last edited 16 hours ago) (5 children)

Maybe it's because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. And when it doesn't work, I find it's easier to debug your own code than someone else's, and that includes AI's.

[–] zbyte64@awful.systems 3 points 17 hours ago

Why would you ever yell at an employee unless you're bad at managing people? And you think you can manage an LLM better because it doesn't complain when you're obviously wrong?

[–] zbyte64@awful.systems 4 points 22 hours ago* (last edited 22 hours ago) (2 children)

A junior developer actually learns from doing the job; an LLM only "learns" when its makers update the training corpus and train a new model.

[–] zbyte64@awful.systems 4 points 22 hours ago (8 children)

> It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I usually write 3x the code to test the code itself. Verification is often harder than implementation.
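To make that ratio concrete, here's a minimal sketch: a hypothetical `parse_duration` helper (every name and detail here is illustrative, not something from the thread) that takes about ten lines, next to the pytest-style tests it takes to actually trust it, which run roughly three times that.

```python
import re

def parse_duration(text: str) -> int:
    """Parse a duration like '1h30m15s' into total seconds."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", text)
    if not match or not any(match.groups()):
        raise ValueError(f"invalid duration: {text!r}")
    hours, minutes, seconds = (int(g or 0) for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds


# ...and roughly 3x as many lines to prove it works.
import pytest

def test_full_duration():
    assert parse_duration("1h30m15s") == 5415

def test_single_units():
    assert parse_duration("2h") == 7200
    assert parse_duration("45m") == 2700
    assert parse_duration("90s") == 90

def test_units_can_be_omitted():
    assert parse_duration("1h5s") == 3605

def test_rejects_empty_string():
    with pytest.raises(ValueError):
        parse_duration("")

def test_rejects_garbage():
    with pytest.raises(ValueError):
        parse_duration("soon")

def test_rejects_out_of_order_units():
    # '30m1h' should not silently parse; units must come in h/m/s order
    with pytest.raises(ValueError):
        parse_duration("30m1h")
```

And that's before you get into property-based or integration tests. Verifying takes patience precisely because it's more work than positing.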

[–] zbyte64@awful.systems 7 points 23 hours ago

DOGE has entered the chat

[–] zbyte64@awful.systems 3 points 23 hours ago (2 children)

When LLMs get it right, it's because they're summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefit of other humans commenting on the context, the pitfalls, and the alternatives.

[–] zbyte64@awful.systems 3 points 23 hours ago

Pepperidge Farm remembers when you could just do a web search and get your question answered in the first couple of results. Then the SEO wars happened....

[–] zbyte64@awful.systems 0 points 23 hours ago

I call it colonial trauma. Our forefathers would rape the help, and the worst that would happen is getting berated by the judge.


A critical and funny critique of an AI-written song.
