[–] MudMan@fedia.io 19 points 1 month ago (11 children)

It's pretty random in terms of what is or isn't doable.

For me it's a big performance booster because I genuinely suck at coding and don't do too much complex stuff. As a "clean up my syntax" and a "what am I missing here" tool it helps, or at least helps in figuring out what I'm doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn't do without one (or at least without getting berated by some online dick who doesn't think he has time to give you an answer but sure has time to set you on a path towards self-discovery).
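To give an idea of the level I'm talking about, the "what am I missing here" check is really just something like this (a rough sketch, assuming a local Ollama server; the model name and the snippet are placeholders, not a recommendation):

```python
# Rough sketch: ask a locally running model "what am I missing here?"
# Assumes an Ollama server on its default port; the model name is a placeholder.
import json
import urllib.request

SNIPPET = '''
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)   # what am I missing here?
'''

payload = {
    "model": "codellama",  # placeholder model name
    "prompt": "Review this Python snippet and point out bugs or edge cases:\n" + SNIPPET,
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If it points out that an empty list blows up the division, that's the kind of catch I mean.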

How much of a benefit it is for a professional I couldn't tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented quickly? Redundant tasks in very commonplace mainstream languages?

I don't think it's useless, but if you ask it to do something by itself you can't trust that it'll work without significant additional effort.

[–] wise_pancake@lemmy.ca 8 points 1 month ago (4 children)

It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.
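For what it's worth, the "look through a file before committing" pass can be as simple as something like this (a rough sketch, assuming a local Ollama server; the model name is a placeholder, not any particular tool):

```python
# Sketch of a "look over my staged changes before committing" pass.
# Assumes a local Ollama server; the model name is a placeholder.
import json
import subprocess
import urllib.request

def ask_local_model(prompt: str) -> str:
    payload = {"model": "codellama", "prompt": prompt, "stream": False}  # placeholder model
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Feed whatever is staged for commit to the model before opening a pull request.
diff = subprocess.run(
    ["git", "diff", "--cached"], capture_output=True, text=True, check=True
).stdout

if diff.strip():
    print(ask_local_model(
        "Check this diff for misspelled identifiers and obvious mistakes "
        "before I open a pull request:\n" + diff
    ))
else:
    print("Nothing staged.")
```

Wire it into a pre-commit hook or just run it by hand before opening the PR.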

It’s very useful for throwaway work like writing scripts and automations.

It’s useful, but not a 10x multiplier like all the CEOs claim it is.

[–] MudMan@fedia.io 2 points 1 month ago (3 children)

Fully agreed. Everybody is betting it'll get there eventually and jockeying for position ahead of the pack, but at the moment there's no guarantee it'll get to where the corpos are assuming it already is.

Which is not the same as saying we don't have better autocomplete/spellcheck/"hey, how do I format this specific thing" tools.

[–] jcg@halubilo.social 2 points 1 month ago (1 children)

I think the main barriers are useful context length and the data just not existing. GPT-4o advertises "128k context", but it's mostly sensitive to the beginning and end of the window and blurry in the middle, which is consistent with other LLMs. And how many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of "how to split a string in bash" or "how to set up validation in Spring Boot". We might "get there", but it'll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans.

[–] MudMan@fedia.io 1 points 1 month ago

I don't know, some of these guys have access to a LOT of code, and even more debate about what those good codebases entail.

I think the other issue is more relevant. Even 128K tokens is not enough for something really big, and the memory and processing costs for that do skyrocket. People are trying to work around it with draft models and summarization models, picking out the relevant parts of a codebase in one pass and then basing the code generation just on that, and... I don't think that's going to work reliably at scale. The more chances you give a language model to lose its goddamn mind and start making crap up unsupervised, the more work it's going to be to take what it spits out and shape it into something reasonable.
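Roughly, the workaround I mean looks like this (a toy sketch; ask_model() is a stand-in for whatever summarizer/generator pair a real tool wires up, not anyone's actual API):

```python
# Toy sketch of the "summarize first, then generate" workaround for small context windows.
# ask_model() is a stand-in for any LLM call (e.g. a local server); not a real library API.
from pathlib import Path

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call of choice here")

def two_pass_codegen(repo_root: str, task: str) -> str:
    # Pass 1: ask a (cheaper) model to summarize each file, hoping it keeps
    # the parts relevant to the task. Each summary is an unsupervised step
    # where details can be dropped or invented.
    summaries = []
    for path in Path(repo_root).rglob("*.py"):
        source = path.read_text(errors="ignore")
        summaries.append(
            f"# {path}\n" + ask_model(
                f"Summarize the parts of this file relevant to: {task}\n\n{source}"
            )
        )

    # Pass 2: generate code from the summaries alone, because the full
    # codebase would never fit in the context window.
    context = "\n\n".join(summaries)
    return ask_model(
        f"Using only these summaries of the codebase:\n{context}\n\n"
        f"Write the code for this task: {task}"
    )
```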
