this post was submitted on 15 Aug 2025
50 points (86.8% liked)

I've seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which makes sense given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I've used LLMs a few times to help me learn simple code syntax quickly (I'm an absolute noob who's wanted my whole life to learn code but can't grasp it very well). But yes, a lot of the time it's wrong.

[–] pupbiru@aussie.zone 0 points 21 hours ago* (last edited 21 hours ago)

> Implementing a function isn't for a "fancy autocomplete", it's for a brain to do. Unless all you do is reinventing the wheel, then yeah, it can generate a decent wheel for you.

pretty much every line of code we write in modern software isn’t unique… we use so many orders of magnitude more lines of other people’s code than our own, we’re really just plumbing pipes together

most functions we write that aren’t business logic specific to the problem domain of our software (and even sometimes then) have been written before… the novel part isn’t in the function body: the low level instructions… the novel part is how those instructions are structured… that may as well be pseudocode, and that pseudocode may as well take the form of function headers
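to make that concrete (a totally made-up sketch, nothing from a real codebase): the design lives in the signatures and how they compose; the bodies are the mechanical part

```swift
// hypothetical example: the "novel" part is that these functions exist
// and compose this way — the bodies are the boring, fill-in-the-blanks part
struct LineItem {
    let description: String
    let unitPriceCents: Int
    let quantity: Int
}

struct Invoice {
    let id: String
    let lineItems: [LineItem]
}

// the headers are effectively the pseudocode of the design
func subtotal(of invoice: Invoice) -> Int {
    invoice.lineItems.reduce(0) { $0 + $1.unitPriceCents * $1.quantity }
}

func applyDiscount(_ cents: Int, percent: Int) -> Int {
    cents - (cents * percent) / 100
}

func total(of invoice: Invoice, discountPercent: Int) -> Int {
    applyDiscount(subtotal(of: invoice), percent: discountPercent)
}
```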

> Fuck no. If it gets the test wrong, it won't necessarily fail. It might very well pass even when it should fail, and that's something you won't know unless you review every single line it spits out. That's one of the worst areas to use an LLM.

write tests, tests fail, write code, tests slowly start to pass until you’re done… this is how we’ve always done TDD because it ensures the tests fail when they should. this is a good idea with or without LLMs because humans fuck up unit tests all the time
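tiny made-up example of what that looks like: the test exists before the function body does, fails, and only goes green once the implementation actually behaves

```swift
import Foundation
import XCTest

// hypothetical function under test — written *after* the test below existed
func slugify(_ title: String) -> String {
    title.lowercased()
        .map { $0.isLetter || $0.isNumber ? $0 : "-" }
        .reduce(into: "") { result, ch in
            // collapse runs of separators into a single dash
            if ch == "-" && result.hasSuffix("-") { return }
            result.append(ch)
        }
        .trimmingCharacters(in: CharacterSet(charactersIn: "-"))
}

final class SlugifyTests: XCTestCase {
    func testCollapsesPunctuationAndSpaces() {
        // this failed (red) until slugify's body was right
        XCTAssertEqual(slugify("Hello, World!"), "hello-world")
        XCTAssertEqual(slugify("  LLMs & TDD  "), "llms-tdd")
    }
}
```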

> I'm not sure what you mean by that.

for example, you have an external API of some kind with an enum expressed via JSON as a string and you want to implement that API including a proper Enum object… an LLM can more easily generate that code than i can, and the longer the list of values the more cumbersome the task gets
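as a sketch (hypothetical api, made-up field names), this is the kind of mechanical mapping i mean:

```swift
import Foundation

// hypothetical API: a JSON field like {"status": "awaiting_payment"}
// mapped to a real enum instead of passing raw strings around
enum OrderStatus: String, Codable {
    case awaitingPayment = "awaiting_payment"
    case processing      = "processing"
    case shipped         = "shipped"
    case delivered       = "delivered"
    case cancelled       = "cancelled"
    // ...imagine another 40 of these copied out of someone else's docs
}

struct Order: Codable {
    let id: String
    let status: OrderStatus
}

let json = #"{"id": "ord_123", "status": "shipped"}"#.data(using: .utf8)!
let order = try! JSONDecoder().decode(Order.self, from: json)
// order.status == .shipped, and an unknown string fails loudly at decode time
```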

especially effective for generating API wrappers because they basically amount to function some_name -> api client -> call /api/someName
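e.g. something shaped like this (made-up endpoint and types, just to show the plumbing):

```swift
import Foundation

// hypothetical endpoint: GET /api/listCameras returning a JSON array;
// the wrapper is pure plumbing from a swifty name to a url and a decode
struct Camera: Codable {
    let id: String
    let model: String
}

struct APIClient {
    let baseURL: URL

    func listCameras() async throws -> [Camera] {
        let url = baseURL.appendingPathComponent("api/listCameras")
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([Camera].self, from: data)
    }

    // ...plus a few dozen more methods, all following the same mechanical pattern
}
```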

this is basically a data transformation problem: translate from some structure to a well-defined chunk of code that matches the semantics of your language of choice

this is annoying for a human, and an LLM can smash out a whole type safe library in seconds based on little more than plain english docs

it might not be 100% right, but the price for failure is an error that you’ll see and can fix before the code hits production

and of course it’s better to generate all this using swagger specs, but they’re not always available and tend not to follow language conventions quite so well

for a concrete example, i wanted to interact with blackmagic pocket cinema cameras via bluetooth in swift on ios: something they don’t provide an SDK for… they do, however document their bluetooth protocols

https://documents.blackmagicdesign.com/UserManuals/BlackmagicPocketCinemaCameraManual.pdf?_v=1742540411000

(page 157 if you’re interested)

it’s incredibly cumbersome, and basically involves packing binary data into a packet that represents a different protocol called SDI… this would have been horrible to try and work out on my own, but with the general idea of how the protocol worked, i structured the functions, wrote some test cases using the examples they provided, handed chatgpt the pdf, and used it to help me with the bitbanging nonsense and translating their commands and positionally placed binaries into actual function calls
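the code ends up roughly this shape (field layout here is illustrative only — the real ids, offsets and padding rules are in the pdf, don't copy this):

```swift
import Foundation

// very simplified sketch of the byte-packing involved; names and layout
// are made up for illustration, not taken from the Blackmagic manual
struct CameraCommand {
    let destination: UInt8   // which camera on the chain
    let category: UInt8      // e.g. video, audio, lens…
    let parameter: UInt8     // which setting within the category
    let dataType: UInt8      // how to interpret the payload
    let operation: UInt8     // assign vs. offset, etc.
    let payload: [UInt8]

    func encoded() -> Data {
        var body: [UInt8] = [category, parameter, dataType, operation]
        body.append(contentsOf: payload)

        var packet: [UInt8] = [destination, UInt8(body.count), 0x00, 0x00]
        packet.append(contentsOf: body)

        // pad out to a 4-byte boundary before it goes over the wire
        while packet.count % 4 != 0 { packet.append(0x00) }
        return Data(packet)
    }
}

// e.g. a made-up "set parameter 0x02 in category 0x01 to 5" command
let bytes = CameraCommand(destination: 1, category: 0x01, parameter: 0x02,
                          dataType: 0x01, operation: 0x00,
                          payload: [5]).encoded()
```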

could i have done it? sure, but why would i? chatgpt did in 10 seconds what probably would have taken me at least a few hours of copying data from 7 pages of a table in a pdf - a task i don’t enjoy doing, in a language i don’t know very well