this post was submitted on 15 Aug 2025
40 points (87.0% liked)

Programming


I've seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (one such article was also hating on Ed Zitron, which makes sense, given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I've used LLMs a few times to help me learn simple code syntax quickly (I'm an absolute noob who's wanted my whole life to learn to code but can't grasp it very well). But yes, a lot of the time it's wrong.

top 50 comments
[–] Ledivin@lemmy.world 10 points 5 hours ago* (last edited 5 hours ago)

It's an absolute game-changer, IMO: the research phase of any task is reduced to effectively nothing, and I get massive amounts of work done when I walk away from my desk, because I plan for and keep lists of longer tasks to accomplish during those times.

You need to review every line of code it writes, but that's no different than it ever was when working with junior devs 🤷‍♂️ except now I get the code in minutes instead of weeks, and the agents actually react to my comments.

We're using this with a massive monorepo containing hundreds of thousands of lines of code, and in tiny tool repos that serve exactly one purpose. If our code quality checks and standards weren't as strict as they have been for the past decade, I don't think it would work well with the monorepo.

The important part is that my company is paying for it - I have no clue what these tools cost. I am definitely more productive, there is absolutely no debate there IMO. Is the extra productivity worth the extra cost? I have literally no idea.

[–] MXX53@programming.dev 2 points 4 hours ago

I like using it. Mostly for quick ideation, and also for getting rid of some of the tedious shit I do.

Sometimes it suggests a module or library I have never heard of; then I go and look it up to make sure it is real, not malicious, and well documented.
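A quick sanity check along those lines, if the suggestion is a Python package (just a sketch; it only proves the name exists on PyPI, not that it's safe or well maintained):

```python
import json
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Check that a suggested package is actually published on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp)["info"]
        print(f"{name}: found, latest version {info['version']}")
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI, possible hallucination or typosquat")
            return False
        raise

if __name__ == "__main__":
    exists_on_pypi(sys.argv[1])  # e.g. `python check_pkg.py requests`
```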

I also like using my self-hosted AI to document my code base in a README following a template I provide. It usually gets it 60-80% accurate and close to the form I like; I just edit the rest and correct mistakes. Saves me a ton of time.

I think the best way to use AI is to use it like a tool. Don’t have it write code for you, but use it to enhance your own ability.

[–] ieGod@lemmy.zip 2 points 4 hours ago

I use it to vet ideas, concepts, approaches, and paradigms. It's amazing for rubber ducking. I don't use it for wholesale code gen though.

And as a documentation companion it's pretty rad. Not always right, but it generally gets things going in the correct direction.

[–] communism@lemmy.ml 2 points 5 hours ago

I wouldn't know about professionally as I don't work in the industry, but anecdotally a lot of young people I see use LLMs for everything. Meanwhile in the FOSS community online I see very little of AI/LLMs. I think it's a cultural thing that will vary depending on what circle of people you're looking at.

[–] iglou@programming.dev 5 points 7 hours ago (1 children)

I'm not against AI use in software development... But you need to understand what the tools you use actually do.

An LLM is not a dev. It doesn't have the capability to think on a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about.

An LLM is a predictive tool. So use it as a predictive tool.

  • Boilerplate code? It can do that, yeah. I don't like to use it that way, but it can do that.
  • Implementing a new feature? Maybe, if you're lucky, it has been trained on enough data that it can put something together. But you need to treat its output as completely untrustworthy, and therefore it will require so much reviewing that it's just better to write it yourself in the first place.
  • Implementing something that solves a problem not solved before? Just don't. Use your own brain, for fuck's sake. That's what you have been trained on.

The one use of AI, at the moment, that I actually like and actually improves my workflow is JetBrains' full line completion AI. It very often accurately predicts what I want to write when it's boilerplate-ish, and shuts up when I write something original.

[–] Zexks@lemmy.world -4 points 2 hours ago* (last edited 2 hours ago) (2 children)

Yes, they do have the ability to think and reason just like you (generally much faster and slightly better)

https://medium.com/@leucopsis/how-gpt-5-compares-to-claude-opus-4-1-fd10af78ef90

96% on the AIME with zero tools, only reading the question and reasoning through the answer.

https://www.datacamp.com/blog/gpt-5

[–] speculate7383@lemmy.today 1 points 45 minutes ago* (last edited 43 minutes ago)

No, they can't think and reason. However, they can replicate and integrate the thinking and reasoning of many people who have written about similar problems. And yes, they can do it much faster than we could read a hundred search result pages. And yes, their output looks slightly better than what many of us would write, in many cases, because they are often dispensing best practices by duplicating the writings of experts. (In the best cases, that is.)

https://arxiv.org/pdf/2508.01191

https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/

[–] NewNewAugustEast@lemmy.zip 3 points 1 hour ago* (last edited 1 hour ago)

This is not true. They do not think or reason. They have code that appears to reason, but it definitely isn't reasoning.

Once it gets off track it doesn't consider that it is obviously wrong.

A simple math problem can fail, for example, in ways that are immediately obvious to a human.

[–] Electricd@lemmybefree.net 1 points 5 hours ago

I do, and it's great for small tasks. I wouldn't trust it on an existing code base or with more than a hundred lines of code.

I always review what it does and often cherry pick stuff

The only things I vibe code are small websites / front ends, because fuck HTML, CSS, and JS.

[–] djmikeale@feddit.dk 7 points 14 hours ago

Not total bullshit, but it's not great for all use cases:

For coding tasks, the output looks good on the surface, but often I end up changing stuff, meaning it would have been faster to do it myself.

For coding I know little about (currently writing some GitHub Actions), it's great at explaining alternatives, with pros and cons, to give me a rudimentary understanding of stuff.

I've also used it to transcribe tutorial screencasts, and then afterwards had a secondary LLM use the transcription to generate documentation (include in prompt: "when relevant, generate examples, use markdown tables, generate PlantUML, etc.").

[–] Kolanaki@pawb.social 15 points 19 hours ago (11 children)

I don't see how it could be more efficient to have AI generate something that you then have to review and verify actually works, versus just writing the code yourself, unless you don't know enough to code it yourself and just accept the AI-generated code as-is without further review.

[–] Zexks@lemmy.world -1 points 2 hours ago

So you can type at 300 words per minute with zero mistakes, and you're able to do that on systems you've never worked on before, in languages you've never seen? #Doubt

[–] sobchak@programming.dev 17 points 21 hours ago

In the grand scheme of things, I think AI code generators make people less efficient; some studies have come out that indicate this. I've tried various AI tools, as I do like the AI/ML field in general, but they would end up hampering my work in various ways.

[–] beejjorgensen@lemmy.sdf.org 31 points 23 hours ago (2 children)

I'm pretty sure every time you use AI for programming your brain atrophies a little, even if you're just looking something up. There's value in the struggle.

So they can definitely speed you up, but be careful how you use them. There's no value in a programmer who can only blindly recite LLM output.

There's a balance to be struck in there somewhere, and I'm still figuring it out.

[–] Zexks@lemmy.world 2 points 2 hours ago

This is literally the exact same argument made against using books and developing writing.

[–] 0x1C3B00DA@piefed.social 14 points 20 hours ago (2 children)

I'm pretty sure every time you use AI for programming your brain atrophies a little, even if you're just looking something up. There's value in the struggle.

I assume you were joking, but some studies have come out recently that found this is exactly what happens, and for more than just programming. (Sorry, it was a while ago, so I don't have links.)

[–] Honytawk@lemmy.zip 1 points 7 hours ago

There are similar studies on the effects of watching a Youtube video instead of reading a manual.

[–] ripcord@lemmy.world 8 points 14 hours ago

Doesn't sound like they're joking to me.

[–] Sicklad@lemmy.world 48 points 1 day ago (4 children)

From my experience it's great at doing things that have been done 1000x before (which makes sense, given the training data), but when it comes to building something novel it really struggles, especially if there are third-party libraries involved that aren't commonly used. So you end up spending a lot of time and money hand-holding it through things that likely would have been quicker to do yourself.

[–] kewjo@lemmy.world 14 points 1 day ago (2 children)

the 1000x-before bit has quite a few side effects to it as well:

  • Lesser-used languages suffer because there's not enough training data. This gets annoying quickly when it overrides your static tools and suggests nonsense.
  • Larger training sets contain more vulnerabilities, as most code is pretty terrible and may just be snippets that someone used once and threw away. OWASP has a top 10 for a reason. Take input validation, for example: if I'm working on parsing a string, there's usually context, such as whether the data is trusted or untrusted. If I don't have that mental model where I'm thinking about the data, I might see generated code and think it looks correct, when in reality it's extremely nefarious (see the sketch below).
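To make that concrete, a minimal sketch (the account ID format and the parsing task are invented):

```python
import re

# Hypothetical example: parsing an account ID that may come from an
# untrusted source. "Does this parse?" and "should I trust this?" are
# separate questions, and generated code tends to answer only the first.
ACCOUNT_ID = re.compile(r"[A-Z]{2}-\d{6}")  # e.g. "AB-123456"

def parse_account_id(raw: str, trusted: bool = False) -> str:
    value = raw.strip()
    # Untrusted data must pass the allow-list before we do anything with it.
    if not trusted and not ACCOUNT_ID.fullmatch(value):
        raise ValueError(f"rejecting untrusted input: {value!r}")
    return value
```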
[–] MagicShel@lemmy.zip 6 points 19 hours ago* (last edited 18 hours ago)

It's decent at reviewing its own code, especially if you give it different lenses to look through.

"Analyze this code and look for security vulnerabilities." "Analyze this code and look for ways to reduce complexity."

And then... think about the response like it's a random dude online reviewing your code. A lot of the time it raises good issues, but sometimes it tries too hard to find little shit that is at best a sidegrade.
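Something like this, where `ask_llm` is a stand-in for whatever client you actually use (OpenAI, Ollama, an IDE plugin), not a real API:

```python
from typing import Callable

# Run the same source through several review "lenses" and collect the notes.
LENSES = [
    "Analyze this code and look for security vulnerabilities.",
    "Analyze this code and look for ways to reduce complexity.",
]

def review(source: str, ask_llm: Callable[[str], str]) -> dict[str, str]:
    return {lens: ask_llm(f"{lens}\n\n{source}") for lens in LENSES}
```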

[–] mesamunefire@piefed.social 12 points 23 hours ago

It's also trained on old stuff.

And because it's old, you get some very strange side effects and less maintainability.

[–] Naich 21 points 1 day ago (1 children)

You can either spend your time generating prompts, tweaking them until you get what you want, and then using more prompts to refine the code until you end up with something that does what you want...

or you can just fucking write it yourself. And there's the bonus of understanding how it works.

AI is probably fine for generating boilerplate code or repetitive simple stuff, but personally I wouldn't trust it any further than that.

[–] MagicShel@lemmy.zip 5 points 20 hours ago

There is a middle ground. I have one prompt I use. I might tweak it a little for different technologies, languages, etc. only so I can fit more standards, documentation and example code in the upload limit.

And I ask it questions rather than asking it to write code. I have it review my code, suggest other ways of doing something, have it explain best practices, ask it to evaluate the maintainability, conformance to corporate standards, etc.

Sometimes it takes me down a rabbit hole when I'm outside my experience (so does Google and stack overflow for what it's worth), but if you're executing a task you understand well on your own, it can help you do it faster and/or better.

[–] fubarx@lemmy.world 8 points 20 hours ago

I use it mainly to tweak things I can't be bothered to dig into, like Jekyll or WordPress templates. A few times I let it run and do a major refactor of some async back-end code. It botched the whole thing. Fortunately, it was easy to rewind everything from the remote git repo.

Last week I started a brand-new project and thought I'd have it write the boilerplate starter code. I described in detail what I was looking for. It sat there for ten minutes saying 'Thinking' and nothing happened. I killed it and created it myself. This was with Cursor using Claude. I've noticed it's gotten worse lately, maybe because of the increased costs.

[–] asm@programming.dev 5 points 20 hours ago

I'm somewhat new to the field (~1.5 years), so my opinion doesn't hold too much weight.

But in the embedded field I've found AI to not be as helpful as it seems to be for many others. The one BIG thing it has helped me with: I can give it a data sheet and it'll spit out all the register fields that I need, or help me quickly find information that I'm too lazy to Ctrl-F for, which saves a couple of minutes.
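The kind of output I mean looks something like this, Python-flavored for a host-side test script (the peripheral and its field layout are entirely made up):

```python
# Field positions and masks for a hypothetical TIMER_CTRL register,
# transcribed from a datasheet table. The win is not typing dozens
# of these by hand.
TIMER_CTRL_EN_POS,    TIMER_CTRL_EN_MASK    = 0, 0x0000_0001  # counter enable
TIMER_CTRL_MODE_POS,  TIMER_CTRL_MODE_MASK  = 1, 0x0000_0006  # 0=one-shot, 1=periodic
TIMER_CTRL_PRESC_POS, TIMER_CTRL_PRESC_MASK = 8, 0x0000_FF00  # clock prescaler

def decode_ctrl(value: int) -> dict[str, int]:
    """Split a raw TIMER_CTRL read into named fields (handy in test scripts)."""
    return {
        "EN":    (value & TIMER_CTRL_EN_MASK)    >> TIMER_CTRL_EN_POS,
        "MODE":  (value & TIMER_CTRL_MODE_MASK)  >> TIMER_CTRL_MODE_POS,
        "PRESC": (value & TIMER_CTRL_PRESC_MASK) >> TIMER_CTRL_PRESC_POS,
    }
```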

It has not proven its worth when it comes to the firmware itself. I've tried to get it to instantiate some peripheral instances and they never ended up working, no matter how I phrased the prompt or what context I gave it.

[–] atzanteol@sh.itjust.works 7 points 22 hours ago

The hate is ridiculous as is the hype.

It's a new tool that is often useful when used correctly. Don't use it to write entire applications - that's a recipe for disaster.

But if you're learning a new language it's amazing. You have an infinitely patient and immediately available tutor that can teach you a language's syntax, best practices, etc. I don't know why that would make you "ill" besides all the shame "real developers" seem to want to lump on anybody who uses AI. If you're not concerned about passing some "I don't use an IDE" nerd's purity test you'll be fine.

[–] criss_cross@lemmy.world 12 points 1 day ago

From my experience it’s really great at bootstrapping new projects for you. It’s good at getting you sample files and if you’re using cursor just building out a sample project.

It's decent as an alternative to Google/SO for syntax or previously encountered errors. There are a few things it hallucinates, but generally it can save time as long as you don't trust it blindly.

It struggles when you give it complex tasks or not-straightforward items, or things that require a lot of domain knowledge. I once wanted to see which CSS classes were still in use across a handful of React components, and it just shat the bed.

The people who champion AI as a human replacement will build a quick proof of concept with it and proclaim "oh shit this is awesome!", not realizing that that's the easy part of software engineering.

[–] VoterFrog@lemmy.world 5 points 21 hours ago (1 children)

My favorite use is actually just to help me name stuff. Give it a short description of what the thing does and get a list of decent names. Refine if they're all missing something.

Also useful for finding things quickly in generated documentation, by attaching the documentation as context. And I use it when trying to remember some of the more obscure syntax stuff.

As for coding assistants, they can help quickly fill in boilerplate or maybe autocomplete a line or two. I don't use it for generating whole functions or anything larger.

So I get some nice marginal benefits out of it. I definitely like it. It's got a ways to go before it replaces the programming part of my job, though.

[–] leftzero@lemmy.dbzer0.com 1 points 9 hours ago

My favorite use is actually just to help me name stuff.

Reverse dictionary lookup, more or less.

Now, that is something LLMs should be actually good at, unlike practically any other thing they're being sold as being good at.

[–] TabbsTheBat@pawb.social 11 points 1 day ago

The truth, as always, is in the middle. Plenty of programmers hate it, and plenty of programmers like it. It just so happens that these kinds of spaces attract more of the hate crowd, and paid shill articles attract more of the like crowd ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯
