[-] FarceOfWill@infosec.pub 10 points 3 months ago

Until someone uses it for a little more than boilerplate, and the reviewer nods that bit through because it's hard to review and it's not the kind of mistake a human (the person who supposedly "wrote" it) would make.

Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.
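
As a sketch of what "explicitly marked" could look like in practice (an illustration only, assuming Python and a hypothetical team-agreed `ai_generated` decorator; the comment doesn't prescribe any particular mechanism):

```python
# Hypothetical convention (an assumption, not something the thread prescribes):
# tag AI-generated functions with a decorator so reviewers and tooling can
# find them and give them extra scrutiny.
import functools


def ai_generated(model: str):
    """Mark a callable as AI-generated, recording which model drafted it."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.__ai_generated__ = model  # provenance metadata for review tooling
        return wrapper
    return decorator


@ai_generated(model="example-llm")  # hypothetical model name
def normalize_path(path: str) -> str:
    """Toy stand-in for a routine an LLM might have drafted."""
    return path.replace("\\", "/").rstrip("/")


if __name__ == "__main__":
    # A reviewer or CI step can check provenance before deciding how hard to look.
    print(getattr(normalize_path, "__ai_generated__", "human-written"))
```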

[-] admin@lemmy.my-box.dev 6 points 3 months ago

Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

Undoubtedly. Hell, even when you do mark it as such, this will happen, because bugs created by humans also get deployed.

Basically what you're saying is that code review is not a guarantee against shipping bugs.

[-] HauntedCupcake@lemmy.world 1 point 3 months ago* (last edited 3 months ago)

Agreed, using LLMs for code requires you to be an experienced dev who can understand what it pukes out. And for those very specific and disciplined people it's a net positive.

However, generally, I agree it's more risk than it's worth
