this post was submitted on 12 Sep 2025
1056 points (98.9% liked)


Not even close.

With so many wild predictions flying around about the future of AI, it's important to occasionally take a step back and check which ones came true and which haven't come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI's output, tweaking prompts, and waiting for the system to spit out code.

And AI-generated code hasn't merely missed Amodei's benchmarks. In some cases, it's actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That's causing issues at a growing number of companies, opening up never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

50 comments
[–] petrjanda@gonzo.markets 6 points 1 day ago

I agree with everyone else. The only thing that A(Non)I is good for is writing bullshit and making it sound intelligent; deep inside there is no intelligence, it's all artificial. It's semi-useful for background research because of its ability to index huge amounts of data, but ultimately everything it produces has to be verified by a human.

[–] demizerone@lemmy.world 6 points 1 day ago

I was wondering why the context had gotten so bad recently. Apparently they reduced the context and hid the old limit behind a button in cursor called "Max" that costs more money. This shit is bleeding out.

[–] Salvo@aussie.zone 6 points 1 day ago

90% of non-functional code, maybe.

[–] blockheadjt@sh.itjust.works 6 points 1 day ago

Does it really count if most of that "code" is broken and unused?

Churning out 9x as much code as humans isn't really impressive if it just sits in a folder waiting to be checked for bugs

[–] Bonesince1997@lemmy.world 5 points 1 day ago (1 children)

I think we're already supposed to be on Mars, too, according to some predictions from years ago. People can't tell these things very well.

[–] Valmond@lemmy.world 2 points 1 day ago

At least we have self driving cars!

/s

[–] m33@lemmy.zip 5 points 1 day ago

That’s 90% true: today AI is writing 90% of all bullshit I read

[–] Appoxo@lemmy.dbzer0.com 3 points 1 day ago

Maybe 90% is written by AI, but also 90% is edited back after AI fucked it up ¯\_(ツ)_/¯

[–] VoterFrog@lemmy.world -3 points 1 day ago (3 children)

Definitely depends on the person. There are definitely people who are getting 90% of their coding done with AI. I'm one of them. I have over a decade of experience and I consider coding to be the easiest but most laborious part of my job so it's a welcome change.

One thing that's really changed the game recently is RAG and tools with very good access to our company's data. Good context makes a huge difference in the quality of the output. For my latest project, I've been using 3 internal tools. An LLM browser plugin which has access to our internal data and lets you pin pages (and docs) you're reading for extra focus. A coding assistant, which also has access to internal data and repos but is trained for coding. Unfortunately, it's not integrated into our IDE. The IDE agent has RAG where you can pin specific files, but without broader access to our internal data, its output is a lot poorer.

So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that's all reviewed and approved by everyone involved, we move into task breakdown and implementation.

First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I'm not interested in code, just a translation of design ideas and requirements into actionable steps (even if you don't have the same setup as me, give this a try. Asking an LLM to reason its way through a guide helps it handle a lot more complicated tasks). Then, I pass that to the coding assistant for code creation, including any relevant files as context. That code gets copied to the IDE. The whole process takes a couple minutes at most and that gets you like 90% there.

Next is to get things compiling. This is either manual or in iteration with the coding assistant. Then before I worry about correctness, I focus on the tests. Get a good test suite up and it'll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, then it's time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.

All in all, I'm still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.

[–] xxce2AAb@feddit.dk 3 points 1 day ago

It's not just Musk, these people are all high as satellites successfully inserted into LEO.

[–] DaddleDew@lemmy.world 3 points 1 day ago

Picking up a few pages out of Elmo's book I see. He forgot the part where he distracts from the blatant underdelivery with more empty exaggerated promises!

[–] alexdeathway@programming.dev 2 points 1 day ago

Didn't Mark say something like this too?
