this post was submitted on 12 Sep 2025
962 points (99.1% liked)

Technology

Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And AI-generated code hasn't merely missed Amodei's benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, leading to never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infectious diseases, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

top 50 comments
[–] greedytacothief@lemmy.dbzer0.com 4 points 1 hour ago (1 children)

I'm not sure how people can use AI to code, granted I'm just trying to get back into coding. Most of the times I've asked it for code, it's either been confusing or wrong. If I go through the trouble to write out docstrings and then fix what the AI has written, it becomes more doable. But don't you hate the feeling of not understanding what the code you've written does, or more importantly why it's been done that way?
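
Something like this docstring-first pattern is what I mean (a made-up example, not code from the thread): write the contract by hand, then check whatever the AI generates against it.

```python
def chunk(items: list, size: int) -> list:
    """Split items into consecutive sublists of length size.

    The last sublist may be shorter. Raises ValueError if size < 1.
    Writing this spec yourself first gives you something concrete
    to verify the AI-generated body against.
    """
    if size < 1:
        raise ValueError("size must be >= 1")
    # The body below is the part you'd let the AI draft, then review.
    return [items[i:i + size] for i in range(0, len(items), size)]
```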

AI is only useful if you don't care about what the output is. It's only good at making content, not art.

[–] Hackworth@sh.itjust.works 0 points 47 minutes ago

I'm a video producer who occasionally needs to code. I find it much more useful to write the code myself, then have AI identify where things might be going wrong. I've developed a decent intuition for when it will be helpful and when it will just run in circles. It has definitely helped me out of some jams. Generative images/video are in much the same boat. I almost never use a fully AI shot/image in professional work. But generative fill and generative extend are extremely useful.

[–] zarkanian@sh.itjust.works 13 points 11 hours ago

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

You can't tell me these things don't have a sense of humor. This is beautiful.

[–] renrenPDX@lemmy.world 3 points 11 hours ago (1 children)

It's not just code, but day-to-day shit too. Lately, corporate communications and even training modules feel heavily AI-generated. Things like unnecessary em dashes (I'm talking as many as 4 out of 5 sentences in a single paragraph) and repeated statements or bullet points in training modules. We're being encouraged to use our "private" Copilot for everyday tasks, and everything is Copilot-enabled.

I don't mind if people use it, but it's dangerous and stupid to think that it produces near perfect results every time. It's been good enough to work as an early rough draft or something similar, but it REQUIRES scrutiny and refinement by hand. It's like it can get you from nothing to 60-80% there, but never higher. The quality of output can vary significantly from prompt to prompt in my limited experience.

[–] Evotech@lemmy.world 4 points 7 hours ago

Yeah, I try to use ai a fair bit in my work. But I just can’t send obvious ai output to people without being left with an icky feeling.

[–] philosloppy@lemmy.world 8 points 14 hours ago

The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy's prognostications on his company's products perhaps they should work on being less credulous.

[–] RedFrank24@lemmy.world 25 points 18 hours ago (3 children)

Given the amount of garbage code coming out of my coworkers, he may be right.

I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I'd written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn't perfect by any means, but I can at least tell you what it's doing.

[–] NikkiDimes@lemmy.world 3 points 11 hours ago (2 children)

That's insane. Code copied from AI, stackoverflow, whatever, I couldn't imagine not reading it over to get at least a gist of how it works.

[–] DacoTaco@lemmy.world 1 points 2 hours ago

It's imo the difference between being a code junkie and a senior dev/architect :/

Insane? Nah, that's just laziness, and surprisingly effective at keeping a job for some amount of time

[–] Patches@ttrpg.network 18 points 17 hours ago* (last edited 17 hours ago) (3 children)

To be fair.

You could've asked some of those coworkers the same thing 5 years ago.

All they would've mumbled was "Something, something... Stack Overflow... Found a package that does everything BUT..."

And delivered equal garbage.

[–] foenkyfjutschah@programming.dev 2 points 3 hours ago (1 children)

yes, but it's way more energy efficient to produce that garbage.

is the garbage per hour higher though?

[–] orrk@lemmy.world 5 points 16 hours ago

no, generally the package would still be better than whatever the junior did, or the AI does now

[–] RedFrank24@lemmy.world 2 points 14 hours ago (1 children)

I like to think there's a bit of a difference between copying something from stackoverflow and not being able to read what you just pasted from stackoverflow.

Sure, you can be lazy and just paste something and trust that it works, but if someone asks you to read that code and know what it's doing, you should be able to read it. Being able to read code is literally what you're paid for.

[–] MiddleAgesModem@lemmy.world 3 points 12 hours ago

The difference you're talking about is making an attempt to understand versus blindly copying, not using AI versus stackoverflow

[–] HugeNerd@lemmy.ca 6 points 17 hours ago

No one really knows what code does anymore. Not like in the days of 8-bit CPUs and 64K of RAM.

[–] bluesheep@sh.itjust.works 4 points 12 hours ago

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

You must be delusional to believe this

[–] clif@lemmy.world 11 points 16 hours ago

Oh, it's writing 100% of the code for our management-level people who are excited about """"AI""""

But then us plebes are rewriting 95% of it so that it will actually work (decently well).

The other day somebody asked me for help on a repo that a higher-up had shit-coded, because they couldn't figure out why it "worked" but also logged a lot of critical errors... It was starting the service twice (for no reason), binding both to the same port, so the second instance crashed and burned. That's something a novice would probably know not to do, and if they didn't, they'd immediately see the problem, research it, understand it, and fix it, instead of "I *cough* built *cough* this thing, good luck fuckers."
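
For the curious, that failure mode takes about ten lines to reproduce (a sketch in Python; the comment doesn't name the repo's actual stack, so the language and port here are assumptions):

```python
import socket

def start_service(port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", port))  # first bind claims the port
    sock.listen()
    return sock

first = start_service(8080)       # the legitimate instance
try:
    second = start_service(8080)  # the pointless second "start"
except OSError as err:
    # On Linux this is EADDRINUSE ("Address already in use"): the second
    # instance crashes and burns, spamming the logs with critical errors.
    print(f"second instance failed: {err}")
```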

[–] scarabic@lemmy.world 14 points 17 hours ago (3 children)

These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

[–] RagingRobot@lemmy.world 4 points 16 hours ago (1 children)

Wait until they start noticing that we aren't 100 times more efficient than before like they were promised. I'm sure they will take it out on us instead of the AI salesmen

[–] scarabic@lemmy.world 1 points 12 hours ago

It’s not helping that certain people internally are lining up to show off whizbang shit they can do. It’s always some demonstration, never "I completed this actual complex project on my own." But they get pats on the head and the rest of us are whipped harder.

[–] clif@lemmy.world 3 points 16 hours ago

Ask it to write a few million lines of lorem ipsum across a bunch of files for you.

... Then think harder about how to obfuscate your compliance because 10m lines in 10 min probably won't fly (or you'll get promoted to CTO)

[–] DragonTypeWyvern@midwest.social 3 points 17 hours ago

Malicious compliance time

[–] Xed@lemmy.blahaj.zone 11 points 17 hours ago

these tech bros just make up random shit to say to make a profit

[–] zeca@lemmy.ml 16 points 21 hours ago (2 children)

Volume means nothing. It could easily be writing 99.99% of all code, with only about 5% of that actually being used successfully by someone.

[–] UnderpantsWeevil@lemmy.world 5 points 18 hours ago (1 children)

I was going to say... this is a bit like claiming "AI is sending 90% of emails." Okay, but if it's all spam, what are you bragging about?

Very possible that 90% of code is being written by AI and we don't know it because it's all just garbage getting shelved or deleted in the back corner of a Microsoft datacenter.

[–] zqps@sh.itjust.works 1 points 4 hours ago

The number is bullshit in the first place, meant only to impress clueless CEOs.

[–] cupcakezealot@piefed.blahaj.zone 54 points 1 day ago (5 children)

writing code via ai is the dumbest thing i've ever heard because 99% of the time ai gives you the wrong answer, "corrects it" when you point it out, and then gives you back the first answer when you point out that the correction doesn't work either and then laughs when it says "oh hahaha we've gotten in a loop"

[–] cows_are_underrated@feddit.org 23 points 1 day ago (6 children)

You can use AI to generate code, but from my experience it's quite literally what you said. However, what I have to admit is that it's quite good at finding mistakes in your code. This is especially useful when you don't have that much experience and are still learning. Copy-paste relevant code and ask why it's not working, and in quite a lot of cases you get an explanation of what isn't working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that doesn't guarantee an answer.

[–] reddig33@lemmy.world 221 points 1 day ago (11 children)

“Full self driving is just 12 months away.“

[–] confuser@lemmy.zip 4 points 17 hours ago

AI writes 90% of my code... I don't code much.

[–] ArmchairAce1944@discuss.online 11 points 21 hours ago

I studied coding for years and even took a bootcamp (and did my own refresher courses), but I never landed a job. One thing AI can do for me is help with troubleshooting or some minor boilerplate code, but not do the job for me. I will be a hobbyist and hopefully aid in open source projects some day... any day now!

[–] poopkins@lemmy.world 57 points 1 day ago (14 children)

As an engineer, it's honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

[–] vane@lemmy.world 44 points 1 day ago (7 children)

It is writing 90% of code, 90% of code that goes to trash.

[–] chaosCruiser@futurology.today 125 points 1 day ago (2 children)

When the CEO of a tech company says that in x months this and that will happen, you know it’s just musk talk.

[–] inclementimmigrant@lemmy.world 13 points 23 hours ago (4 children)

My company and specifically my team are looking at incorporating AI as a supplement to our coding.

We looked at the code produced and determined that it's of the quality of a new hire. However, we're going in with eyes wide open (and for me, skeptical AF), planning to use it in a limited way to help relieve some of the burdens on our SW engineers, not replace them. I'm leading the push to use it for writing unit tests, since none of us particularly likes writing unit tests and they follow a very nice, easy, established pattern that the AI can follow.
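
The "established pattern" here might look something like the sketch below (assuming pytest; `apply_discount` and its module are hypothetical stand-ins for the code under test):

```python
import pytest

from mypackage.pricing import apply_discount  # hypothetical function under test

@pytest.mark.parametrize(
    ("price", "percent", "expected"),
    [
        (100.0, 0, 100.0),    # no discount
        (100.0, 25, 75.0),    # typical case
        (100.0, 100, 0.0),    # full discount
    ],
)
def test_apply_discount(price, percent, expected):
    # Fixed arrange/act/assert shape: exactly the kind of repetitive,
    # well-worn pattern a model can fill in for human review.
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```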

[–] UnderpantsWeevil@lemmy.world 7 points 18 hours ago (4 children)

We looked at the code produced and determined that it’s of the quality of a new hire.

As someone who did new hire training for about five years, this is not what I'd call promising.

[–] melsaskca@lemmy.ca 16 points 1 day ago (1 children)

Everyone throughout history who invented a widget that the masses wanted automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what is best for us. Fucking capitalism. Fucking billionaires.

[–] PieMePlenty@lemmy.world 25 points 1 day ago* (last edited 1 day ago) (1 children)

It's to hype up stock value. I don't even take it seriously anymore. Many businesses like these are mostly smoke and mirrors: oversell and under-deliver. It's not even exclusive to tech, it's just easier to do in tech. Musk says FSD is one year away. The company I worked for "sold" things we didn't even make and promised revenue that wasn't even economically possible. It's all the same spiel.

[–] psycho_driver@lemmy.world 30 points 1 day ago (4 children)

The good news is that AI is at a stage where it's more than capable of doing the CEO of Anthropic's job.
