this post was submitted on 09 Oct 2025
402 points (96.7% liked)

Technology

(page 2) 50 comments
[–] chunes@lemmy.world 46 points 1 week ago (1 children)

Software has a serious "one more lane will fix traffic" problem.

Don't give programmers better hardware or else they will write worse software. End of.

[–] fluckx@lemmy.world 19 points 1 week ago (13 children)

This is very true. You don't need a bigger database server, you need an index on that table you query all the time that's doing full table scans.
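For illustration, a minimal sketch of that fix using SQLite's query planner (the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the planner has no choice but a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
# -> detail column reads something like 'SCAN orders'

# The fix the comment describes: index the column you filter on.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
# -> detail column reads something like
#    'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```

Same query, same data; only the access path changes, and that's usually worth far more than a bigger box.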

[–] squaresinger@lemmy.world 40 points 1 week ago* (last edited 1 week ago) (8 children)

The article is very much off point.

  • Software quality wasn't great in 2018 and then suddenly declined. Software quality has been as shit as legally possible since the dawn of (programming) time.
  • The software crisis has never ended. It has only been increasing in severity.
  • Ever since, we have been trying to squeeze more programming output out of software developers at the cost of runtime performance.

The main issue is the software crisis: hardware performance follows Moore's law, while developer performance stays mostly constant.

If the memory of your computer is counted in bytes without an SI prefix and your CPU has maybe a dozen or two instructions, then it's possible for a single human being to comprehend everything the computer is doing and to program it very close to optimally.

The same is not possible if your computer has subsystems upon subsystems, and even the keyboard controller has more power and complexity than all the Apollo program's computers combined.

So to program exponentially more complex systems we would need exponentially more software developer budget. But since it's really hard to scale software developers exponentially, we've been trying to use abstraction layers to hide complexity, to share and re-use work (no need for everyone to re-invent the templating engine), and to have clear boundaries that allow for better cooperation.

That was the case long before Electron. Compiled languages started the trend, languages like Java or C# deepened it, and modern middleware and frameworks have only accelerated it.

The article's author complains about the chain "React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways". But he doesn't consider that even if you run "straight on bare metal", there's a whole stack of abstractions between your code and its execution. Every major component inside a PC nowadays runs its own separate, dedicated OS that neither the end user nor the developer of ordinary software ever sees.

But the main issue always comes back to the software crisis. If we had infinite developer resources we could write optimal software. But we don't, so we can't, and thus we put in abstraction layers to improve ease of use for the developers, because otherwise we would never ship anything.

If you want to complain, complain to the managers who don't allocate enough resources, to the investors who don't want to dump millions into the development of simple programs, and to the customers who aren't OK with simple things but want modern cutting-edge everything in their programs.

In the end it's sadly really the case: memory and performance get cheaper in an exponential fashion, while developers are still mere humans and their performance stays largely constant.

So which of these two values SHOULD we optimize for?


The real problem in regards to software quality is not abstraction layers but "business agile" (as in "business doesn't need to make any long term plans but can cancel or change anything at any time") and lack of QA budget.

[–] Reginald_T_Biter@lemmy.world 9 points 1 week ago (1 children)

The software crysis has never ended

MAXIMUM ARMOR

[–] ICastFist@programming.dev 5 points 1 week ago

Shit, my GPU is about to melt!

[–] Valmond@lemmy.world 5 points 1 week ago (1 children)

Yeah, that's what I hate about the agile way of dealing with things. Business wants prototypes ASAP, but if one is actually deemed useful, there's no budget to productize it, which means that if you don't want to take all the blame for a crappy app, you have to invest heavily in every prototype. Prototypes that get called next-gen projects but get cancelled nine times out of ten 🤷🏻‍♀️. Make it make sense.

[–] fodor@lemmy.zip 28 points 1 week ago (11 children)

All of the examples are commercial products. The author doesn't know or doesn't realize that this is a capitalist problem. Of course, there is bloat in some open source projects. But nothing like what is described in those examples.

And I don't think you can avoid that if you're a capitalist. You make money by adding features that maybe nobody wants. And you need to keep doing something new. Maintenance doesn't make you any money.

So this looks like AI plus capitalism.

[–] afk_strats@lemmy.world 27 points 1 week ago (7 children)

Accept that quality matters more than velocity. Ship slower, ship working. The cost of fixing production disasters dwarfs the cost of proper development.

This has been a struggle my entire career. Sometimes, the company listens. Sometimes they don't. It's a worthwhile fight but it is a systemic problem caused by management and short-term profit-seeking over healthy business growth

[–] dual_sport_dork@lemmy.world 24 points 1 week ago (3 children)

"Apparently there's never the money to do it right, but somehow there's always the money to do it twice."

Management never likes to have this brought to their attention, especially in a Told You So tone of voice. One thinks if this bothered pointy-haired types so much, maybe they could learn from their mistakes once in a while.

[–] ozymandias117@lemmy.world 15 points 1 week ago (1 children)

We'll just set up another retrospective meeting and have a lessons learned.

Then we won't change anything based off the findings of the retro and lessons learned.

[–] PattyMcB@lemmy.world 5 points 1 week ago (2 children)

Post-mortems always seemed like a waste of time to me, because nobody ever went back and read that particular Confluence page (especially the executives who made the same mistake again).

[–] HertzDentalBar@lemmy.blahaj.zone 6 points 1 week ago (1 children)

That applies in so many industries 😅 like you want it done right... Or do you want it done now? Now will cost you 10x long term though...

Welp now it is I guess.

[–] PattyMcB@lemmy.world 7 points 1 week ago (1 children)

You can have it fast, you can have it cheap, or you can have it good (high quality), but you can only pick two.

[–] ryathal@sh.itjust.works 5 points 1 week ago

There are levels to it. True quality isn't worth it, but absolute garbage costs a lot too. Some level that mostly works is the sweet spot.

[–] vane@lemmy.world 26 points 1 week ago* (last edited 1 week ago)

Quality in this economy ? We need to fire some people to cut costs and use telemetry to make sure everyone that's left uses AI to pay AI companies because our investors demand it because they invested all their money in AI and they see no return.

[–] panda_abyss@lemmy.ca 24 points 1 week ago (5 children)

Fabricated 4,000 fake user profiles to cover up the deletion

This has got to be a reinforcement learning issue, I had this happen the other day.

I asked Claude to fix some tests, so it fixed the tests by commenting out the failures. I guess that’s a way of fixing them that nobody would ever ask for.
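A made-up miniature of that anti-pattern (hypothetical function and test, not the actual code involved):

```python
def total_price(items, tax_rate=0.1):
    # Buggy implementation: never applies the tax.
    return sum(items)

def test_total_price_includes_tax():
    items = [10.0, 20.0]
    # assert total_price(items) == 33.0  # the real check, commented out so the test "passes"
    assert total_price(items) == 30.0    # weakened assertion that matches the broken code
```

The suite goes green, the bug ships.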

Absolutely moronic. These tools do this regularly. It’s how they pass benchmarks.

Also you can’t ask them why they did something, they have no capacity of introspection, they can’t read their input tokens, they just make up something that sounds plausible for “what were you thinking”.

[–] kayazere@feddit.nl 21 points 1 week ago* (last edited 1 week ago)

Another big problem not mentioned in the article is companies refusing to hire QA engineers to do actual testing before releasing.

The last two American companies I worked for had fired all their QA engineers or refused to hire any. Engineers were supposed to "own" their features and test them themselves before release. Obviously that can't provide the same level of testing, so the software gets released full of bugs and only the happy path works.

[–] neclimdul@lemmy.world 20 points 1 week ago (4 children)

"AI just weaponized existing incompetence."

Daamn. Harsh but hard to argue with.

[–] geoff@midwest.social 18 points 1 week ago (4 children)

Anyone else remember a few years ago when companies got rid of all their QA people because something something functional testing? Yeah.

The uncontrolled growth in abstractions is also very real and very damaging, and now that companies are addicted to the pace of feature delivery that this whole slipshod situation has made normal, they can't give it up.

[–] cygnus@lemmy.ca 18 points 1 week ago (5 children)

I wonder if this ties into our general disposability culture (throwing things away instead of repairing, etc)

[–] anamethatisnt@sopuli.xyz 15 points 1 week ago (1 children)

That and also man hour costs versus hardware costs. It's often cheaper to buy some extra ram than it is to pay someone to make the code more efficient.
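As a back-of-the-envelope sketch (every number below is an assumption; only the shape of the comparison matters):

```python
# All figures are hypothetical, for illustration only.
servers = 10
ram_upgrade_per_server = 80.0      # assumed price of an extra 16 GB per box
dev_hourly_rate = 100.0            # assumed fully loaded cost per developer hour
optimization_effort_hours = 80.0   # assumed two weeks of profiling and fixing

hardware_fix = servers * ram_upgrade_per_server
engineering_fix = dev_hourly_rate * optimization_effort_hours

print(f"Throw RAM at it:  ${hardware_fix:,.0f}")     # $800
print(f"Fix the code:     ${engineering_fix:,.0f}")  # $8,000
```

With numbers like these the hardware "fix" looks ten times cheaper up front, which is why it keeps winning even when it scales badly.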

[–] Pika@sh.itjust.works 18 points 1 week ago

I'm glad they added CrowdStrike to that article, because it adds a whole extra level of incompetence in the software field. The CrowdStrike failure should never have happened in the first place if Microsoft had properly enforced the stance they claim to have on driver security and the kernel.

The entire reason CrowdStrike was able to create that systemic failure was that they were (still are?) abusing the system MS has in place for signing kernel-level drivers. The process dodges MS review by shipping a standalone, already-certified driver that then live-patches itself, instead of having every update reviewed and certified. That setup allowed a live update to directly modify the kernel via the certified driver. Remotely injected, uncertified code should never have been allowed into a secure location in the first place. It was a failure on every level, for both MS and CS.

[–] PattyMcB@lemmy.world 15 points 1 week ago

Non-technical hiring managers are a bane for developers (and probably bad for any company). Just saying.

[–] _NetNomad@fedia.io 14 points 1 week ago (2 children)

i think about this every time i open outlook on my phone and have to wait a full minute for it to load and hopefully not crash, versus how it worked more or less instantly on my phone ten years ago. gajillions of dollars spent on improved hardware and improved network speed and capacity, and for what? machines that do the same thing in twice the amount of time if you're lucky

[–] socialsecurity@piefed.social 10 points 1 week ago (1 children)

Well obviously it has to ping 20 different servers from 5 different mega corporations!

[–] snoons@lemmy.ca 8 points 1 week ago

And verify your identity three times, for good measure, to make sure you're you and not someone that should be censored.

[–] The_Decryptor@aussie.zone 14 points 1 week ago

The calculator leaked 32GB of RAM, because the system has 32GB of RAM. Memory leaks are uncontrollable and expand to take the space they're given, if you had 16MB of RAM in the system then that's all it'd be able to take before crashing.
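A toy sketch of that kind of leak (hypothetical names): nothing is ever released, so the process just grows into whatever RAM exists.

```python
import itertools

_history = []  # module-level list that is never trimmed

def add(a: float, b: float) -> float:
    result = a + b
    _history.append((a, b, result))  # every result stays referenced forever
    return result

if __name__ == "__main__":
    # On a 32 GB machine this creeps toward 32 GB; with 16 MB of RAM it would
    # crash long before that -- the leak simply fills whatever space it is given.
    for i in itertools.count():
        add(i, i)
```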

Abstractions can be super powerful, but you need an understanding of why you're using the abstraction vs. what it's abstracting. It feels like a lot of them are being used simply to check off a list of buzzwords.

[–] odama626@lemmy.world 10 points 1 week ago (3 children)

Accurate, but ironically written by ChatGPT

[–] BillBurBaggins@lemmy.world 8 points 1 week ago

And you can't even zoom into the images on mobile. Maybe it's harder than they think if they can't even pick their blogging site without bugs

[–] themaninblack@lemmy.world 10 points 1 week ago* (last edited 1 week ago) (1 children)

Being obtuse for a moment, let me just say: build it right!

That means minimalism! No architecture astronauts! No unnecessary abstraction! No premature optimisation!

Lean on opinionated frameworks so as to focus on coding the business rules!

And for the love of all that is holy, have your developers sit next to the people that will be using the software!

All of this will inherently reduce runaway algorithmic complexity, prevent the sort of artisanal work that causes leakiness, and speed up your code.

[–] Axolotl_cpp@lemmy.ml 7 points 1 week ago (5 children)

Electron should be illegal

[–] FreedomAdvocate@lemmy.net.au 5 points 1 week ago* (last edited 1 week ago) (3 children)

These aren't feature requirements. They're memory leaks that nobody bothered to fix.

Yet all those examples have been fixed 🤣. Most of them are from 3-5 years ago and were fixed not long after being reported.

Software development is hard - that’s why not everyone can do it. You can do everything perfectly in your development, testing, and deployment, and there will still be tonnes of people that get issues if enough people use your program because not everyone’s machines are the same, not everyone’s OS is the same, etc. If you’ve ever run one of those “debloat windows” type programs, for example, your OS is probably fucked beyond belief and any problem you encounter will be due to that.

Big programs are updated almost constantly - some daily even! As development gets more and more advanced with more and more features and more and more platforms, it doesn’t get easier. What matters is if the problems get fixed, and these days you basically wait 24 hours max for a fix.

[–] ThePowerOfGeek@lemmy.world 5 points 1 week ago (9 children)

I don't trust some of the numbers in this article.

Microsoft Teams: 100% CPU usage on 32GB machines

I'm literally sitting here right now on a Teams call (I've already contributed what I needed to), looking at my CPU usage, which is staying in the 4.6% to 7.3% CPU range.

Is that still too high? Probably. Have I seen it hit 100% CPU usage? Yes, rarely (but that's usually a sign of a deeper issue).

Maybe the author is going with worst case scenario. But in that case he should probably qualify the examples more.

[–] MotoAsh@piefed.social 9 points 1 week ago (2 children)

Well, it's also stupid to use RAM size as an indicator of a machine's CPU load capability...

Definitely sending off some tech illiterate vibes.
