[–] PixelatedSaturn@lemmy.world 8 points 1 day ago (4 children)

I don't get it. Is AI bad in every possible way, never working, always lying, and singlehandedly destroying the planet while nobody uses it... and yet jobs are being lost?

[–] Zos_Kia@lemmynsfw.com 2 points 21 hours ago

Yeah, if AI worked well enough to replace jobs, it would work well enough to replace charity volunteers. This whole discourse reeks of populist thinking: "Our enemy is both pathologically weak and dangerously strong".

[–] fckreddit@lemmy.ml 10 points 1 day ago

This just goes to show how much corporations hate employees and hate paying them. They would rather sell shit than pay employees who are actually skilled at what they do.

[–] towerful@programming.dev 6 points 1 day ago (2 children)

I find AI to be extremely knowledgeable about everything, except anything I am knowledgeable about. Then it's like 80% wrong. Maybe 50% wrong. Either way, it's significant.

So the C-suite sees it churning out some basic code - not realising that code is 80% wrong - and thinks they don't need as many junior devs. Hell, might as well get rid of some mid-level devs too, because AI will make the remaining mid-level devs more efficient.

And when there aren't as many jobs for junior devs, there aren't as many people eligible to become mid-level or senior devs.

I know it seems like the whole "Immigrants are lazy and leech off benefits. Immigrants are taking all our jobs" kinda thing.
But actually it's that LLMs are very good at predicting what the next word might be, not what it should be.
So the output seems correct to people who don't actually know the subject, while people who do know can see it's wrong (though maybe not all the ways it's wrong) and have to spend as much time fixing it as they would have spent just fucking writing it themselves in the first place.
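
Something like this toy sketch (in Python; nothing like a real transformer, purely to illustrate "most likely" vs "correct"):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the build passed the build passed the build failed".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    # Returns the MOST POPULAR next word - popularity, not correctness.
    return following[word].most_common(1)[0][0]

print(predict_next("build"))  # "passed" - even when this build actually failed
```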

Besides which, by the time an AI prompt is suitably created to get the LLM to generate its approximation of the solution for a problem.... Most of the work is done, the programmer has constrained the problem. The coding part is trivial in comparison.

[–] pohart@programming.dev 2 points 15 hours ago

> I find AI to be extremely knowledgeable about everything, except anything I am knowledgeable about.

This matches my experience exactly. The problem is that the C-suite generally isn't expert in anything, and doesn't even realize it. They're going to keep thinking AI is amazing forever, and when the crash comes they won't understand that this is where it came from.

[–] PixelatedSaturn@lemmy.world 3 points 1 day ago

I think most people don't understand what programmers do. They don't know why you need all these people to build an app. They think it's just coding.

[–] Part4@infosec.pub -1 points 1 day ago* (last edited 1 day ago) (2 children)

Even free-tier LLMs have been great as a coding assistant for me. I can see LLMs already being useful in a lot of roles, but without really tightly controlled AI agents (intermediary programs that require quite a bit of human involvement) I don't see how they can actually replace many entry-level roles effectively. There is a lot of hype around these language models, which seems to have created a bit of a bubble that will likely pop.

Having said that, just as the dotcom bubble popping didn't stop the internet from developing, AI is likely to continue past this hype/bubble period.

If a business thinks it will get a competitive advantage by using AI, that business will use it, which drives competitors to do the same even if they don't particularly want to. You can scale this up to national competition between the US and China. In this way it is a race to the bottom.

If you're really concerned about work and stability: a job that involves staring at a computer all day, where the product of the work exists only in the computer, might be susceptible to being made redundant by AI agents eventually. Think about stuff that happens in the real world - skilled trades etc. are the safest bet with respect to AI. Read around and figure out what is best for you.

[–] towerful@programming.dev 3 points 1 day ago (1 children)

Programming isn't about syntax or language.
LLMs can't do problem-solving.
Once a problem has been solved, the syntax and language are easy.
But reasoning about the problem is the hard part.

Take the classic case of "how many 'r's in 'strawberry'", where LLMs would state two occurrences.

Just check Google's AI Mode.
The strawberry problem was found and reported on, and has been specifically solved.

Prompted "how many 'r's in the word 'strawberry'":

> There are three 'r's in the word 'strawberry'. The letters are: S-T-R-A-W-B-E-R-R-Y.

Prompted "how many 'c's in the word 'occurrence'":

The word "occurrence" has two occurrences of the letter 'c'.

So, the specific case has been solved. But not the problem.
In fact, I could slightly alter my prompt and get either 2 or 3 as the answer.
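
Meanwhile the general problem stays trivial in ordinary code, which is the point. A quick sketch (the commented tokenizer peek assumes the tiktoken package, and tokenization is one common explanation for the failure, not a confirmed mechanism):

```python
# Deterministic letter counting - the general problem, solved in two lines.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
print(count_letter("occurrence", "c"))  # 3 - the model above said two

# One common explanation (an assumption, not confirmed case-by-case):
# the model never sees characters, only subword tokens, e.g.:
#   import tiktoken  # assumes tiktoken is installed
#   enc = tiktoken.get_encoding("cl100k_base")
#   print([enc.decode([t]) for t in enc.encode("strawberry")])
# which prints something like ['str', 'aw', 'berry'] - no letter boundaries.
```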

[–] Part4@infosec.pub 1 points 1 day ago* (last edited 1 day ago)

None of this contradicts anything in my post.

Edit - but I will add that the AI agent is written to manage the limitations of the LLM: to do, in a very loose sense, the kind of 'thinking' (they don't really think) that the LLM can't do on its own (to briefly address the point in your post).
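
To make "manage the limitations" concrete, a minimal sketch (call_llm() is a hypothetical stand-in for whatever model API you'd use; the point is the routing, not the model):

```python
import re

def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical: plug in your LLM client here")

def answer(question: str) -> str:
    # Intercept a class of questions the LLM is known to get wrong
    # and handle them with ordinary deterministic code instead.
    m = re.match(r"how many '(.)'s in the word '(\w+)'", question.lower())
    if m:
        letter, word = m.groups()
        return f"There are {word.count(letter)} '{letter}'s in '{word}'."
    # Everything else still goes to the model.
    return call_llm(question)

print(answer("How many 'r's in the word 'strawberry'"))  # deterministic: 3
```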

[–] PixelatedSaturn@lemmy.world 3 points 1 day ago (2 children)

That's actually well said. I use it a lot and really love it, but I've found this is a forbidden opinion on the fediverse 😆 Usually I get at least insulted immediately, if not banned, for saying that. I was at a company that tried to develop some AI apps and kind of failed, but I learned a lot about how to use AI: what can be done and what is not sensible to do with it.

When this whole thing began I thought a lot about finding a job away from tech, but I slowly realized AI is not replacing humans any time soon, so I remained in tech - though, for good or bad, not in AI.

[–] towerful@programming.dev 3 points 1 day ago

> I was at a company that tried to develop some AI apps and kind of failed, but I learned a lot about how to use AI: what can be done and what is not sensible to do with it.

That's basically the "AI is replacing jobs. AI can't replace jobs" paradox.
The C-suite doesn't get it. It's a hugely accessible framework that anyone can use, but only trained people can use the results. And the C-suite trusts the results, because software has been so predictable (so trustworthy) in the past.
So the C-suite replaces employees with AI. The AI can't actually do the job it pretends it can do. Everyone suffers, and the people selling the shovels profit the most from the gold rush.
It lies on its resume and in its interviews, but in ways that are hard to detect.

I bet there was a similar sentiment when automation replaced blue-collar jobs.
And yet all those automations still require tool-and-die manufacturing and maintenance. Buy a tool and die purpose-built for your process from wherever, and a year down the line you still need the supplier to maintain the actual die - the actuators and machine can be maintained by anyone, but the "business logic" is what produces a good, high-quality part. Process changes? Updated design? Switching to a slightly different material? Back to the supplier for a new die.
But so many jobs were made "redundant" by cheap tooling and automation, and now it's (nearly) impossible to actually manufacture something at scale in America.

Except LLMs take the most likely next step, to the most likely dimensions, based on the prompt and on the popularity of similar previous processes.
That's fine for art and other subjective media; not for manufacturing, and not for engineering.

I guess you could write automated tests which define the behaviour you want.
Probably better to write the behaviour you want and get AI to generate automated tests....
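
A rough sketch of the first idea (generate_with_llm() is hypothetical; the human-written checks are the part that matters):

```python
# Hypothetical gate: human-written checks define the behaviour;
# LLM-generated code is only accepted if it passes them.

def generate_with_llm(spec: str) -> str:
    raise NotImplementedError("hypothetical: returns candidate source code")

def accept(candidate_source: str) -> bool:
    namespace = {}
    # Never exec untrusted code outside a sandbox; this is a sketch.
    exec(candidate_source, namespace)
    slugify = namespace["slugify"]
    # The behaviour we want, as executable checks:
    return (
        slugify("Hello World") == "hello-world"
        and slugify("  extra  spaces  ") == "extra-spaces"
        and slugify("UPPER") == "upper"
    )

# In the LLM workflow you'd feed accept() the output of generate_with_llm(spec).
# It works with a hand-written candidate just as well:
candidate = 'def slugify(s):\n    return "-".join(s.split()).lower()'
print(accept(candidate))  # True
```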

[–] Part4@infosec.pub 1 points 1 day ago* (last edited 21 hours ago)

I see my post has been downvoted a little bit; I don't think there is much wrong in it. I get that people don't want large parts of their job to be made redundant by tech bros, and assume they are probably just downvoting the entire concept. I haven't disclosed my opinion on the morality of a lot of what LLM builders have done; I just stated what I thought the reality of the situation was.

I will say that I think upvotes and downvotes should be dropped; they instigate all sorts of unhealthy behaviour, because the little dopamine hits they generate are addictive.

The reality is that 'bespoke' AI agents are being written and deployed on a small scale now, and they are useful. And should the large LLMs fail to achieve the results they have hyped (as I expect), AI agents are a rapidly developing technology waiting to go mainstream which, when successful, can harness the functionality of LLMs while eliminating a lot of the errors they make.

If you are career-minded and think your role could be affected, I recommend looking into it and having a think. Or don't. Woteva - I can't predict the future, but I do know where the technology is at, and I'm just doing people who are concerned about it a favour by giving them a bit of a heads up about AI agents. Should their 'potential' be fulfilled, they will be hugely impactful, but there is still time to prepare. Forewarned is forearmed.