this post was submitted on 26 Aug 2025
[–] SoftestSapphic@lemmy.world 15 points 1 day ago (1 children)

There haven't been entry-level jobs in 20 years.

They all require experience and no longer offer training.

[–] pohart@programming.dev 2 points 9 hours ago (1 children)

Every software engineering job expects you to come in trained, and they have since at least the eighties. The standard training for a software engineer is a four-year BS or BE. We've been churning out too many, for sure, but employers have never trained developers from scratch. And they will always need to train developers in their own business and systems.

[–] SoftestSapphic@lemmy.world 1 points 8 hours ago

This is cope, unfortunately.

[–] onlinepersona@programming.dev 20 points 1 day ago (4 children)

The genius. Where are you going to get your senior engineers from in 5 years if you don't have juniors to begin with?

Anti Commercial-AI license

[–] sukhmel@programming.dev 4 points 15 hours ago (1 children)

Oh, I'm sure the plan is to not need senior engineers by that time, and just replace them with AI, too

[–] onlinepersona@programming.dev 7 points 13 hours ago (1 children)

And if that doesn't work, panic and blame everybody else.

[–] Chakravanti@monero.town 1 points 11 hours ago (1 children)

There will be no one in 5 years.

[–] onlinepersona@programming.dev 3 points 9 hours ago* (last edited 9 hours ago)

J̴̡͖̟̗̺̣̭͌͒ṵ̶͎͙̭̯̌̒̓̏̾̂s̶̙͍̖̻͚̪̓̓͛̏́̄͛̈͝t̶̨̯̳͓̠̮̯̻̙̗̬̫̩͕͖̲͆̉̍̓̓̆̈́̅͌͌͜͠͝͝ ̸̡͎̮͇͓͕͇̲̩̣̋̈ͅẗ̸̨̻͈̖̯̝͔̗̫̪͓̝̪́̎̈́̌̂͋͒͌̑͗̑̚͘̕ḧ̸͚͕́̇͆̇͆̓̒̊͐͒̕̚͝͠e̴̢̛̛͖̺̟͇͑̂͋͗͂̎̆̂̀͑͆͋ ̴̢̛͛͋͌͛̎̔ͅͅv̶̯̬̺̥̊̋̑́͐͋͗̐̈̏̇̾͑̃͜ͅö̵̠̼̦̫̟͖̦̦̭́̓̓̔̚į̶̡̻͇̯̦̺̼̯͇̠͍̼͓̋͘͘͝d̵̬̬͔̹̤̹̠̟͎̲͓͕̆̊̈́͐̍̕ ̶͖̜͓͇̭̪̮̤̈́͑̆̎̍͗̇̃̾̑̃͝͠ͅ⚫

[–] JeremyHuntQW12@lemmy.world 4 points 19 hours ago

I don't know. Poland?

[–] edgemaster72@lemmy.world 12 points 1 day ago

5 years? Line must go up this quarter, to hell with anything else

[–] vane@lemmy.world 2 points 1 day ago (1 children)

Let's be honest. Who will remember who did what 5 years from now? If it's not a major military conflict or the flop of the year, nobody gives a fuck.

When you're trying to hire senior engineers and there are 5 years' worth of senior engineers missing, somebody will remember.

Anti Commercial-AI license

[–] Deflated0ne@lemmy.world 27 points 1 day ago (2 children)

That was the point.

When we say capitalism eats itself, this is the kind of thing we're referring to: terminal, end-stage profit seeking, to the detriment of literally everyone. Job seekers, the currently employed who will have their wages suppressed even more heavily, even career employees who will be pushed out so they can be replaced. Even, ultimately, the businesses engaging in this suicidal nonsense themselves.

How does capitalism continue its fantasy of endless growth on a finite planet if nobody has money to spend on products and services? The short answer is that it doesn't. It will limp along on financialization and debt. But that correction will inevitably come.

I'm going to enjoy watching the AI bubble burst.

[–] fckreddit@lemmy.ml 4 points 1 day ago

Me too. I am even going to enjoy watching these corporations get fucked over it. I wish there were a way to create a parallel economy that doesn't rely on massive corporations as the driving force. One that prioritizes humanity over economic gain.

Maybe there isn't, but a guy can dream.

[–] Cricket@lemmy.zip 2 points 1 day ago (1 children)

It will limp along on financialization and debt.

I'm far from an expert or even very knowledgeable about economics (I've never taken macro- or microeconomics), but hasn't that been the case for decades?

[–] Deflated0ne@lemmy.world 1 points 20 hours ago (1 children)

Yes. Since the late 90s. But we're reaching the end now. It's taken this long for the chickens of Reaganomics to come home to roost, and for the fruits of "fiduciary responsibility" to bear out.

[–] Cricket@lemmy.zip 2 points 19 hours ago (1 children)

Don't some people think that it started even earlier than that, when we abandoned the gold standard? I always saw that as some kind of crank theory, but nowadays I'm not so sure. I haven't looked at that debate much in depth though.

[–] Deflated0ne@lemmy.world 3 points 17 hours ago (1 children)

I don't know. I've likewise dismissed gold standard discourse as boomer nonsense forever. Doesn't matter now. There isn't enough gold on earth to service our current debt. Forget about paying it off or building a functional economy. We've grown beyond it.

But as for supply-side economics: it blew its whole load in the late 80s and 90s. By 2000 we were going downhill. Those old enough might remember the news of the day talking about us becoming a "service economy".

[–] Cricket@lemmy.zip 2 points 6 hours ago

Good points, and I agree with all of it. I definitely remember all the talk about becoming a "service economy".

[–] JeremyHuntQW12@lemmy.world 3 points 19 hours ago (1 children)

The US is probably in a recession.

These are the types of jobs (sales reps) that are usually cut first.

There are no "entry level" medical jobs.

[–] spicehoarder@lemmy.zip 2 points 16 hours ago

We are. You can tell by the housing market, but nobody in power will admit it.

[–] nymnympseudonym@piefed.social 9 points 1 day ago* (last edited 1 day ago) (1 children)

As a person who has been managing software development teams for 30+ years, I have an observation.
Invariably, some employees are "average". Not super geniuses, not workaholics, but people who (say) have been doing a good job with customer support. Generally they can code simple things and know the OS versions we support at a power-user level -- but not as well as a sysadmin.

I do find that if I tell them to use ChatGPT to help debug issues, they do almost as well as if a sysadmin or more experienced programmer had picked up the ticket. The troubleshooting gets better, and they sometimes fix an actual root-cause bug in our product code.

[–] Feyd@programming.dev 10 points 1 day ago (1 children)

Any time somebody I work with uses AI for debugging, they get wrapped around the axle and I have to disabuse them of the nonsense direction they've been led in just to get them back to what used to be the starting point.

[–] nymnympseudonym@piefed.social 7 points 1 day ago (1 children)

maybe our averages are different

[–] Feyd@programming.dev 2 points 1 day ago (2 children)

Maybe. Or maybe you're a disconnected manager who doesn't know what they're talking about. Or maybe you have stock in AI companies. Or maybe you've drunk the Kool-Aid they've been passing out at every all-hands meeting.

Who knows ¯_(ツ)_/¯

All I know is I'm tired of everyone saying the lie machine somehow makes people more competent, when all I've seen is people getting worse at their jobs over the last year.

[–] towerful@programming.dev 7 points 1 day ago (1 children)

You dropped one of these: \

It's spare, you can use that one ^

[–] Feyd@programming.dev 5 points 1 day ago

(╯°□°)╯︵ \

               ¯_(ツ)_/¯
[–] towerful@programming.dev 3 points 1 day ago

I think it can elevate the level of a power user, but not to the level of a sysadmin -- unless the user is picking apart everything the LLM tells them to do and reading the man pages. At which point, they are pretty much just learning to become a sysadmin.

A smart power user would likely search for some solutions to a problem, get some rough background, then ask an LLM to either explain how a solution solves their problem, or to use their research to validate the response of the LLM.

I don't think an LLM can elevate a normal user to a power user, because the user is still going to be copying and pasting commands without understanding them (unless they want to understand them instead of merely solving the problem in front of them, at which point they are learning to become a power user).

I can imagine a general sentiment amongst employees of "support the use of AI or be the first to be laid off".
So even if it lets them close tickets earlier, the tickets might not actually be resolved. Instead of kicking it to someone who actually knows how to fix it, they've just bodged it - and hopefully that bodge doesn't fuck things up down the line.
But the metrics look better, and the employees aren't going to complain.
Looks great to a manager.

[–] PixelatedSaturn@lemmy.world 7 points 1 day ago (4 children)

I don't get it. Is AI bad in every possible way, never working and always lying, single-handedly destroying the planet while nobody uses it... and yet jobs are being lost?

[–] Zos_Kia@lemmynsfw.com 1 points 15 hours ago

Yeah, if AI worked well enough to replace jobs, it would work well enough to replace charity volunteers. This whole discourse reeks of populist thinking: "Our enemy is both pathologically weak and dangerously strong."

[–] fckreddit@lemmy.ml 9 points 1 day ago

This just goes to show how much corporations hate employees and hate paying them. They would rather sell shit than pay employees who are actually skilled at what they do.

[–] towerful@programming.dev 6 points 1 day ago (2 children)

I find AI to be extremely knowledgeable about everything, except anything I am knowledgeable about. Then it's like 80% wrong. Maybe 50% wrong. But it's significant.

So, the C-suite see it churning out some basic code - not realising that code is 80% wrong - and think they don't need as many junior devs. Hell, might as well get rid of some mid-level devs as well, because AI will make the other mid-level devs more efficient.

And when there aren't as many jobs for junior devs, there aren't as many people eligible for mid devs or senior devs.

I know it sounds like the whole "Immigrants are lazy and leech off benefits. Immigrants are taking all our jobs" kind of thing.
But actually, it's that LLMs are very good at predicting what the next word might be, not what it should be.
So it seems correct to people who don't actually know, while people who do know can see it's wrong (though maybe not in all the ways it's wrong) and have to spend as much time fixing it as they would have spent just fucking writing it themselves in the first place.

Besides which, by the time a prompt has been suitably crafted to get the LLM to generate its approximation of a solution... most of the work is done: the programmer has constrained the problem. The coding part is trivial in comparison.
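
To make that "might be, not should be" point concrete, here's a toy sketch of a single next-token step. The candidate words and their weights are invented for illustration, not taken from any real model:

```python
import random

# Hypothetical single next-token step: the model ranks candidates by
# likelihood, not by correctness, and sampling follows those probabilities.
candidates = {"three": 0.48, "two": 0.40, "four": 0.12}  # invented weights
tokens, weights = zip(*candidates.items())

# Five independent samples: mostly-plausible answers, only sometimes the
# right one, chosen by popularity rather than by actually counting.
print(random.choices(tokens, weights=weights, k=5))
```

Run it twice and you get different answers; nothing in the mechanism checks whether an answer is correct.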

[–] pohart@programming.dev 2 points 9 hours ago

I find AI to be extremely knowledgeable about everything, except anything I am knowledgeable about.

This matches my experience exactly. The problem is that the C-suite generally aren't experts in anything, and don't even realize it. They're going to keep thinking AI is amazing forever, and they won't understand that that's where the crash came from.

[–] PixelatedSaturn@lemmy.world 3 points 1 day ago

I think most people don't understand what programmers do. They don't know why you need all these people to build an app. They think it's just coding.

[–] Part4@infosec.pub -1 points 1 day ago* (last edited 21 hours ago) (2 children)

Even free-tier LLMs have been great as a coding assistant for me. I can see LLMs already being useful in a lot of roles, but without really tightly controlled AI agents (intermediary programs that require quite a bit of human involvement - sketched at the end of this comment) I don't see how they can effectively replace many entry-level roles. There is a lot of hype around these language models, which seems to have created a bit of a bubble that will likely pop.

Having said that, just as the dotcom bubble popping didn't stop the internet from developing, AI is likely to continue past this hype/bubble period.

If a business thinks it will get a competitive advantage by using AI, that business will use it, which drives competitors to do the same even if they don't particularly want to. You can scale this up to national competition between the US and China. In this way it is a race to the bottom.

If you're really concerned about work and stability: a job that involves staring at a computer all day, where the product of the work is something that exists inside the computer, might be susceptible to being made redundant by AI agents eventually. Think about stuff that happens in the real world - skilled trades etc. are the safest bet wrt AI. Read around and figure out what is best for you.
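
To give a rough idea of what I mean by a "tightly controlled" agent, here's a minimal sketch of the intermediary-program pattern. Every name in it (`llm_propose_action`, `execute`, `handle`) is a hypothetical placeholder, not a real API:

```python
# Minimal human-in-the-loop agent skeleton. Everything here is a
# hypothetical placeholder, not a real library or service.

def llm_propose_action(ticket: str) -> str:
    # Stand-in for an LLM call that drafts a remediation step.
    return f"restart the service mentioned in {ticket!r}"

def execute(action: str) -> None:
    # Stand-in for whatever would actually carry the action out.
    print(f"executing: {action}")

def handle(ticket: str) -> None:
    action = llm_propose_action(ticket)
    # The agent never acts on its own: a human approves or escalates.
    if input(f"Run {action!r}? [y/N] ").strip().lower() == "y":
        execute(action)
    else:
        print("escalating to a human engineer")

handle("web frontend returns 502 after deploy")
```

The point of the pattern is that the LLM only proposes; the human involvement is the control, which is also why it doesn't cleanly replace a whole role.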

[–] towerful@programming.dev 3 points 22 hours ago (1 children)

Programming isn't about syntax or language.
LLMs can't do problem solving.
Once a problem has been solved, the syntax and language are easy.
But reasoning about the problem is the hard part.

Like the classic case of "how many 'r's in 'strawberry'", where LLMs would state 2 occurrences.

Just check Google's AI Mode.
The strawberry problem was found and reported on, and has been specifically solved.

Prompted "how many 'r's in the word 'strawberry'":

There are three 'r's in the word 'strawberry'. The letters are: S-T-R-A-W-B-E-R-R-Y.

Prompted "how many 'c's in the word 'occurrence'":

The word "occurrence" has two occurrences of the letter 'c'.

So the specific case has been solved, but not the problem.
In fact, I could slightly alter my prompt and get either 2 or 3 as the answer.
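
For contrast, the underlying task is deterministic string handling in ordinary code, and it never wavers between 2 and 3. A quick Python check:

```python
# Counting letters is deterministic string handling, not prediction.
for word, letter in [("strawberry", "r"), ("occurrence", "c")]:
    print(f"{word!r} contains {word.count(letter)} {letter!r}s")
```

Which also shows the "two" answer quoted above is wrong: o-c-c-u-r-r-e-n-c-e has three 'c's.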

[–] Part4@infosec.pub 1 points 22 hours ago* (last edited 21 hours ago)

None of this contradicts anything in my post.

Edit: but I will add that the AI agent is written to manage the limitations of the LLM, to do (in a very loose sense) the kind of "thinking" the LLM can't do; they don't really think. (To try and briefly address the point in your post.)

[–] PixelatedSaturn@lemmy.world 3 points 1 day ago (2 children)

That's actually well said. I use it a lot and really love it, but I've found this is a forbidden opinion on the fediverse 😆 Usually I at least get insulted immediately, if not banned, for saying that. I was in a company that tried to develop some AI apps and kind of failed, but I learned a lot about how to use AI: what can be done and what is not sensible to do with it.

When this whole thing began, I thought a lot about finding a job away from tech, but I slowly realized AI is not replacing humans any time soon, so I remained in tech - though, for good or bad, not in AI.

[–] towerful@programming.dev 3 points 22 hours ago

I was in a company that tried to develop some AI apps and kind of failed, but I learned a lot about how to use AI: what can be done and what is not sensible to do with it

That's basically the "AI is replacing jobs / AI can't replace jobs" thing.
The C-suite don't get it. It's a hugely accessible framework that anyone can use, but only trained people can use the results. Yet the C-suite trust the results, because software has been so predictable (so trustworthy) in the past.
So the C-suite replace employees with AI, the AI can't actually do the job it pretends it can do, everyone suffers, and the people selling the shovels profit the most from the gold rush.
It lies on its resume and in its interviews, but in ways that are hard to detect.

I bet there was a similar sentiment when automation replaced blue-collar jobs.
And yet all those automations still require tool-and-die manufacturing and maintenance. Buy a tool and die from wherever, purpose-built for your process, and a year down the line you'll need the supplier to maintain the actual die - the actuators and machine can be maintained by anyone, but the "business logic" is what produces a good, high-quality part. Process changes? Updated design? Changing to a slightly different material? Back to the supplier for a new die.
But so many jobs were made "redundant" by cheap tooling and automation, and now it's (nearly) impossible to actually manufacture something at scale in America.

Except LLMs produce the next most likely step, to the most likely dimensions, based on the prompt and on the popularity of similar previous processes.
That's fine for art and subjective media, but not for manufacturing and not for engineering.

I guess you could write automated tests which define the behaviour you want.
Probably better to write the behaviour you want and get AI to generate the automated tests...
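
As a sketch of that test-first idea (the `slugify` spec here is invented purely for illustration; the test is the part the human writes, the implementation is the part you'd let the model attempt):

```python
import re

# A human-written test that pins down the behaviour we actually want.
def test_slugify_behaviour():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Reference implementation standing in for whatever the LLM would generate;
# it only counts if the behaviour-defining test above passes.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

test_slugify_behaviour()
print("behaviour pinned down; the implementation is the easy part")
```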

[–] Part4@infosec.pub 1 points 22 hours ago* (last edited 16 hours ago)

I see my post has been downvoted a little bit; I don't think there is much wrong in it. I get that people don't want large parts of their jobs to be made redundant by tech bros, and I assume they are probably just downvoting the entire concept. I haven't disclosed my opinion on the morality of a lot of what LLM builders have done; I just stated what I think the reality of the situation is.

I will say that I think upvotes and downvotes should be dropped; they instigate all sorts of unhealthy behaviour, because the little dopamine hits they generate are addictive.

The reality is that "bespoke" AI agents are being written and deployed on a small scale now, and they are useful. And should the large LLMs fail to achieve the results that have been hyped (as I expect), AI agents are a rapidly developing technology waiting to go mainstream which, when successful, can harness the functionality of LLMs while eliminating a lot of the errors they make.

If you are career-minded and think your role could be affected, I recommend looking into it and having a think. Or don't. Woteva - I can't predict the future, but I do know where the technology is at, and I'm just doing people who are concerned about it a favour by giving them a bit of a heads-up about AI agents. Should their "potential" be fulfilled, they will be hugely impactful, but there is still time to prepare. Forewarned is forearmed.