this post was submitted on 05 Sep 2025
21 points (92.0% liked)

Casual Conversation


As much as I like AI, the last few years have been kind of a lot, sometimes just too much for me.

I can't even watch YouTube without getting videos auto-translated into my language, which is quite annoying.

The first result on Google is an AI summary telling me the stuff I supposedly need to know; I don't even have to look at real human threads at this point.

I'm not even going to go deep into the AI-generated videos, news, etc., because that is already insane, and we haven't even peaked.

I am 40 years old. Imagine studying for a few years for a 200,000 €/$ (or whatever currency) degree, just to find out that the job won't even exist by 2028.

Imagine finding out that you won't be able to pay off your debt because most fast-food restaurants will use AI/bots that can serve, prepare, and clean 24/7, while a "useless" human needs breaks, wants money, needs days off, and can only work 8-hour shifts.

I know this sounds crazy, but I really think that by 2030 we will have 80% of jobs replaced by AI, and the new jobs that might evolve out of the jobs that have vanished will only be doable by AGI/superintelligence.

Think about it... while machines in the 1800s replaced jobs or made them easier, they still required human manpower to produce them, maintain them, and even use them.

Now we are basically replacing what made humans human: our brains. As far as we know, the human brain has never been replaced by anything, and it is the organ that put us where we are in the food chain. And we are creating something that is BETTER, FASTER, and MORE EFFICIENT than our brain.

It could all be cool and nice and fun and games if we weren't replacing humans in such a very short time frame, by 2030. It could all be cool... if every country weren't competing in this race to superintelligence.

Looking at this neutrally and seeing what is happening, I think I might be doomed in 5 years.

I really think I will be f*cked up in 5 years. A job is annoying, but it's also a human's purpose: a way to express yourself and be useful to society. If this is taken away, we will find 95% of humanity in a huge depression, and suicide rates are going to rise by 2030.

Edit: We can't even imagine or visualise what superintelligence will do and can do. Before we even understand what this AI just created and offered us, the next best thing has already been produced.

We are basically getting smarter with every new upgrade AI gives us, but getting dumber in the process because we can't even PROCESS the new stuff.

Simple example: currently, phones get upgrades every 1-2 years, and we can read what's new and see the new features. Imagine those phones being produced every week with new, better, and more useful upgrades. We'd buy the phone, understand the phone, and after a week the next phone with 5 new features is on the market and ours is outdated.

We can't process the new upgrades in such a short time frame. We aren't capable of that. This leads to us getting dumber with every new upgrade.

Imagine AI creating a Boeing, and after a year we understand how to use it. But during that year, it has already released a Boeing that can fly at the speed of flight. It can't be used, because humans can't use it and would die at those speeds. But AI could.

AI will be producing so much trash knowledge lol.

top 24 comments
[–] Plebcouncilman@sh.itjust.works 10 points 2 days ago (2 children)

No because LLMs are actually quite dumb and nowhere near close to AGI yet.

Some jobs will be lost; in particular, I can see concept artists getting cut everywhere they're used. But nothing catastrophic yet.

Anyways I’m not afraid, socialism won’t be viable until we have the capacity to automate all labor.

[–] Asafum@feddit.nl 3 points 2 days ago (1 children)

Anyways I’m not afraid, socialism won’t be viable until we have the capacity to automate all labor.

Unfortunately I'm of the belief that even after that, it is going to take a metric fuckton of suffering/death and a partial economic collapse in order to finally transfer systems...

The ownership class exists on their separate islands of industry. The Big Agriculture fat cat won't care that they don't employ anyone anymore. Google won't care that it doesn't employ anyone anymore. Car manufacturers won't care that they don't employ anyone anymore... etc. Society will suffer from poverty, but each master of their island will fight tooth and nail to prevent any change that requires giving up their power/wealth, until it gets to the point where it simply cannot be sustained because no one can buy their products anymore...

Absolutely. But thus is the reality of human history. It’s very nice to want to avert it, but you really can’t. Many people will think that saying this is immoral but I’m simply being descriptive not normative. You cannot have a complete reconfiguration of society without collateral damage, it’s just not possible. I do not advocate for being the first one to throw the stone, but stones will be thrown.

[–] bizarroland@lemmy.world 1 points 2 days ago

I feel like even applying the term "dumb" to current AI, LLMs is vastly overstating their capabilities.

LLMs are just very good text prediction machines. They do not have any actual reasoning capabilities. Everything that they do is a trick. It's like Oz hiding behind the machine that makes his head big.

There is no sense of self, there is no sense of like intellectual continuity, and anything that they've tacked onto the basic maths that make these functions work so well is just more smoke and mirrors, hiding over the fact that what's actually going on is numbers being computed.

I do believe that LLMs are a puzzle piece in actual AGI, but the separation, the difference, the distance between what they currently are and what they would need to be to actually be AGI is a very long way away.

I imagine that with another 50 years of hard work by humans and trillions of dollars backing the process, they might succeed. But I don't imagine that it will be any sooner than 35 years at the most.

Further, I believe that the world will have a harder time continuing along its current path long enough to make that a possibility than it will in actually achieving AGI.

[–] LePoisson@lemmy.world 8 points 2 days ago

I think the more you sit down and learn about current AI models (LLMs), the more you will realize that a lot, and I do mean a lot, of the promises and pie-in-the-sky rhetoric is just tech bros out to raise capital and take money from dumb rich people.

The more you learn about current AI, the less you will fear it because it is not some magic bullet. Every single AI out there is riddled with issues just bubbling under the surface.

They don't "think" like people do, they're basically very fancy autocorrect - yes that's a gigantic oversimplification but it isn't too far off the truth.

Even in white-collar work it isn't great at straight-up replacing people; it's a useful tool, but it isn't a person, and AI still makes a lot of mistakes.

Anyway, I think it's a good tool. I use it, but I can tell you I'm not scared of AI taking everything away all at once.

I do think present-day automation tech paired with AI will sooner or later eliminate some jobs, but I'm hopeful we'll see a systemic shift to handle that. I am concerned that one day we're going to get efficient enough at enough stuff that we'll have a permanent unemployed underclass of 20 to 30% of the population, but things like that are solvable crises if we choose to solve them.

That's where universal basic income, healthcare as an easily accessible right, upping the minimum wage, and reducing the normal working week from 40 hours to, say, 30 would help. Or would help, if we lived in a functioning country not teetering on the brink of all-out fascism (USA).

[–] cloudless@piefed.social 7 points 2 days ago (3 children)

I don’t think we will see AGI for at least another decade. Right now, LLMs are no longer making significant progress (diminishing returns on scaling); AI research will need to find new ways to achieve AGI. This will probably require breakthroughs in quantum computing and energy infrastructure.

[–] rauls5@lemmy.zip 4 points 2 days ago (1 children)

I’m afraid that LLMs are already producing “good enough” results to replace a large portion of workers. It won’t take AGI. If a corporation can achieve mediocre results basically for free, it will choose that over workers.

It’s already happening: mediocre AI customer support, photography, illustration, writing, and music are already permeating every facet of our lives.

[–] hypnicjerk@lemmy.world 2 points 2 days ago

'large portion' is doing some extremely heavy lifting there. those slices of industry that are leaning heavily into genai are still experimental and not delivering nearly the promised results. and those services are operating at heavy losses with no clear path to recoup.

enshittification isn't unique to ai at all.

[–] Valmond@lemmy.world 3 points 2 days ago

Just a note here: everyone was freaking out about AI => AGI like ten years ago (Superintelligence by Nick Bostrom came out in 2015, IIRC); with LLMs it just hit the masses.

Autocorrect still can't figure out what I want to type...

IMO it's democracy and equality that we need to focus on, not some tech bros dream.

[–] Grogon@lemmy.world 3 points 2 days ago

If I was AGI I'd write this comment too.

Just kidding lol.

[–] magnetosphere@fedia.io 6 points 2 days ago (1 children)

I’m not worried about AGI because I don’t think we’re anywhere near that level of complexity or programming skill yet. If we ever do get it right, there will be numerous false starts before then.

LLMs are stupid and lack basic common sense. They can’t even take fast food orders reliably.

The hype will continue, but people are seeing through it more and more. What’s currently marketed as “AI” can’t live up to its promises.

[–] gandalf_der_12te@discuss.tchncs.de 1 points 1 day ago (1 children)

I’m not worried about AGI because I don’t think we’re anywhere near that level of complexity or programming skill yet. If we ever do get it right, there will be numerous false starts before then.

you're the type of person who navigates the titanic and sees the iceberg, but does not panic because the ship is unsinkable. the iceberg comes closer and closer, and you stay calm and say "don't worry, it can't hurt us". until you hear the screeching, and then you panic. "if only we had seen it coming sooner, we could have taken preventive measures", the crew is going to say.

[–] magnetosphere@fedia.io 1 points 1 day ago (1 children)

Thank you, internet stranger, for letting me know what type of person I am.

[–] gandalf_der_12te@discuss.tchncs.de 2 points 1 day ago (1 children)

damn it, i can't come up with a good response to this

[–] magnetosphere@fedia.io 1 points 1 day ago

lol if you’re like me, you’ll think of something hilarious while you’re in the shower, and then forget it by the time you get to your phone

[–] Feyd@programming.dev 6 points 2 days ago
  1. AI is an extremely overloaded term. We use it to mean any kind of machine learning (most of which has been around a while), generative AI, and even things like NPC logic in video games
  2. All the fear mongering you've heard about AGI/super intelligence is based on LLMs. LLMs just process language based on statistical probabilities from their training data and aren't anywhere near close to "understanding" in the sense that you might think of it.
  3. LLMs don't seem like the path to AGI, so I sincerely doubt we're anywhere close.
  4. LLM based tech isn't even that good at replacing workers for the things it kind of sort of works for. Companies keep trying to replace call center staff with LLM based systems and walking it back after it sucks. Fast food restaurants are trying but people hate it for much the same reasons, so we'll see how it goes.
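To make point 2 concrete, here's a toy sketch (in Python, and nothing like a real transformer) of what "predicting language from statistical probabilities in the training data" means at its simplest: a bigram model that only counts which word tends to follow which, with the corpus and function names invented for illustration:

```python
# Toy illustration, NOT a real LLM: a bigram model that "predicts" the
# next word purely from co-occurrence counts in its tiny training data.
from collections import Counter, defaultdict

# Hypothetical miniature training corpus.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on"
```

Real LLMs do this over tokens with billions of learned parameters instead of raw counts, but the core operation is the same kind of "most plausible continuation" statistics; there's no step where the model checks whether the continuation is true.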

Tldr: AGI being on the horizon and AI destroying the job market is all marketing to make people think AI is more capable than it is. It is good for some things, but not for everything under the sun like they'd have you believe. The narrative that AI is capable enough to replace humans is especially insidious because that is many CEOs' fondest wish, so many of them will drink the Kool-Aid, try, and fail.

Tldr tldr: don't panic it's not as big a deal as they'd have you believe.

[–] MagicShel@lemmy.zip 6 points 2 days ago (1 children)

There is absolutely no sign that AGI is anything approaching reality. LLM technology is a dead end for achieving it. 80% of jobs absolutely will not be gone in ten years. I would be shocked if it were as high as 10%. Productivity will increase in certain fields and that will have a trickle down effect where jobs are slightly reduced but never eliminated. I wouldn't expect AI to be any more disruptive than when PCs or Internet became ubiquitous.

[–] Valmond@lemmy.world 4 points 2 days ago

Well tell graphic designers that 😋

But you're right, most jobs are just fine.

Democracy and equality though...

[–] Kolanaki@pawb.social 1 points 1 day ago* (last edited 1 day ago)

I think an actual superintelligence would fix a lot of the shit that the stupid version of AI we have now has fucked up and continues to fuck up, among other things caused by massive stupidity.

Too bad we are nowhere near such a thing.

[–] gandalf_der_12te@discuss.tchncs.de 1 points 1 day ago* (last edited 1 day ago)

I know this sounds crazy, but I really think that by 2030 we will have 80% of jobs replaced by AI, and the new jobs that might evolve out of the jobs that have vanished will only be doable by AGI/superintelligence.

I don't think we'll lose 80% of jobs by 2030, but 50% of jobs by 2035 is realistic.

I really think I will be f*cked up in 5 years. A job is annoying, but it's also a human's purpose: a way to express yourself and be useful to society. If this is taken away, we will find 95% of humanity in a huge depression, and suicide rates are going to rise by 2030.

But you're spot on with this. Humans earn wages because companies need humans to do things. If companies can do without humans, they won't see the need to pay wages. An economic crisis ensues, people lose hope and purpose in life, and that creates a large-scale depression that affects all of humanity.

One of the ways out is to give people resources so that they at least have an income (Universal Basic Income). It does not solve the "purpose" crisis, though. The best thing would be if people had fewer children, so there would be fewer workers and they would have an easier time finding jobs.

[–] BonesOfTheMoon@lemmy.world 1 points 2 days ago

All it seems to do is create dumb shit.

[–] brucethemoose@lemmy.world 1 points 2 days ago

"AI" are still tools.

The issue is that their underlying technology, as of now, is way more fundamentally limited than 'Tech Bro' types will tell you. Don't get me wrong, they're neat tools, but they are fundamentally incapable of taking over intricate decision-making processes. They're just a layer of human assistance and automation.

I'm as big of a local LLM enthusiast as you'll find, and I'm telling you: the AGI scaling acolytes are full of shit, and the research community knows it.

Imagine finding out that you won't be able to pay off your debt because most fast-food restaurants will use AI/bots that can serve, prepare, and clean 24/7, while a "useless" human needs breaks, wants money, needs days off, and can only work 8-hour shifts.

This sucks.

...But honestly, in the long run, it's not so bad. Working fast food sucks and it would be great if people could do something else instead.


As a little silver lining, there's a good chance 'AI,' as it is now, is going to 'race to the bottom,' and a lot of the heavy lifting will be done on your phone or some computer you own. So you'll have a little assistant to help you with stuff, self-hosted, not corporate-cloud-controlled. Think Lemmy vs. Reddit in that regard.

[–] pinball_wizard@lemmy.zip 1 points 2 days ago* (last edited 2 days ago)

Marvel's X-Men comics got it exactly right.

(Ignoring, for this conversation, that current generations of AI are rubbish.)

Nimrod, trained on all of the knowledge of humanity, was able to plan millions of ways to defeat any of the X-Men who he calculated had any chance to defeat him.

Spoiler for Nimrod Story Arc

... and Nimrod got killed by a black woman (Storm) who he had confidently calculated was no threat to him.

At first it seems surreal that an ascendant AI could underestimate an Omega-Level Mutant, but we've actually seen plenty of real-life evidence that AI has a bias toward routinely making mistakes that are obvious to a human.

The key insight is that infinite computer cycles can do lots of impressive things, but will always have critical flaws and weaknesses.

Computers are great at striving for perfect, and shit at achieving it.

Nimrod spoiler: In this case, Nimrod inherited a crucial blind spot from a common bias (underestimating the skill sets of women and various minorities) in the data he based his calculations on.

[–] Valmond@lemmy.world 1 points 2 days ago

Imagine a Boeing going at the speed of flight.

That would terrify... Airbus?