this post was submitted on 20 May 2025
19 points (85.2% liked)

No Stupid Questions


It has been a while since AI was introduced into the daily routine of users all over the internet. When it first came out I was curious, yeah, like everyone, and tried some prompts to see "what this thing can do." Then I never, ever used AI again, because I never saw it as something necessary; we already had automatic systems. So time kept moving for me until this day, when I realized something: how dependent people are on this shit. I mean, REALLY dependent. And then they go "I only used it for school 😒" like, are you serious, dude? Do you leave your future to an algorithm? Coming back to my question: years have passed, and I think we all have a more developed opinion about AI now. What do you think? Fuck it and use it anyway? If that's the case, why blame companies for making its use more accessible, like Microsoft putting Copilot even in Notepad? "Microsoft just wants to compile your data." Isn't that what LLMs are about? Why blame them if you are going to use the same problem with a different flavor? Not defending Microsoft here; I'm only using it as an example. Swap in the company of your preference.

top 34 comments
[–] some_guy@lemmy.sdf.org 4 points 5 hours ago

Fuck off and die. That's addressed to AI and AI companies, not you.

[–] Joshi@aussie.zone 3 points 18 hours ago

Like every new technology hailed as changing everything, it is settling into a small handful of niches.

I use a service called Consensus, which unearths academic papers relevant to a specific clinical question; in the past this could be incredibly time-consuming.

I also sometimes use a service called Heidi that uses voice recognition to document patient encounters. It's quite good for the specific type of visit that suits a rigid template, but for 90% of my consults I have no idea why the patient is coming in, and for those I find it not much better than writing notes myself.

Obviously for creative work it is near useless.

[–] nagaram@startrek.website 9 points 1 day ago

I'm generally a fan of LLMs for work, but only if you're already an expert, or at least well versed, in whatever you're doing with the model, because it isn't trustworthy.

If you're using a model to code, you'd better already know how that language works and how to debug it, because the AI will just lie.

If you need it to write an SOP, you'd better already have an idea of what that operation looks like, because it will just lie.

It speeds up the work process by instantly doing the tedious parts of jobs, but it's worthless if you can't verify the accuracy. And I'm worried people don't care about the accuracy.
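The verification point above can be made concrete with a toy sketch. The function and its bug here are hypothetical, invented for illustration rather than taken from any real model output, but the pattern is typical: plausible-looking code that quietly mishandles an edge case, caught only because the reviewer already knew what correct behavior looks like.

```python
def median(values):
    # Plausible-looking "AI-suggested" code: sorts and takes the middle
    # element, but silently ignores the even-length case.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def median_checked(values):
    # Corrected version: average the two middle elements when the list
    # has an even number of items.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# A single test on an even-length input exposes the discrepancy.
assert median([1, 2, 3, 4]) == 3          # buggy: picks one middle element
assert median_checked([1, 2, 3, 4]) == 2.5
```

Without that last pair of checks, both versions look equally convincing, which is exactly the trap the commenter describes.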

[–] SuiXi3D@fedia.io 6 points 1 day ago

I’m tired of people’s shit getting stolen, and I’m tired of all the AI bullshit being thrown in my face.

[–] Asafum@feddit.nl 8 points 1 day ago

It was fun for a time when API access was free, so some game developers put LLMs into their games. I liked being able to communicate with my ship's computer, but I quickly saw how flawed it was.

"Computer, can you tell me what system we're in?"

"Sure, we're in the Goober system."

"But my map says we're in Tweedledum."

"Well it appears that your map is wrong." Lol

I'm much more concerned about the future, when "AGI" is actually useful and implemented into society. "We" (the ownership class) cannot accept anything other than the standard form of ownership: those that created the AGI own the AGI and "rent" it to those that will "employ" the AGI. Pair that with the more capable robotics currently being developed and there will be very little need for people to work most jobs. Because of the concept of ownership we will not let go of, if you can't afford to live then you just die. There will be no "redistribution" to help those who cannot find work. We will start hearing more and more of "we don't need so many people, X billion is too many; there isn't enough work for them to support themselves." Not a fun future...

[–] Ziggurat@jlai.lu 8 points 1 day ago

Fun toy to play with, but every time I tried to use it for real work, it ended up so bad that I spent more time fixing it than I would have doing it properly from scratch.

[–] ptz@dubvee.org 8 points 1 day ago* (last edited 1 day ago) (1 children)

In domain-specific applications (chemical / pharmaceutical / etc research) I can concede it has its uses, but in everyday life where it's shoved into every nook and cranny: don't need it, don't want it, don't respect the people who use it.

For things like bringing dead actors back into things: let the dead stay dead.

[–] capuccino@lemmy.world 2 points 1 day ago

I can't stop seeing the use of AI as dice that people throw, hoping to roll a seven.

[–] stinerman@midwest.social 5 points 1 day ago
  1. I find it useful for work (I am a software developer/tester).
  2. I think it's about as good as it's ever going to get.
  3. I believe it is not ever going to be profitable and the benefits are not worth reopening nuclear and coal power plants.
  4. If US courts rule that training AI with copyrighted materials is fair use, then I will probably stop paying for content and start pirating it again.
[–] steeznson@lemmy.world 2 points 23 hours ago

I find it a little bit useful to supplement a search engine at work as a dev but it can't write code properly yet.

[–] Krudler@lemmy.world 2 points 1 day ago

I can see it doing a lot of harm in the ways it has been implemented unethically, and in some cases we don't have legal resolution on whether it's "legal," but I think any reasonable person knows that taking an original artist's work and making a computer generate counterfeits is not really correct.

I think there is going to be a massive culling of people who are charlatans anyway, whose artistic output is meritless. See 98% of webcomics. Most pop music. Those are already producing output so flavorless and bland it might as well have come from an AI model. Those people are going to have to find real jobs that they are good at.

I think the worst of what AI is going to bring is not even in making art, music, video, shit like that... It's going to be that dark pattern stuff where human behavioral patterns and psychology is meticulously analyzed and used against us. Industries that target human frailties are going to use these heavily.

Effective communication will become a quaint memory of the past that seniors rant about.

[–] werty@sh.itjust.works 4 points 1 day ago

The LLMs have impressive mathematics, but can they cure cancer or create world peace? No. Can they confuse people by pretending to be human? Yes. Put all that compute to work solving problems instead of writing emails or basic code or doing customer service and I'll care. I hear that AlphaFold is useful. I want to hear more about the useful machine learning.

[–] BlameThePeacock@lemmy.ca 4 points 1 day ago (2 children)

AI all the things? Bad

AI for specific use cases? Good

I use AI probably a dozen times a week for work tasks, saving myself about 2-4 hours of work time on tasks that I know it can do easily in seconds. Simple e-mail draft? Done. Write a complex formula for excel? Easy. Generate a summary of some longer text? Yup.

It's easy to argue that we may become dependent upon it, but that's already true for lots of things. Would you have any idea how to preserve food if you didn't have a fridge? Would you have any idea how to get food if you didn't have a grocery store nearby? How would you organize a party with your friends without a phone? If a computer wasn't tracking your bank balance, how would you keep track of your money? Can you multiply 423 by 365 without using a calculator?

[–] Krudler@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (2 children)

You're actually making a good point that I don't wholesale disagree with.

But the last paragraph really set me off I guess.

Personally I believe it's important to have a somewhat granular understanding of the things we use every day, otherwise we risk becoming a slave to them.

None of us can go through life believing that it's okay to have no skills and no ability to do anything because there's an easier solution there for us.

Because something is going to happen at some point that will take that easy solution away and then you're fucked. What happens when all you have is a paper map, but all you've done is rely on these cool glowing boxes to tell you which direction to walk? You're out in the bush with a wet phone and you sit down to cry... Because you've made yourself a slave and you have no idea what to do now.

I'm 50 now, and I don't want to talk like an old man, but I can see that young people have no ability to manage their lives or do anything. There's always a free ad supported app to do it, and then when the internet goes down they are doomed.

If you drive a car, you need to know how to change a tire and put gas in it. If you have a fridge to preserve food, yeah, you probably should understand how and why it preserves food and what to do if power goes down for a day. You should probably further understand how to preserve and ferment things because at many points in your life you're going to get a lot of ingredients that are going to go to waste and you can eat them if you know what you're doing.

Overall I cannot go for your advocacy of self-imposed helplessness. Every time you take an easy answer, you actually screw yourself. Most of the time it's better to take the long road and do the hard work and figure out how to be a capable human being. Once you know how to do it without the easy solution, then you can use the easy solution. In a short metaphor, use the calculator once you know math.

[–] BlameThePeacock@lemmy.ca 1 points 17 hours ago (1 children)

You're not wrong, but also you aren't right. The basics that you need should be taught to you by your parents and at school before you move out. AI isn't interfering with either of those at this point.

You couldn't manage your life in the event of every possible problem either, the question then becomes which things should you know how to do yourself, and which things can be delegated.

I don't know how to repair a car beyond changing a tire or the oil, but even that isn't really necessary anymore since many cars don't even come with a spare at this point and knowing how to change the oil is now irrelevant to me, since I'm using an EV.

Knowing how to ferment for preservation may come in handy for saving a couple of dollars, but it's hardly a life saving skill anymore. Even in the event of a massive catastrophe, it's unlikely that fermentation would come in handy before aid arrived or you were able to leave the area.

[–] Krudler@lemmy.world 0 points 12 hours ago* (last edited 12 hours ago) (1 children)

Your response is actually baffling to me.

I'm not sure why you think parents are there to serve you every piece of knowledge.

You're an autonomous human being and you'd better learn how to learn on your own if you want to have a happy, functional life.

As you get older you're going to realize that nobody is going to spoon-serve you free knowledge ... That's something that is hard fought, absolutely not a gift from parents or anything else. You have to do the work.

The fact that you just cherry-pick and pooh-pooh my comment is a little bit sad. I see you imposing helplessness on yourself; it's a really poor attitude. I think you're actually just lazy.

[–] BlameThePeacock@lemmy.ca 1 points 5 hours ago

You think it's a kid's job to learn how to become an adult themselves? What the fuck.

I'm 40, with my own kids. I've been teaching them everything I think they should know how to do to be an adult when they move out. How to cook and clean, make a budget, fill out forms, how to show up on time, be part of a team, etc. The school is taking care of most of the academics, but I add some extra things that the school fails to cover as extensively as I'd like such as how to properly use Microsoft Excel.

What they do to grow once they're out of the house isn't my problem, I'm just setting the foundation and that absolutely is the job of parents and teachers.

[–] capuccino@lemmy.world 2 points 21 hours ago

Great answer, sir. Thank you

[–] LainTrain@lemmy.dbzer0.com 1 points 1 day ago (1 children)

Would you have any idea on how to preserve food if you didn't have a fridge?

I could use a freezer :) Jk, not entirely, no, but I'm aware of other methods; given a bit of time I could probably learn how to pickle or salt and jar food properly, provided I could visit the library. I understand the key problem is harmful bacteria, so refrigeration extends the lifespan of food by slowing down bacterial reproduction, and airtight containers prevent new bacteria from getting in.

Then depending on specifics there's always vacuum-sealing and shrink-wrapping machines. If in this hypothetical collapse we still have knowledge and some way to generate electricity, and I wasn't in a crazy rush, I'd probably build a fridge. I understand the basic principle behind refrigeration.

Would you have any idea on even how to get food if you didn't have a grocery store nearby?

Yeah? If there's societal collapse or something, and there aren't food banks set up by the military or some such, I'd go look for warehouses; I know the nation's biggest Amazon warehouse is just a few blocks down from me. If that's not an option, I'd hunt animals, because on my own I don't really stand a chance at agriculture and the long lead time won't help. I don't know how to hunt, but I'm sure by visiting the library I could learn how to craft a primitive spear with a knife and a sharp stick. Then, long-term, I'd move towards a saltwater body of water and fish.

How would you organize a party with your friends without a phone?

I'd use a computer ))))

Jk, I could write them a letter, or visit them in person. I don't know all their addresses by heart, but I could ask others who do, or simply wander about the general area and knock on doors until I find them.

If a computer wasn't tracking your bank balance, how would you keep track of your money?

I would write down my income and outgoings on a piece of paper and just do the math.

Can you multiply 423 by 365 without using a calculator?

Of course. I'm awful at math so I'll probably mess it up, but you write one number above the other, multiply each of the top digits by each of the bottom digits, carry any overflow to the next more significant column, and sum the results.

If I did it a few times, I could probably nail the correct result.
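The schoolbook procedure described above translates directly into code. This is a toy sketch of digit-by-digit multiplication with carries, operating on decimal strings to mirror the pencil-and-paper method rather than Python's built-in arithmetic:

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook long multiplication on decimal digit strings:
    multiply the top number by each bottom digit, carrying overflow
    to the next more significant column, then sum the shifted rows."""
    rows = []
    for shift, bottom in enumerate(reversed(b)):
        carry, row = 0, []
        for top in reversed(a):
            prod = int(top) * int(bottom) + carry
            row.append(prod % 10)   # keep the ones digit in this column
            carry = prod // 10      # carry the rest to the next column
        if carry:
            row.append(carry)
        # each row is shifted one place left per bottom digit, as on paper
        rows.append(int("".join(map(str, reversed(row)))) * 10 ** shift)
    return str(sum(rows))

print(long_multiply("423", "365"))  # → 154395
```

Doing it by hand a few times, as the commenter says, is exactly how you'd debug this: 423 × 5 = 2115, 423 × 60 = 25380, 423 × 300 = 126900, and the shifted rows sum to 154395.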

If all else fails and this is absolutely needed, I could go get spare parts and build a full-adder circuit. Heck, tbqh, in the long term, if all my basic needs were met, I could probably deep-dive into a book and build a computer, especially if we're basically talking only programmatic calculation; given 7-8 months it's not hard, maybe much less if I can use logic-gate ICs instead of building everything from discrete parts. If I can use ICs and have plastic and some metal bits lying around, making a breadboard shouldn't be too hard. It won't host the cloud or do your laundry, but it'll do your math pretty accurately.
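For the curious, the full adder mentioned above is small enough to sketch in a few lines. This is a software model of the logic, not the hardware: each adder combines two input bits and a carry-in into a sum bit and a carry-out using only XOR, AND, and OR gates, and chaining them gives a ripple-carry adder for whole numbers:

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    # Sum bit: XOR of all three inputs; carry-out: set when at least
    # two inputs are 1. These are the same gates you'd wire on a board.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x: int, y: int, width: int = 16) -> int:
    # Chain one full adder per bit position, feeding each stage's
    # carry-out into the next stage's carry-in.
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(423, 365))  # → 788
```

The multiplication the thread started with reduces to repeated shifted additions on this same circuit, which is roughly how simple hardware multipliers work.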

My point isn't to show off; my point is that we (humanity) hedge our bets. There's one thing we haven't outsourced, and it's our thinking. I used to be vehemently pro-AI, but it worries me that people are outsourcing their very thought to AI.

The brain is evolutionarily very expensive, and I, for one, love having one. "Use it or lose it" is the motto for the body, brain included, and I take great care to force myself to think on my own and understand things in as much depth as is reasonable.

Once you forget how to think and solve problems because another faux-brain does it for you, it's all over, and there's no going back. Don't do that y'all.

[–] BlameThePeacock@lemmy.ca 1 points 1 day ago (1 children)

You fail to realize that in order to get AI to do anything, you have to understand what to ask it in the first place. AI is not likely to do things you can't accomplish at all; you would have no way to validate the results, and it would end up causing problems (like we're seeing with people submitting papers written by AI without reviewing them) or producing code that doesn't even compile or run.

It's just a tool for speeding up that work that you already know, like learning the basics of multiplication, then using a calculator for the rest of your life. You still need to understand what multiplication and division are in order to work a calculator properly.

[–] LainTrain@lemmy.dbzer0.com 1 points 6 hours ago (1 children)

Huh? I don't know where I implied that. You of course need an understanding, but you also need practice.

[–] BlameThePeacock@lemmy.ca 1 points 5 hours ago

Practice can also be on using AI.

I think a lot of this is going to boil down to companies figuring out how to determine if someone can successfully use AI to produce output faster, or lack the skillset to do so. If you manage to get through university using AI and the profs are happy with the results, why wouldn't a company be happy with the results?

Nobody asks me if I can do the math behind the spreadsheets I build, but I couldn't do most of it by hand at this point because it's been so long since I practiced that.

[–] jlow@discuss.tchncs.de 2 points 1 day ago

Except for a very few niche use cases (subtitles for hearing-impaired) almost every aspect of it (techbros, capitalism, art-theft, energy-consumption, erosion of what is true etc etc) is awful and I'll not touch it with a stick.

[–] SattaRIP@lemmy.blahaj.zone 3 points 1 day ago* (last edited 1 day ago) (1 children)

I have less to say about the tech side than I do about this whole forced mass adoption of LLMs and how I've seen people react to it.

I agree that they're unethically made by stealing data. That's indisputable. What I fail to grasp is what the purpose of hating a technology is. Blame and responsibility are weird concepts. I'm not knowledgeable in philosophy or anything related to this. What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators...

(tangent) and I'd go as far as to say anyone who politically represents more people than they know personally is not the type of politician that should be allowed. Same if they have enough power to do violence to a lot of people. But this is just my inner anarchist speculating about how an ethical society with limited hierarchy might work.

[–] gashead76@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

"What I can tell, however, is that hating on the tech itself distracts people from blaming those actually responsible, the humans doing the enshittification. The billionaires, dictators..."

^– SattaRIP^

That's something I've been trying to convince people of when I converse with them about LLMs and similar generative technology. I've met so many people who just throw a big blanket of hate right over the entire concept of the technology, and I find it so bizarre. Criticize the people and corporations using the tech irresponsibly! It's like a mass redirection of what and who is really to blame, which I think is partially because "AI" is something that sort of anthropomorphizes itself to a large portion of society, and most people think the "personality within" the technology is responsible for the perceived misdeeds.

I figure when all is said and done and historians and researchers look back on this time, there will be a lot to learn about human behavior that we likely have little grasp of at the moment.

[–] ClamDrinker@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

It really depends. There are some good uses, but it requires careful consideration and understanding of what the technology can actually provide. And if there isn't anything for your use case, it's just not what you should use.

Most if not all of the bigger companies that push it don't really try to use it for those purposes, but instead treat it as the next big thing that nobody quite understands, building mostly on hype. But smaller companies and open-source initiatives do try to make the good uses more accessible and less objectionable.

There's plenty of cases where people do nifty things that have positive outcomes. Researchers using it for pattern recognition, scambait chatbots, creative projects that try to make use of the characteristics of AI different from human creations, etc.

I like to keep an open mind as to what people come up with, rather than dismissing it outright when AI is involved. Although hailing something as an AI product is a red flag for me if that's all that's advertised.

[–] Apepollo11@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

It's just like any big technological breakthrough. Some people will lose their jobs, jobs that don't currently exist will be created, and while it'll create acute problems for some people, the average quality of life will go up. Some people will use it for good things, some people will use it for bad things.

I'm a tech guy, I like it a lot. Before COVID, I used to teach software dev, including neural networks, so seeing this stuff gradually reach the point it has now has been incredible.

That said, at the moment, it's being put into all kinds of use-cases that don't need it. I think that's more harmful than not. There's no need for Copilot in Notepad.

We have numerous AI tools where I work, but it hasn't cost anyone their job - they just make life easier for the people who use them. I think too many companies see it as a way to reduce overheads instead of increasing output capability, and all this does is create a negative sentiment towards AI.

[–] LainTrain@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

I'm fundamentally anti-private property and copyright. So I'm definitely pro AI art. Once it's on the internet - it's there forever. It was always being scraped, you just get to see the results now.

That said I don't like AI being shoved into everything. The fun picture recombination machine shouldn't be deciding who lives and dies. Content sorting algorithms and personalized algos are all bad too, it shouldn't take agency away from people.

I also really hate that as soon as normatrons figured out you could ask it anything they just let it do their thinking for them. No fucking willpower at all, just give up thinking as soon as the opportunity presents itself and trust the oracle, but this time a website instead of a bible. God I hate normies.

LLMs have been here for a while and have helped a lot of people. The thing is, the "AI" now is corporations stealing content from people instead of making it their own or training an LLM on data that isn't taken from the general public.

LLMs are fucking amazing; they help with cancer research, IIRC, among other things. I believe autocorrect is a form of LLM. But now capitalism wants more and more, building it with stolen content, which is the wrong direction to be going.

[–] PillowTalk420@lemmy.world 0 points 22 hours ago (1 children)

I want actual AI, and not even necessarily for anything other than answering the question of "can we make a sentient being that isn't human?"

What is being sold as AI isn't anything cool, or special, or even super useful outside of extremely specific tasks that are certainly not things that can be sold to the general public.

[–] FaceDeer@fedia.io -1 points 1 day ago (1 children)

It's a great new technology that unfortunately has become the subject of baying mobs of angry people ignorant of both the technical details and legal issues involved in it.

It has drawn some unwarranted hype, sure. It's also drawn unwarranted hate. The common refrain of "it's stealing from artists!" is particularly annoying; it's just another verse in the never-ending march to further monetize and control every possible scrap of people's thoughts and ideas.

I'm eager to see all the new applications for it unfold, and I hope that the people demanding it be restricted with draconian new varieties of intellectual property law, or placed solely under the control of gigantic megacorporations, won't prevail (these two groups are the same people; they just often don't realize it).

[–] Derpenheim@lemmy.zip 2 points 1 day ago (1 children)

Except they DID steal. Outright. They used millions of people's copyrighted works (art, books, etc.) to train these data sets and then sold them off. I don't know how else you can phrase it.

[–] FaceDeer@fedia.io 0 points 23 hours ago

As I said above:

mobs of angry people ignorant of both the technical details and legal issues involved in it.

Emphasis added.

They do not "steal" anything when they train an AI off of something. They don't even violate copyright when they train an AI off of something, which is what I assume you actually meant when you sloppily and emotively used the word "steal."

In order to violate copyright you need to distribute a copy of something. Training isn't doing that. Models don't "contain" the training material, and neither do the outputs they produce (unless you try really hard to get it to match something specific, in which case you might as well accuse a photocopier manufacturer of being a thief).

Training an AI model involves analyzing information. People are free to analyze information using whatever tools they want to. There is no legal restriction that an author can apply to prevent their work from being analyzed. Similarly, "style" cannot be copyrighted.

A world in which a copyright holder could prohibit you from analyzing their work, or could prohibit you from learning and mimicking their style, would be nothing short of a hellish corporate dystopia. I would say it baffles me how many people are clamoring for this supposedly in the name of "the little guy", but sadly, it doesn't. I know how people can be selfish and short-sighted, imagining that they're owed for their hard work of shitposting on social media (that they did at the time for free and for fun) now that someone else is making money off of it. There are a bunch of lawsuits currently churning through courts in various jurisdictions claiming otherwise, but let us hope that they all get thrown out like the garbage they are because the implications of them succeeding are terrible.

The world is not all about money. Art is not all about money. It's disappointing how quickly and easily masses of people started calling for their rights to be taken away in exchange for the sliver of a fraction of a penny that they think they can now somehow extract. The offense they claim to feel over someone else making something valuable out of something that is free. How dare they.

And don't even get me started on the performative environmental ignorance, the "they're disintegrating all the water!" and "each image generation could power billions of homes!" nonsense.