this post was submitted on 21 Mar 2025
375 points (93.9% liked)

Greentext


This is a place to share greentexts and witness the confounding life of Anon. If you're new to the Greentext community, think of it as a sort of zoo with Anon as the main attraction.

Be warned:

If you find yourself getting angry (or god forbid, agreeing) with something Anon has said, you might be doing it wrong.

 
you are viewing a single comment's thread
[–] dustyData@lemmy.world 2 points 1 day ago (1 children)

Then stop dismissing other people's “argumentets”. Unfortunately, most AI proponents don't realize that the use case being pushed by AI's makers and owners is not “a tool to assist users” but “a tool for executives to replace humans”. Is it a dumb proposal? Absolutely. That doesn't reduce the moral responsibility of those promoting AI. They are supporting the destruction of people's livelihoods to make the wealthiest human beings in history slightly wealthier, condemning knowledge workers to poverty just as factory workers were in their time, by the exact same political and economic class of soulless pricks.

[–] uranibaba@lemmy.world 0 points 1 day ago (1 children)

If the argument want as you have laid it out, I would not dismiss it. But I cannot do that when the arguemnt is ”hammers in general are bad because I cannot use them to drive to work” or ”also your essay fucking sucks. learn to put together a coherent thought instead of relying on a glorified autocorrect that doesn’t have them at all to do it for you”. That second one is an actual quote.

What you bring up is how a few people in power are using AI to increase their wealth without regard for human suffering. I agree that what they are doing is wrong, and the discussion should be about how AI affects our society, how it is used, and who controls it. That does not make AI a bad tool; it makes it a tool that can be used in a bad way to cause a lot of harm.

[–] dustyData@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

"If the argument want as you have laid it out, I would not dismiss it."

Your autocorrect software is failing you.

What is your argument? It is OK for a few people to hurt others, since you personally are benefiting, in a very small way, from the cruelty? That's a shit argument to make.

If AI is "just a tool", then how come it doesn't do any of the things it is promised to do? The issue is not expecting "a hammer to drive to work". The problem is that LLM makers promised a car; you order one and receive a screwdriver in the mail, because "screwdrivers are just a tool, you can use one to assemble a car". It's a scam, it is fraud, it is lying and stealing from others to capitalize on bad tech.

If AI is just a tool, it's an unethical and immoral tool.

[–] uranibaba@lemmy.world 0 points 1 day ago* (last edited 1 day ago) (1 children)

I’m trying to say that one should call a fraud a fraud, not a bad screwdriver.
You want to have a discussion about the fraud? Don’t say that the screwdriver sucks; say that it did not do what it was advertised to do.

It is OK for a few people to hurt others, since you personally are benefiting, in a very small way, from the cruelty?

That is not what I tried to say.

[–] dustyData@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Want to discuss the technical qualities of napalm?

I mean, it's obviously not a "bad" tool of warfare, it's just the bad people who use it. It obviously is a separate issue from war crimes committed with it, there's nothing inherently wrong with napalm.

This idea that technology must be evaluated in a vacuum, disconnected from the context that created it and uses it, is disingenuous at best and malicious at worst. Technologies, tools, and inventions carry with them the moral and ethical burden of their historical context.

LLMs as we know them today came to be from massive theft, and they continue to promote their own use and improvement through further thievery, fraud, and lies. The fraud is not a separate topic; it is intrinsically a part of LLMs. To speak of LLMs is to speak of fraud, copyright infringement, and theft. They cannot exist without theft, at least not at their current level of prowess and use. To defend them is to promote corporate crime on a billion-dollar, worldwide scale. This is just one of the many axes of analysis that conclude with "AI is a bad tool, actually".

[–] uranibaba@lemmy.world 0 points 1 day ago (1 children)

If you want to shit on OpenAI because you think they stink, that is fine. If you want to shit on their LLM because you think it sucks, that is also fine. But don't say that the concept of LLMs sucks because OpenAI, their product, or their marketing of said product is bad. Say that LLMs suck because there is no regulation and companies are using them to hurt people. Say that OpenAI's LLM sucks because they took from the people to create it without giving back. Don't say that OpenAI's LLM cannot produce good output because they stole, because the output is good; stealing is still wrong. Don't say that someone else is worse at what they are doing because they used an LLM. Say that they are worse at what they are doing because (and if) they cannot do it without an LLM.

It would be a nice discussion if you did not try to argue in bad faith. Of course I don't want to compare something that is meant to kill with something that could only kill if you really tried. Should we ban ropes because you can strangle people with them? Of course not; that's stupid and you know it.

[–] dustyData@lemmy.world 1 points 1 day ago (1 children)

But, the output is trash. Only incompetent people think LLMs produce great results. They don't.

[–] uranibaba@lemmy.world 0 points 1 day ago (1 children)

That is your opinion and that is fine. Just don't attack people because they don't agree, and don't argue that LLMs are bad because they can't do what they are not made to do. It is okay to say that ChatGPT is shit because you can't use it as a calendar, if OpenAI is trying to market it that way. But that is not LLMs in general.

[–] dustyData@lemmy.world 1 points 1 day ago (1 children)

And there it is. Wants to have a discussion. Dismisses all arguments instead of tackling them head on. It's not just my opinion. It is the opinion of the vast majority of people, due to a myriad of reasons I already explained. That you just refuse to see them is the problem with AI bullshitters here on Lemmy. You are the one arguing in bad faith.

[–] uranibaba@lemmy.world 1 points 1 day ago (1 children)

I am dismissing invalid arguments; that is what started all this. AIs are not calendars and should not be used as such.

[–] dustyData@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

No one ever said they were. You constructed that straw man because you can't tolerate the idea that most people think AI is bad. It's not just an opinion. It's a widely popular opinion supported by a ton of evidence, tons of logical and reasonable arguments, and well documented. I provided at least 4 different arguments and your response to all of them was "yes, but I don't want to talk about it". So, you know I'm right yet refuse to acknowledge it because it hurts your ego so much that you feel the need to defend it on an internet forum.

All of which makes me return to the beginning. You're not smart enough to have a grown up conversation about AI without its assistance. So I will now stop providing arguments that you don't want to hear, as obviously the only thing you want to hear is how great AI is. Unfortunately, AI bad.

[–] uranibaba@lemmy.world 1 points 23 hours ago* (last edited 22 hours ago) (1 children)

No one ever said they were

https://lemmy.world/post/27126654/15901324

literally can’t do what a calculator can do reliably. or a timer. or a calendar.

Edit to add: For the record, I am very interested in your arguments and would love to read the reports that have come to the conclusion that LLMs produce bad output. That's news to me (or I should say, that a good prompt produces bad output; and what is considered bad, and why?). So if you have a link to a report or something similar, please share. But don't claim that I am trying to construct a strawman when THE VERY FIRST argument provided to me in this very comment chain was what I have talked about all along.

Edit 2: Here is the personal attack, the other point I disagree with: https://lemmy.world/post/27126654/15901907

[–] dustyData@lemmy.world 1 points 20 hours ago (1 children)

See, that's your problem. You're arguing, with me, about something that was said to you by someone else. Do you realize why I'm questioning your argumentative skills?

Here's a source to a study about AI's accuracy as a search engine. The main use case proposed for LLMs as a tool is indexing a bunch of text, then summarizing and answering questions about it in natural language.

AI Search Has A Citation Problem

Another use is creating or modifying text based on an input or prompt, however, LLMs are prone to hallucinations. Here's a deep dive into what they are, why they occur and the challenges of dealing with them.

Decoding LLM Hallucinations: A Deep Dive into Language Model Errors

I don't know why I even bother. You are just going to ignore the sources and dismiss them as well.

[–] uranibaba@lemmy.world 0 points 19 hours ago

See, that’s your problem. You’re arguing, with me, about something that was said to you by someone else. Do you realize why I’m questioning your argumentative skills?

I'm sorry? You came to me.

Here is how I see it:

  1. Someone compared AI to calculator/calendar
  2. I said you cannot compare that
  3. You asked why I even argue with the first person
  4. I said that I want a better discussion
  5. You said that I should stop dismissing other people's arguments
  6. I tried to explain why I don't think it is a valid argument to compare LLM to "calculator can do reliably. or a timer. or a calendar."
  7. You did not seem to agree with me on that from what I understand.
  8. And now we are here.

--

Here’s a source to a study

I don't have the time to read the articles now, so I will have to do it later, but hallucinations can definitely be a problem. Asking for code is one such situation, where an LLM can just make up functions that do not exist.
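As a hypothetical illustration of that failure mode (the hallucinated name below is made up for the example): an LLM might confidently emit a call like `json.load_string(...)`, which is not in the standard library. A cheap guard before trusting generated code is to check that the attribute actually exists:

```python
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """Return True if `module_name.attr` really exists -- a cheap sanity
    check against hallucinated API calls in LLM-generated code."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# A real stdlib function vs. a plausible-sounding invention:
print(api_exists("json", "loads"))        # True
print(api_exists("json", "load_string"))  # False: hallucinated name
```

This only catches nonexistent names, of course; a function that exists but is called with the wrong arguments or wrong semantics still needs a human reading the docs.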