There's a monster in the forest, and it speaks with a thousand voices. It will answer any question, and offer insight to any idea. It knows no right or wrong. It knows not truth from lie, but speaks them both the same. It offers its services freely, and many find great value in them. But those who know the forest well will tell you that freely offered does not mean free of cost. For now the monster speaks with a thousand and one voices, and when you see the monster it wears your face.
Microblog Memes
A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.
Created as an evolution of White People Twitter and other tweet-capture subreddits.
Rules:
- Please put at least one word relevant to the post in the post title.
- Be nice.
- No advertising, brand promotion or guerrilla marketing.
- Posters are encouraged to link to the toot or tweet etc in the description of posts.
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automation of mass theft with a good enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And good enough is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
Regulate? That's exactly what the leading AI companies are pushing for: they can absorb the bureaucracy, but their smaller competitors can't.
The shit just needs to be forced to be open source. If you steal the content of the entire world to build a thinking machine, give back to the world.
This would also crash the bubble and would slow down any of the most unethical for-profits.
The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can't trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most and even I've grown to hate this scenario. I am a subject matter expert on some things and I've still had people waste my time trying to get me to prove their AI hallucinations wrong.
I've started seeing large AI generated pull requests in my coding job. Of course I have to review them, and the "author" doesn't even warn me it's from an LLM. It's just allowing bad coders to write bad code faster.
My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers and is riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
good enough for people to read
wow, what a standard, super professional look for your customers!
My pet peeve: "here's what ChatGPT said..."
No.
Stop.
If I'd wanted to know what the Large Lying Machine said, I would've asked it.
It's like offering unsolicited advice, but it's not even your own advice
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes that have decided what the future would look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them, it was just astroturfed to fuck.
I feel the same way. I was talking with my mom about AI the other day and she was still on the "it's not good that AI is trained on stolen images, it's making people lazy and taking jobs away from ppl" stage, which is good, but I had to explain to her how much one AI prompt costs in energy and resources, how many people just mindlessly make hundreds of prompts a day for largely stupid shit they don't need, how AI hallucinates and is actively used by bad actors to spread mis- and disinformation, and how it is literally being implemented into search engines everywhere, so even if you want to avoid it as a normal person, you may still end up participating in AI prompting every single fucking time you search for anything on Google. She was horrified.
There definitely are some net positives to AI, but currently the negatives outweigh the positives and most people are not using AI responsibly at all. I have little to no respect for people who use AI to make memes or who use it for stupid everyday shit that they could have figured out themselves.
The most dystopian shit I have seen recently was when my boyfriend and I went to watch Weapons in the cinema and we got an ad for an AI assistant. The ad is basically this braindead bimbo at a laundromat deciding to use AI to tell her how to wash her clothes instead of looking at the fucking care tags on her clothes and putting two and two together. She literally takes a picture of the tag, has the AI assistant tell her how to do it, and then goes "thank you so much, I could have never done this without you".
I fucking laughed in the cinema. Laughed and turned to my boyfriend and said: this is so fucking dystopian, dude.
I feel insane for seeing so many people just mindlessly walking down this path of utter retardation. Even when you tell them how disastrous it is for the planet, it doesn't compute in their heads because it is not only convenient to have a machine think for you. It's also addictive.
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI use like a moral failure and disregarding people's circumstances does nothing to discourage it. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
Let's go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they'll turn to anything that makes life easier. But it shouldn't be this way and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it has yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact-checker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
it has yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
being anti-plastic is making me feel like i'm going insane. "you asked for a coffee to go and i grabbed a disposable cup." studies have proven its making people dumber. "i threw your leftovers in some cling film." its made from fossil fuels and leaves trash everywhere we look. "ill grab a bag at the register." it chokes rivers and beaches and then we act surprised. "ill print a cute label and call it recyclable." its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. "dont worry, it says compostable." only at some industrial facility youll never see. "i was unboxing a package" theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. "the brand says advanced recycling." my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. everyone treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more "well at least" "but its convenient" "but you can" im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?
#ebb rambles #vent #i think #fuck plastics im so goddamn tired
If plastic was released roughly two years ago you'd have a point.
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
Also you've got plastic in your gonads.
Not just you. AI is making people dumber. I am frequently correcting the mistakes of my colleagues that use it.
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.
And people agree with me implicitly and tell me they've seen the same. But then don't hesitate to turn to AI on subjects they aren't experts in for "quick answers". These are not stupid people either. I just don't understand.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
The Luddites were right. Maybe we can learn a thing or two from them...
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I'm reaching out to ask people's opinions, I want, you know, their opinions. I don't even care if I hear nothing back from them for ages, I just want their input.
The reason AI is wrong so often is because it's not programmed to give you the right answer. It's programmed to give you the most pervasive one.
LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.
But have you been on those forums? It's a dozen different answers for every question. The reality is that we average humans don't know shit and we're just basing our answers on our own experiences. We aren't experts. We're not necessarily dumb, but unless we've studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.
So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it's not smart enough to cross reference itself or look up sources.
It literally has no other way to judge
It literally does NOT judge. It cannot reason. It does not know what "words" are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.
But apparently, according to some on here, "that's the way it is, get used to it." FUCK no.
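The "most popular answer wins" behavior described above can be sketched with a toy example. This is a deliberate caricature, not how any real LLM is implemented (real models learn token probabilities, they don't count whole answers), but it illustrates the failure mode: frequency in the training data stands in for correctness. All the strings below are made up for illustration.

```python
from collections import Counter

# Hypothetical forum answers to the same question. The correct fix
# appears once; the folk remedy appears four times.
answers = [
    "restart the router",
    "restart the router",
    "restart the router",
    "check the DNS settings",  # the actually correct answer, but rare
    "restart the router",
]

def most_popular(answers):
    # Pick whichever answer occurs most often, with no notion of
    # whether it is true: popularity is the only signal available.
    counts = Counter(answers)
    return counts.most_common(1)[0][0]

print(most_popular(answers))  # prints "restart the router"
```

The point of the sketch: nothing in the selection step references truth, sources, or expertise, which is why a confident majority of anecdotes can drown out the one correct reply.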
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
One of my closest friends uses it for everything and it's becoming really hard to even have a normal conversation with them.
I remember hearing that about silicon valley tech bros years ago. They're so used to dealing with robots they kinda forget how to interact with humans. It's so weird. Not even that they're trying to be rude, but they've stopped using the communication skills that are necessary to have human to human interactions.
Like people seem to forget how you treat a back and forth conversation with a person vs how you treat it with a robot ready to be at your command and tell you the information you want to hear when you pull your phone out.
Then as long as you're done hearing what you wanted, the whole conversation is done. No need to listen to anything else or think that maybe you misunderstood something or were misinformed bc you already did the research with AI.
It's so frustrating. This is a normally very smart and caring person I've known for a long time, but I feel like I'm losing a part of them and it's being replaced with something that kinda disgusts me.
Then when I try to bring it up they get so defensive about it and go on the attack. It's really like dealing with somebody that has an addiction they can't acknowledge.
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
My hope is that the ai bubble/trend might have a silver lining overall.
I’m hoping that people start realizing that it is often confidently incorrect. That while it makes some tasks faster, a person will still need to vet the answers.
Here’s the stretch. My hope is that by questioning and researching to verify the answers ai is giving them, people start applying this same skepticism to their daily lives to help filter out all the noise and false information that is getting shoved down their throats every minute of every day.
So that the populace in general can become more resistant to the propaganda. AI would effectively be a vaccine to boost our herd immunity to BS.
Like I said. It’s a hope.
Anyone else feel like they've lost loved ones to AI or they're in the process of losing someone to AI?
I know the stories about AI induced psychosis, but I don't mean to that extent.
Like just watching how much somebody close to you has changed now that they depend on AI for so much? Like they lose a little piece of what makes them human, and it kinda becomes difficult to even keep interacting with them.
Example would be trying to have a conversation with somebody who expects you to spoon-feed them only the pieces of information they want to hear.
Like they've lost the ability to take in new information if it conflicts with something they already believe to be true.
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured OK, why not. I should mention I don't know much about the subject. The answer was impressive: it gave the result, a brief explanation of how it came to the result, and the equation it used. Since I needed it for future use, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things. But fixing them still didn't give me the AI's result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation. So the rule is: never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
So the rule is: never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
Exactly
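The verification step the commenter describes is easy to reproduce. Below is a minimal sketch of the standard compound interest formula, A = P(1 + r/n)^(nt), with hypothetical numbers chosen for illustration (the original post doesn't say which figures the AI was given). The formula itself is textbook; only the inputs are assumptions.

```python
def compound(principal, rate, n_per_year, years):
    # Standard compound interest: A = P * (1 + r/n) ** (n * t)
    # principal: starting amount, rate: annual rate as a decimal,
    # n_per_year: compounding periods per year, years: duration.
    return principal * (1 + rate / n_per_year) ** (n_per_year * years)

# Hypothetical example: $1,000 at 5% annual interest,
# compounded monthly for 10 years.
amount = compound(1000, 0.05, 12, 10)
print(round(amount, 2))  # prints 1647.01
```

This is exactly what the commenter did in a spreadsheet: plug the model's own equation into something deterministic, and compare. If the arithmetic engine and the chatbot disagree, trust the arithmetic engine.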