194 points | submitted 2 weeks ago* (last edited 2 weeks ago) by cyrano@lemmy.dbzer0.com to c/asklemmy@lemmy.world
[-] vk6flab@lemmy.radio 102 points 2 weeks ago

Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff, and the subsequent news reports about how it told us we should eat rocks, or some variation thereof, there's been no impact whatsoever on my personal life.

In my professional life as an ICT person with over 40 years of experience, it's helped me identify which people understand what it is and, more specifically, what it isn't: intelligent. I respond accordingly.

The sooner the AI bubble bursts, the better.

[-] Vinny_93@lemmy.world 25 points 2 weeks ago

I fully support AI taking over stupid, meaningless jobs if it also means the people that used to do those jobs have financial security and can go do a job they love.

The software developer Afas has decided to give certain employees one paid day a week off and let AI do their job for that day. If that is the future AI can bring, I'd be fine with that.

The caveat is that that money has to come from somewhere, so their customers will probably foot the bill, meaning that other employees elsewhere will get paid less.

But maybe AI can be used to optimise business models and make better predictions. Less waste means less money spent on processes, which can mean more money for people. I also hope AI can give companies a better distribution of money.

This of course is all what stakeholders and decision makers do not want for obvious reasons.

[-] vk6flab@lemmy.radio 35 points 2 weeks ago

The thing that's stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and "journalism" hype to the contrary.

ChatGPT is predictive text on steroids.

Type a word on your mobile phone, then keep tapping the next predicted word and you'll have some sense of what is happening behind the scenes.

The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.

It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and many more texts to sample from than you or I do.
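
To make the analogy concrete, here's a toy sketch of that next-word loop in Python (a deliberately dumb bigram model built from a single sentence, nothing like the scale of ChatGPT, just to show the shape of it):

import random
from collections import defaultdict

# Count which words follow which in a tiny sample text.
sample = "the cat sat on the mat and the dog sat on the rug".split()
following = defaultdict(list)
for current, nxt in zip(sample, sample[1:]):
    following[current].append(nxt)

# "Keep tapping the next predicted word": pick a word that was seen after the current one.
word = "the"
sentence = [word]
for _ in range(8):
    options = following.get(word)
    if not options:
        break
    word = random.choice(options)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the rug" - plausible-looking, nothing understood

ChatGPT swaps the word counts for a neural network trained on billions of words, but the loop is the same shape: predict a likely next word, append it, repeat.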

There is no understanding of the text at all, no true or false, right or wrong, none of that.

AI today is Assumed Intelligence

Arthur C. Clarke said it best:

"Any sufficiently advanced technology is indistinguishable from magic."

I don't expect this to be solved in my lifetime, and I believe that the current methods of "intelligence" are too energy-intensive to be scalable.

That's not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.

Source: I have 40+ years of experience in ICT and an understanding of how this works behind the scenes.

[-] Vinny_93@lemmy.world 11 points 2 weeks ago

I think you're right. AGI and certainly ASI are behind one large hurdle: we need to figure out what consciousness is and how we can synthesize it.

As Qui-Gon Jinn said to Jar Jar Binks: the ability to speak does not make you intelligent.

[-] PonyOfWar@pawb.social 73 points 2 weeks ago

As a software developer, the one use case where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it's pretty crap. I don't see any use for it in my personal life.

I think its influence is negative overall. Right now it might be useful for programming questions, but that's only the case because it's fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out because people use ChatGPT instead, which will have the inverse effect: in the future, AI will have less useful training data, which means it'll become less useful for future problems, while having effectively killed those useful sites in the process.

Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.

[-] Tyfud@lemmy.world 14 points 2 weeks ago* (last edited 2 weeks ago)

I wish more people understood this. It's short-term, mediocre gains at the cost of a huge long-term loss, like Stack Overflow.

[-] Norin@lemmy.world 63 points 2 weeks ago* (last edited 2 weeks ago)

For work, I teach philosophy.

The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn't their work.

Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.

As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.

[-] AFKBRBChocolate@lemmy.world 28 points 2 weeks ago

My son is in a PhD program and is a TA for a geophysics class that's mostly online, so he does a lot of grading of assignments/tests. The number of things he gets that are obviously straight out of an LLM is really disgusting. Like sometimes they leave the prompt in. Sometimes they submit it when the LLM responds that it doesn't have enough data to give an answer and refers to ways the person could find out. It's honestly pretty sad.

[-] MonkeMischief@lemmy.today 18 points 2 weeks ago

"...convinced that every student needs to use the LLMs in order to find a career after graduation."

Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?

After all, the bosses will find the automated machine product "good enough." It's "just a tool, you guys."

Sheesh. I hope these students aren't paying tuition, and even then, they're still getting ripped off by admin-brain.

I'm sorry you have to put up with that. Especially when philosophy is all about doing the mental weightlifting and exploration for oneself!

[-] Nostalgia@lemmy.world 57 points 2 weeks ago

AI has completely killed my desire to teach writing at the community college level.

[-] Stovetop@lemmy.world 20 points 2 weeks ago* (last edited 2 weeks ago)

Agreed. I started the steps needed to become certified as an educator in my state but decided against it. ChatGPT isn't the only reason, but it is a contributing factor. I don't envy all of the teachers out there right now who have to throw out the entire playbook of what worked in the past.

And I feel bad for students like me who really struggled with in-class writing by hand in a limited amount of time, because that is what everyone is resorting to right now.

[-] LogicalDrivel@sopuli.xyz 54 points 2 weeks ago

It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so things that would normally take me 30 seconds now took 5-10 minutes of "prompt engineering". I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn't stand dealing with the obvious AI slop my manager was shoveling down their throats. This pissed off my manager hardcore, but she couldn't really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for "attitude problems".

[-] JudahBenHur@lemm.ee 20 points 2 weeks ago

I'm sorry.

managers tend to be useless fucking idiots.

[-] jg1i@lemmy.world 43 points 2 weeks ago

I absolutely hate AI. I'm a teacher and it's been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don't bother to think critically about the answers the AI gives and just assume it's 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

People seem to think there's an "easy" way to learn with AI, that you don't have to put in the time and practice to learn stuff. News flash! You can't outsource creating neural pathways in your brain to some service. It's like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don't allow any electronic devices during exams.

[-] polle@feddit.org 11 points 2 weeks ago

As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it's way too complex. I've seen people waste hours like that. It's insane.

[-] MNByChoice@midwest.social 41 points 2 weeks ago

Impact?

My company sells services to companies trying to implement it. I have a job due to this.

Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don't get me anywhere on my projects.

[-] IMNOTCRAZYINSTITUTION@lemmy.world 35 points 2 weeks ago

My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers' work decline in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren't even checking the shit before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.

[-] frickineh@lemmy.world 32 points 2 weeks ago

I used it once to write a proclamation for work and what it spit out was mediocre. I ended up having to rewrite most of it. Now that I'm aware of how many resources AI uses, I refuse to use it, period. What it produces is in no way a good trade for what it costs.

[-] LovableSidekick@lemmy.world 31 points 2 weeks ago* (last edited 2 weeks ago)

I never explored it at all until recently, when I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.

I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.

My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.

[-] GreenKnight23@lemmy.world 27 points 2 weeks ago

I worked for a company that did not govern AI use. It was used for a year before they were bought.

I stopped reading emails because they were absolute AI generated garbage.

Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.

they still did nothing to curb AI use.

they then reduced the workforce in the call center because they implemented an AI chat bot and began to funnel all incidents through it first before giving a phone number to call.

company was then acquired a year ago. new administration banned all AI usage under security and compliance guidelines.

today, the new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon, not some generated thesaurus.

overall, I would say banning AI was the right choice.

IMO, AI is not being used in the most effective ways and causes too much chaos. cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.

[-] Routhinator@startrek.website 27 points 2 weeks ago

I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.

[-] lime@feddit.nu 26 points 2 weeks ago

it works okay as a fuzzy search over documentation.
...as long as you're willing to wait.
...and the documentation is freely available.
...and doesn't contain any sensitive information.
...and you very specifically ask it for page references and ignore everything else it says.

so basically, it's worse than just searching for one word and pressing "next" over and over, unless you don't know what the word is.
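
for comparison, the "one word and next" approach looks something like this (a rough sketch in Python; grep_docs and the docs folder are made up, and it assumes the documentation is lying around as plain-text files):

import sys
from pathlib import Path

# print every place in the docs that mentions the word, with a file:line reference
def grep_docs(word, docs_dir="docs"):
    for path in sorted(Path(docs_dir).glob("*.txt")):  # assumes plain-text docs
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if word.lower() in line.lower():
                print(f"{path.name}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    grep_docs(sys.argv[1])  # e.g. python grep_docs.py timeout

no waiting, no hallucinated references, but yeah, you do have to already know the word.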

[-] AFKBRBChocolate@lemmy.world 23 points 2 weeks ago

I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human rated (human lives depend on it), so we have to be careful. Also, it's a hard no on putting anything controlled or proprietary in a public LLM (the company now has one in-house).

You can't put trust into an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I've had employees use it to come up with an algorithm or find an error, but I think it's risky to have one generate large pieces of code.

[-] Sludgehammer@lemmy.world 22 points 2 weeks ago

Searching the internet for information about... well anything has become infuriating. I'm glad that most search engines have a time range setting.

[-] MonkeMischief@lemmy.today 13 points 2 weeks ago

"It is plain to see why you might be curious about Error 4752X3G: Allocation_Buffer_Fault. First, let's start with the basics.

  • What is an operating system?"

AGGHH!!!

[-] GiantChickDicks@lemmy.ml 21 points 2 weeks ago

I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I'm AI in chat, which makes sense. A surprising number think I'm a bot when they call in, because I guess my voice sounds like a recording.

Most of the time it's just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can't believe the abuse people hurl out when they assume it's not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.

It's not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people's exhaustion with being forced to deal with robots from my own experiences when I've needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we're treating the people offering them decently so they want to continue offering that to us.

[-] Aganim@lemmy.world 20 points 2 weeks ago

I cannot come up with a use-case for ChatGPT in my personal life, so no impact there.

For work it was a game-changer. No longer do I need to come up with haikus to announce that it's release-freeze day; I just let ChatGPT crap one out so we can all have a laugh at its lack of poetic talent.

I've tried it now and then for some programming related questions, but I found its solutions dubious at best.

[-] Mango@lemmy.world 19 points 2 weeks ago

It's affected me by being really annoying to hear about in the news all the time.

[-] dingus@lemmy.world 19 points 2 weeks ago

ChatGPT has had absolutely zero impact on my work or personal life. I do not have any useful case for it whatsoever. I have used it for goofs before. That's about it. I cannot see it as a positive or negative influence...as it has had zero influence. I do get annoyed that every company and their mother is peddling worthless AI shit that most people have no use case for.

[-] Rhynoplaz@lemmy.world 17 points 2 weeks ago

For my life, it's nothing more than parlor tricks. I like looking at the AI images or whipping one up for a joke in the chat, but of all the uses I've seen, not one of them has been "everyday useful" to me.

[-] traches@sh.itjust.works 16 points 2 weeks ago

I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.

I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.

[-] Mechaguana@programming.dev 16 points 2 weeks ago

It's making the impact of bots more polarizing, turning social media into a self-radicalizing tool.

[-] Caboose12000@lemmy.world 16 points 2 weeks ago* (last edited 2 weeks ago)

I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.

A very friendly expert or mentor, or even just a regular established Linux user, probably could've done a better job; the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.

[-] recursive_recursion@lemmy.ca 16 points 2 weeks ago

It's erased several tech jobs and replaced some help-forum commenters with bots to pretend their communities are alive. When you read their comments or 'suggestions' you can clearly tell this isn't someone trying to help; it's just a bot posting garbage pretending to help.

[-] kava@lemmy.world 15 points 2 weeks ago

i've used it fairly consistently for the last year or so. i didn't actually start using it until chatgpt 4 and when openai offered the $20 membership

i think AI is a tool. like any other tool, your results vary depending on how you use it

i think it's really useful for specific intents

example, as a fancy search engine. yesterday I was watching Annie from 1999 with my girlfriend and I was curious about the capitalist character. i asked chatgpt the following question

in the 1999 hit movie annie, who was the billionaire mr warbucks supposed to represent? were there actually any billionaires in the time period? it's based around the early 1930s

it gave me context. it showed examples of the types of capitalist the character was based on. and it informed me that the first billionaire was in 1916.

very useful for this type of inquiry.

other things i like using it for are to help with coding. but there's a huge caveat here. some things it's very helpful for... and some things it's abysmal for.

for example i can't ask it "can you help me write a nice animation for a react native component using reanimated"

because the response will be awful and won't work. you could go back and forth with it forever and it won't make a difference. the reason is that it's trained on a lot of outdated stuff, so it'll keep giving you code that maybe would have worked 4 years ago. and even then, it can't hold much context, so complex applications just won't work

BUT certain things it's really good at. for example, i need to write a script for work. i use fish shell, but sometimes i don't know the proper syntax or everything fish is capable of

so I ask

how to test, using fish, if an "images.zip" file exists in $target_dir

it'll pump out

if test -f "$target_dir/images.zip"
    echo "File exists."
else
    echo "File does not exist."
end

which gives me what i needed in order to place it into the script i was writing.

or for example if you want to convert a bash script to a fish script (or vice versa), it'll do a great job

so tldr:

it's a tool. it's how you use it. i've used it a lot. i find great value in it. but you must be realistic about its limitations. it's not as great as people say- it's a fancy search engine. it's also not as bad as people say.

as for whether it's good or bad for society, i think good. or at least will be good eventually. was the search engine a bad thing for society? i think being able to look up stuff whenever you want is a good thing. of course you could make the argument kids don't go to libraries anymore.. and maybe that's sorta bad. but i think the trade-off is definitely worth it

[-] Kaldo@fedia.io 15 points 2 weeks ago

It is getting more present at work every day. I keep having to hear even seniors talk about how they "discussed" something with ChatGPT or how they will ask it for help. I had to resolve some issue with devops a while back, and they just kept pasting errors into ChatGPT and trying out whatever it spewed back, which I guess wasn't that much different from me googling the same issue and spewing back whatever SO said.

I tried it myself, and while it is neat for some simple repetitive things, I always end up back at normal Google searches or clicking through to the sources, because the problems I usually have to google for are complicated problems where I need the whole original discussion and context, not just a summary that might skip important caveats.

I dunno. I simultaneously feel old and out of touch, angry at myself for not just going with the flow and buying into it, but also disappointed in other people who rely on it without ever understanding that it's flawed, unreliable, and untrustworthy, and that it's making people into worse programmers.

[-] aesthelete@lemmy.world 14 points 2 weeks ago

It's made my professional life way worse, because it was seen as an indication that every hack-a-thon attempt to put a stupid chatbot in everything is great, actually.

[-] Gxost@lemmy.world 13 points 2 weeks ago

GitHub Copilot became my daily helper at work. While I'm not 100% satisfied with its code quality, I must admit it's very handy at writing boilerplate code. A few days ago, I had to write code without having internet access, and it was so disappointing to write boilerplate code by hand. It's an easy task, but it's time-consuming and unpleasant.

[-] wizardbeard@lemmy.dbzer0.com 10 points 2 weeks ago

I will forever continue to suggest that as a developer, you learn your IDE of choice's features for templates/code snippets, or make yourself a "templates" file to copy and paste from.

Far more control, far less opportunity to miss something small and mess up, cheaper, less resource use, and faster.

Using VsCode/VsCodium's snippets feature has been a serious game changer for me when it comes to boilerplate.

[-] FeelzGoodMan420@eviltoast.org 13 points 2 weeks ago* (last edited 2 weeks ago)

I use it as a glorified google search for excel formulas and excel troubleshooting. That's about it. ChatGPT is the most overhyped bullshit ever. My company made a huge push to implement it into fucking everything and then seemingly abandoned it when the hype died down.

[-] Brkdncr@lemmy.world 12 points 2 weeks ago

For me, the amount of people and time spent in meetings that talk about AI grossly outweighs any benefit of AI.

[-] LesserAbe@lemmy.world 12 points 2 weeks ago

I'm a coding hobbyist; it's been very helpful for analyzing bugs, giving quick info about syntax, and converting formatting for long sections where typing manually would be time-intensive.

Point taken from someone else here saying continued use of AI may mean decreased functionality for Stack Exchange et al. That said, the advantage of AI is that it answers your question specifically, instead of you spending time sifting through semi-related answers.

Outside of code it's good at aping the form of various genres. So if I need to answer an RFP question in a sales proposal, I might feed it the prompt to get a starting point. It always needs editing, since it doesn't know the details of our business and because its writing style is bland, but it's helpful for getting a first draft.

[-] sudneo@lemm.ee 12 points 2 weeks ago

After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes those tasks even more boring or useless.

Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for labs (training) - like AWS policies, or to build commands based on CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don't use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it's quite clear when the output is from an LLM.

For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be thinking that email and presentations are good use cases for LLMs. The last thing we need is to have useless communication longer and LLMs on both sides that produce or summarize bullshit. I can totally see though that some people can more easily envision shortcutting bullshit processes via LLMs than simply changing or removing them.

[-] weeeeum@lemmy.world 12 points 2 weeks ago

Scam emails are a lot more coherent now

[-] Burninator05@lemmy.world 11 points 2 weeks ago

It seemingly has little impact. I've attempted to use LLMs a couple of times to ask very specific technical questions (on this specific model, running this specific OS version, how do I do this very specific thing) to try and cut down on the amount of research I would have to do to find a solution. The answer every time has been wrong. Once it was close enough to the answer I was able to figure it out but "close enough" doesn't seem worth bothering with most of the time.

When I search for things I always skip the AI summary at the top of the page.

[-] aramis87@fedia.io 11 points 2 weeks ago

Someone suggested using it to identify things you only remember bits of or certain scenes from. I tried using it to find this YA book I read as a kid; it was not at all helpful, but it did eventually lead me to do more research and find the book elsewhere. (And it turns out the scene I was describing was exactly what happened, and the characters were named exactly what I thought they were, so that was both annoying at the time and frustrating later.)

I also tried using it to find this really obscure, incredibly bad 1970s TV movie that I had vague recollections of. Again, the scene was pretty much what I remembered, but it couldn't identify it; I eventually found a site that lists the plots of old TV movies and read through like 30 pages of movie synopses until I found the one I was looking for.

I've also tried using it to find this 1980s interactive fiction game, but it proved useless once again - and once again further research has identified a couple of possibilities, except I haven't had time to try to find the game and set up the right environment for it.

So my experience has been that it's useless at finding the things I want it to find, but that, in trying to persist against it, I may end up finding what I'm looking for elsewhere.

[-] MonkeMischief@lemmy.today 11 points 2 weeks ago

Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: the people who pay for skilled labor will think "the subscription machine can just do it." And that sucks.

I'm a digital artist as well, and while I think genAI is a neat toy to play with for shitposting, or just "seeing what this dumb thing might look like", or generating "people that don't exist", and while it's impressive tech, I'm not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from, how it was trained, and the impact it has on our culture and planet.

We're already seeing the results of that slop pile generated from everyone who thought they could "achieve their creative dreams" by prompting a genie-product for it instead of learning an actual skill.

As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It's like a parrot combined with a "programmer's rubber ducky." Sometimes that gets my mind moving, in the same way "autocomplete over and over" might generate interesting thoughts.

I also will say it's pretty decent at summarizing things. I actually find it somewhat helpful when YouTube's little "ai summary" is like "This video is about using this approach taking these steps to achieve whatever."

When the video description itself is just like "Join my Patreon and here's my 50+ affiliate links for blinky lights and microphones" lol

I use it to explain concepts to me in a slightly different way, or to summarize something for which there's a wealth of existing information.

But I really wish people were more educated about how it actually works, and there's just no way I'm trusting the centralized "services" for doing so.

[-] 2ugly2live@lemmy.world 11 points 2 weeks ago

I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on the story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.

[-] Kaiyoto@lemmy.world 10 points 2 weeks ago

Not much impact personally. I just read about all the terrible implications of it online. There's pressure in the professional world to use it, though fuck if I know what to use it for in this job. I don't like using it for my writing because I don't want to rely on something like that and because it's prone to errors.

Wish something that used a ton of resources would actually have a great impact to make it worth the waste.

this post was submitted on 01 Dec 2024
194 points (91.8% liked)
