submitted 11 months ago by gomp@lemmy.ml to c/firefox@lemmy.ml

Is there an extension that warns you when you are wasting time reading ai-generated crap?

Case in point, I was reading an article that claimed to compare kubernetes distros and wasted some good minutes before realizing it was full of crap.

all 29 comments
[-] siderealyear@lemmy.world 46 points 11 months ago

Most of the internet was already BS before 'working' LLMs; where do you think the models learned it from? I think what you want is a crap detector, and I'm with you. Any good ideas and I'll donate my time to work on it.

[-] dxc@sh.itjust.works 15 points 11 months ago

For me it's uBlacklist, with my personal list first and a shared list from some GitHub page I found added after it.
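
For anyone who hasn't tried it: as far as I know, a uBlacklist personal list is just one match pattern per line (and I believe regexes between slashes also work). The entries below are made-up placeholders, not an actual recommendation:

```
*://*.content-mill.example/*
*://*.top-10-anything.example/*
/best-kubernetes-distros?-in-\d{4}/
```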

[-] Linus_Torvalds@lemmy.world 9 points 11 months ago* (last edited 11 months ago)

FYI Kagi has an integrated blocker/upranker/downranker similar to this. On their stats page you can see which domains have been blocked/raised/... the most.

The most hated one by far: Pinterest and all its locale-specific sub-domains.

[-] Korkki@lemmy.world 2 points 11 months ago

That's only for Google though.

[-] eee@lemm.ee 2 points 11 months ago
[-] Infiltrated_ad8271@kbin.social -1 points 11 months ago

That's the reason why AI search engines like Bing are so bad: they're based on the top results, which are the same crap.

[-] monobot@lemmy.ml 27 points 11 months ago

I think at some point we will have to introduce human confirmation from the creator side.

I don't mind someone using ChatGPT as a tool to write better articles, but most of the internet is senseless BS.

[-] Nawor3565@lemmy.blahaj.zone 16 points 11 months ago

Unfortunately, even OpenAI themselves took down their AI detection tool because it was too inaccurate. It's really, REALLY hard to detect AI writing with current technology, so any such add-on would probably need to rely on a master list of articles manually flagged by humans.

[-] DogMuffins@discuss.tchncs.de 6 points 11 months ago

If you could detect AI authored stuff, couldn't you use that to train your LLM?

[-] apis@beehaw.org 2 points 11 months ago

Suspect it would operate more on the basis of a person confirming that the article is of reasonable quality & accuracy.

So not unlike editors selecting what to publish, what to reject & what to send back for improvements.

If good articles by AI get accepted & poor articles by people get rejected, there may still be impacts, but at face value it might be sufficient for us seeking to read stuff.

[-] BetaDoggo_@lemmy.world 2 points 11 months ago

It could be used to create a reward model like what is done right now with RLHF.
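
A rough Python sketch of that idea, where the `detector` callable is a stand-in (nothing like it exists in this thread): treat the detector's score as the reward and prefer whatever it rates highest, the way RLHF leans on a learned reward model.

```python
# Hypothetical sketch: `detector(text)` is assumed to return the probability
# that the text is low-quality / machine-generated junk.

def reward(text: str, detector) -> float:
    # Higher reward for text the detector does NOT flag.
    return 1.0 - detector(text)

def rank_candidates(candidates, detector):
    # For fine-tuning or plain reranking, keep the completions the reward
    # function scores highest; that's where the RLHF analogy comes in.
    return sorted(candidates, key=lambda t: reward(t, detector), reverse=True)
```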

[-] lily33@lemm.ee 2 points 11 months ago

That said, it should actually be possible to make a bullshit detector that detects bullshit writing.

[-] Zuberi@lemmy.dbzer0.com 9 points 11 months ago

You cannot use AI to detect AI. That would defeat the entire purpose.

[-] starman@programming.dev 6 points 11 months ago

It's not possible to create 100% reliable ML-generated content detection.

[-] JohnDClay@sh.itjust.works 6 points 11 months ago

I don't even know of any that are 75% reliable. It's a really hard problem.

[-] strawberry@artemis.camp 5 points 11 months ago

Wasn't OpenAI's AI detector like 25% accurate? At that point it's mostly just random chance.

[-] Schmeckinger@feddit.de 1 points 11 months ago

I wrote a detector that is 50% accurate. It just flips a coin.
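
The whole "detector", spelled out in Python:

```python
import random

def detect_ai(text: str) -> bool:
    """Flip a coin: right about half the time on a balanced test set."""
    return random.random() < 0.5
```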

[-] blakeus12@hexbear.net 3 points 11 months ago

Marxist-Leninists can't reliably detect content D:

[-] Cwilliams@beehaw.org 2 points 11 months ago

I know there's GPTZero. I personally don't trust it at all, but you could still look into it.

[-] apis@beehaw.org 2 points 11 months ago

I've got fairly good at spotting these from the first few lines, but it would be nice not to have to click on them in the first place, and better again if they didn't clog up my search results.

Back when it was just humans churning out rubbish, there was far less of it getting in the way of good information, but it helped enormously that search engines still respected operators.
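
The operators meant here are things like exact-phrase quotes, term exclusion, and site restriction; a made-up query for the kubernetes case from the post might look like:

```
"kubernetes distro comparison" k3s -"top 10" site:example.org
```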

Bringing that back would likely help far more than a detector extension.

[-] Harpsist@lemmy.world 1 points 11 months ago

I dunno how to follow this in lemmy...

[-] loki@lemmy.ml 8 points 11 months ago

you can't follow it but you can save it with the star icon and come back to it later.

[-] nix@merv.news 1 points 11 months ago* (last edited 11 months ago)

You can use the remind me bot

@Remindme@programming.dev 5 hours

Or whatever timeframe you prefer

[-] Ategon@programming.dev 1 points 11 months ago

Note the remindme bot uses an allowlist and this community isn't on it; you'd have to get your community mods to request it be added in the repository if you want to use it here.

[-] nix@merv.news 1 points 11 months ago

Why does it use an allowlist? Seems like it's fine to just run across Lemmy, since it only appears when summoned.

[-] Ategon@programming.dev 1 points 11 months ago* (last edited 11 months ago)

Bot guidelines for some of the major instances don't allow bot posting unless it's been approved by a mod. It also makes more sense for mods to choose which bots to allow in their community rather than response bots being allowed everywhere, since that can easily get out of hand if a bunch get made.

[-] naut@infosec.pub -1 points 11 months ago
[-] Cwilliams@beehaw.org -2 points 11 months ago

Another thought: does it really matter if it's AI-generated or not? As long as you can fact-check the content and the quality isn't horrible, I don't see why it matters whether it's written by a real person or not.
