Don't know how well this fits the community, as you use a lot of terms I'm not familiar with (is there a "welcome guide" of some sort that I missed?).
Anyway, Wikipedia moderators are now realizing that LLMs are causing problems for them, but they are being very careful not to smack the beehive:
I just... don't have words for how badly this is going to go, or how much work it will inevitably be. At least we'll get a real-world example of just how many guardrails are actually needed to make LLM text "work" for this sort of use case, where neutrality, truth, and cited sources are important (at least on paper).
I hope some people watch this closely; I'm sure there's going to be some gold in this mess.
Wikipedia's mod team definitely haven't realised it yet, but this part is pretty much a de facto ban on using AI. AI is incapable of producing output that would be acceptable for a Wikipedia article - in basically every instance, it's getting nuked.
lol i assure you that faithfully translates to "kill it with fire"
Yeah, that sounds like text that somebody quickly typed up for the sake of having something.
It is impossible for a Wikipedia editor to write a sentence about Wikipedia procedure without completely tracing the fractal space of caveats.
I'd like to believe some of them have, but it's easier, or more productive, to keep giving the benefit of the doubt (or at least pretend to) than to argue the point.