this post was submitted on 13 Jun 2025
84 points (100.0% liked)
SneerClub
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
See our twin at Reddit
you are viewing a single comment's thread
i disagree sorta tbh
i won't say that claude is conscious, but i won't say that it isn't either, and it's always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish's welfare report)
I WILL say that 4o most likely isn't conscious or self-reflecting, and that it is best to err on the side of not schizoposting, even if it's wise imo to try not to be abusive to AIs just in case
centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:
the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.
claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.
if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?
fuck off with this
describe the "just in case" to me. either you care about the imaginary harm done to LLMs by being "abusive" much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?
Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.
Children really shouldn't be left with the impression that chatbots are some type of alternative person instead of ass-kissing Google replacements that occasionally get some code right, but I'm guessing you just mean to forego "I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize"-type prompts.
I agree! I'm more thinking of the case where a kid might overhear what they think is a phone call when it's actually someone being mean to Siri or whatever. I mean, there are more options than "be nice to digital entities" if we're trying to teach children to be good humans, don't get me wrong. I don't give a shit about the non-feelings of the LLMs.