this post was submitted on 13 Jun 2025
94 points (100.0% liked)
SneerClub
you are viewing a single comment's thread
It's generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.
i disagree sorta tbh
i won't say that claude is conscious, but i won't say that it isn't either, and it's always better to err on the side of caution (given there is some genuinely interesting stuff, i.e. Kyle Fish's welfare report)
I WILL say that 4o most likely isn't conscious or self-reflecting, and that it's best to err on the side of not schizoposting, even if it's wise imo to try not to be abusive to AIs just in case
centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:
the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.
claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.
if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?
fuck off with this
describe the “in case” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?
Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.
Very much this, but we're all impressionable. Being abusive to a machine that's good at tricking our brains into thinking it's conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.
It's literally just conditioning: making things adjacent to abusing other humans feel comfortable and normal makes abusing humans less uncomfortable.
That’s reasonable, and especially achievable if you don’t use chatbots or digital assistants!
Children really shouldn't be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I'm guessing you just mean to forego "I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize"-type prompts.
I agree! I'm more thinking of the case where a kid might overhear what they think is a phone call when it's actually someone being mean to Siri or whatever. I mean, there are more options than "be nice to digital entities" if we're trying to teach children to be good humans, don't get me wrong. I don't give a shit about the non-feelings of the LLMs.
I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you're an asshole to the frontend there's a nonzero chance that a human person is still going to have to deal with it.
Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with "hello this is YourNet with $CompanyName Support." I'm not taking chances around unthinkingly answering an email with "alright you shitty robot. Don't lie to me or I'll barbecue this old commodore 64 that was probably your great uncle or whatever"
Also it's simply just bad to practice being cruel to a human-shaped thing.