ZDL

joined 4 days ago
[–] ZDL@lazysoci.al 3 points 4 hours ago

I'd be down for that. Having something pretty in between all the goons would be a relief.

[–] ZDL@lazysoci.al 3 points 4 hours ago

… What is that?

[–] ZDL@lazysoci.al 5 points 4 hours ago (1 children)

Not just this brand but this specific product and indeed this specific roast.

Sinloy was the first company in China to take coffee culture seriously, almost single-handedly creating the entire Yunnan coffee industry. They're primarily known for sourcing the local beans, obviously, but also do some importing of coffees from as far afield as Ethiopia and everywhere in between. Their quality control is very high, their roasters are top-notch, and yet their price remains very reasonable.

I buy 250g of this specific coffee (Yunnan "Red Wine" sun-dried SOE coffee) every two weeks and if I miss a purchase, that's a few days of suffering as I wait for them to roast and ship it. (They small-batch roast to demand.)

[–] ZDL@lazysoci.al 6 points 6 hours ago (1 children)

Can't kick up shit in an old-school forum without an account, though. I mean, count the guys commenting just on this post. They saw a post that specifically said "no guys commenting" and ... commented. That's because of the way Lemmy works: everybody can see posts even if they're not members, since everything shows up in the local and all feeds.

In a non-federated forum of the aulde skoole variety they'd have to go hunting for a forum for women, then make an account and pass muster, and only then could they start kicking up a fuss like a toddler. Some will, but I suspect way fewer than the drive-by randos here on Lemmy.

I wish I had a "white list" instead of a "black list" for federation. Right now, if I see too many stupid opinions from a single place (lemmy.world, I'm looking at you here!), I can block it. But given the intrinsically misogynistic environment on Lemmy, it might be nicer to have a list of servers I do want to see things from instead of a list of the ones I don't.

'Cause I get this feeling that the former is a smaller list than the latter.
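(To be clear about the difference: a black list defaults to showing everything you haven't banned yet, while a white list defaults to hiding everything you haven't explicitly let in. A toy sketch in Python, with made-up instance names and nothing to do with Lemmy's actual code:)

```python
# Toy illustration of the two policies (hypothetical instance names).
BLOCK_LIST = {"lemmy.world"}                 # default-allow: list grows forever
ALLOW_LIST = {"lazysoci.al", "beehaw.org"}   # default-deny: list stays short

def visible_with_block_list(instance: str) -> bool:
    # Shown unless the instance has already annoyed you enough to ban it.
    return instance not in BLOCK_LIST

def visible_with_allow_list(instance: str) -> bool:
    # Hidden unless you've explicitly decided you want to hear from it.
    return instance in ALLOW_LIST

for host in ("lazysoci.al", "lemmy.world", "brand.new.instance"):
    print(host,
          "| block list:", visible_with_block_list(host),
          "| allow list:", visible_with_allow_list(host))
```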

[–] ZDL@lazysoci.al 1 points 7 hours ago

When I see hot takes that range from "WTF!?" to actual batshit insanity, I always look at the source. lemmy.world, shit.just.works, and a few others (including dbzer0) seem to always be the host.

Increasingly I'm wondering if it might not be best to just shove these into the "block site" box. The trigger finger hasn't yet itched enough for it, but it's getting closer.

[–] ZDL@lazysoci.al 1 points 7 hours ago (2 children)

> This could also be a community dedicated to blind hate.

Which part of "Fuck AI" was unclear? There's only two words there (well, a word and an initialism if you want to get picky). Which one of those was unclear?

[–] ZDL@lazysoci.al 7 points 7 hours ago (3 children)

I think the only way to have a non-misogynistic place is to go to an old-school forum without "federation". That way you don't have the drive-by randos popping in to spout bullshit about how it's "misandry" to say "please guys, this is just our space, you can go literally anywhere else".

[–] ZDL@lazysoci.al 2 points 8 hours ago

I actually think the prompt is protected by copyright, provided it's a non-trivial prompt. I mean, "anime chick, big bewbs" won't be protected by copyright, but a long sequence of detailed instructions would be.

What's not protected by copyright (in any sane legal milieu) is the output.

[–] ZDL@lazysoci.al 0 points 8 hours ago

Go to one of these "reasoning" AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)

Then put the three explanations side by side and count the contradictions. There's a very good chance they're not just different from each other but mutually incompatible.
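(If you'd rather script the experiment than click around, here's a rough sketch using the OpenAI Python client; the model name and the question are placeholders, and any chat API works much the same way.)

```python
# Ask a question, then ask for the reasoning three times and compare.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o"   # placeholder: substitute whichever "reasoning" model you like

def ask(messages):
    # Send the running conversation, return the model's reply text.
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

messages = [{"role": "user",
             "content": "Which is heavier: a kilogram of feathers or a kilogram of lead?"}]
answer = ask(messages)
messages.append({"role": "assistant", "content": answer})

explanations = []
for _ in range(3):
    messages.append({"role": "user", "content": "Explain your reasoning for that answer."})
    explanation = ask(messages)
    messages.append({"role": "assistant", "content": explanation})
    explanations.append(explanation)

# Put the three "reasonings" side by side and count the contradictions yourself.
for i, text in enumerate(explanations, 1):
    print(f"--- Explanation {i} ---\n{text}\n")
```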

"Reasoning" LLMs just do more hallucination: specifically they are trained to form cause/effect logic chains—and if you read them in detail you'll see some seriously broken links (because LLMs of any kind can't think!)—using standard LLM hallucination practice to link the question to the conclusion.

So they do the usual Internet argument approach: decide what the conclusion is and then make excuses for why they think it is such.

If you don't believe me, why not ask one? Here's a trivial example with very little "reasoning" needed, and even here the explanations are bullshit all the way down.

Note, especially, the final statement it made:

> Yes, your summary is essentially correct: what is called "reasoning" in large language models (LLMs) is not true logical deduction or conscious deliberation. Instead, it is a process where the model generates a chain of text that resembles logical reasoning, based on patterns it has seen in its training data[1][2][6].
>
> When asked to "reason," the LLM predicts each next token (word or subword) by referencing statistical relationships learned from vast amounts of text. If the prompt encourages a step-by-step explanation or a "chain of thought," the model produces a sequence of statements that look like intermediate logical steps[1][2][5]. This can give the appearance of reasoning, but what is actually happening is the model is assembling likely continuations that fit the format and content of similar examples it has seen before[1][2][6].
>
> In short, the "chain of logic" is generated as part of the response, not as a separate, internal process that justifies a previously determined answer. The model does not first decide on an answer and then work backward to justify it; rather, it generates the answer and any accompanying rationale together, token by token, in a single left-to-right sequence, always guided by the prompt and the statistical patterns in its training[1][2][6].
>
> "Ultimately, LLM 'reasoning' is a statistical approximation of human logic, dependent on data quality, architecture, and prompting strategies rather than innate understanding. ... Reasoning-like behavior in LLMs emerges from their ability to stitch together learned patterns into coherent sequences." [1]
>
> So, what appears as reasoning is in fact a sophisticated form of pattern completion, not genuine logical deduction or conscious justification.
>
> [1] https://milvus.io/ai-quick-reference/how-does-reasoning-work-in-large-language-models-llms
> [2] https://www.digitalocean.com/community/tutorials/understanding-reasoning-in-llms
> [3] https://sebastianraschka.com/blog/2025/understanding-reasoning-llms.html
> [4] https://en.wikipedia.org/wiki/Reasoning_language_model
> [5] https://arxiv.org/html/2407.11511v1
> [6] https://www.anthropic.com/research/tracing-thoughts-language-model
> [7] https://magazine.sebastianraschka.com/p/state-of-llm-reasoning-and-inference-scaling
> [8] https://cameronrwolfe.substack.com/p/demystifying-reasoning-models

Now I'm absolutely technically declined, yet even I can figure out that these "reasoning" models have exactly the same core flaws as any other LLMbecile. If you ask one how it does maths, it will also admit that the LLM "decides" whether maths is what's needed and, if so, switches to a maths engine. But if the LLM "decides" it can do it on its own, it will, so you'll still get garbage maths out of the machine.
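(To make that handoff concrete: the "decides" step is just another judgement call made by the model itself, which is exactly where it goes wrong. A toy sketch of the routing, nothing to do with any vendor's actual pipeline; the trigger word and stand-in functions are made up.)

```python
# Toy sketch of LLM-to-calculator routing and its failure mode.

def llm_decides_to_use_calculator(question: str) -> bool:
    # Stand-in for the model's own judgement call. In a real system this is
    # just more generated text ("call the tool" vs. "answer directly"),
    # which is exactly why it can get it wrong.
    return "calculate" in question.lower()

def maths_engine(expression: str) -> str:
    # Stand-in for the reliable calculator the model can hand off to.
    return str(eval(expression, {"__builtins__": {}}, {}))

def llm_freehand_answer(question: str) -> str:
    # Stand-in for the model answering arithmetic "from memory":
    # plausible-looking, not guaranteed correct.
    return "Roughly 7 million, give or take."

def answer(question: str, expression: str) -> str:
    if llm_decides_to_use_calculator(question):
        return maths_engine(expression)
    return llm_freehand_answer(question)

print(answer("What is 1234 * 5678?", "1234 * 5678"))          # answered freehand: garbage
print(answer("Please calculate 1234 * 5678.", "1234 * 5678"))  # routed to the engine: 7006652
```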

[–] ZDL@lazysoci.al 0 points 8 hours ago (4 children)

How "rational" is it to come into a community called, literally, "Fuck AI" and expect pro-AI messaging to be desired and engaged with?

Is "rational" now a synonym for its opposite, like how "literally" now means both itself and "figuratively"?

[–] ZDL@lazysoci.al 10 points 9 hours ago

> It feels like even progressive men here (not all, but many) aren’t willing to listen, while making it all about them.

I mentioned this in past messages that got deleted (not by the mods here).

But basically this is what women's groups in uni were like for me when they tried inviting "allies" from among male students. These well-intentioned (I have to stress this, they were not being jerks deliberately!) men would come to meetings and by the end it was only the men talking.

It's just in the nature of things.

This is why we went back to "women-only" groups with occasional "open house"-style gatherings which were just social events.

[–] ZDL@lazysoci.al 8 points 9 hours ago (1 children)

There's two kinds of boosters of the LLMbecile grift: the grifters and the patsies.

Which one of the two are you?
