Finally, some actual argumentation. Enough to convince me, at least - especially the first paragraph.
Let's go simpler: what if your instance were allowed to copy the fed/defed lists from other instances, and use them (alongside simple Boolean logic plus if/then statements) to automatically decide who you're going to federate/defederate with? That would enable caracoles and fedifams for admins who so desire, but also enable other organically grown relations.
For example, let's say that you just joined the federation, and there are three instances that you somewhat trust:
- Alice - it defederates only really problematic instances.
- Bob and Charlie - both are a bit prone to defederate other instances on a whim, but when both defed the same instance it's usually problematic.
Then you could set up your defederation rules like this:
- if Alice defed it, then defed it too.
- else, if (Bob defed it) and (Charlie defed it), then defed it too.
- else, federate with it.
Of course, that would require distinguishing between manual and automatic fed/defed. You'd only be able to use the *manual* fed/defed decisions from other instances in your automatic rules, to avoid deadlocks like "Alice is blocking it because Bob is blocking it, and Bob is blocking it because Alice is doing it".
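A minimal sketch of how those rules could be evaluated, assuming each instance publishes its manual defederation list in some machine-readable form (all the instance names and the data layout below are made up for illustration):

```python
# Hypothetical snapshot of the *manual* defederation lists copied from
# the three trusted instances (names invented for this example).
MANUAL_DEFED = {
    "alice.example": {"nazi.example", "spam.example"},
    "bob.example": {"spam.example", "meme.example"},
    "charlie.example": {"spam.example"},
}

def should_defederate(instance: str) -> bool:
    """Applies the rules from the comment above, in order."""
    if instance in MANUAL_DEFED["alice.example"]:
        return True  # Alice defeds only really problematic instances
    if instance in MANUAL_DEFED["bob.example"] and instance in MANUAL_DEFED["charlie.example"]:
        return True  # Bob or Charlie alone may be a whim; both together is a red flag
    return False  # else, federate with it

for candidate in ("nazi.example", "spam.example", "meme.example", "friendly.example"):
    print(candidate, "->", "defed" if should_defederate(candidate) else "fed")
```

Note how the rules only ever read the *manual* lists, never each other's automatic decisions - that's what breaks the deadlock.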
Do tell me more of how Old People are not the target of discrimination¹, yout'².
You're 1) distorting what I said, and 2) being an assumer.
Discrimination can happen against any group. However, it's considerably worse when it's geared towards marginalised groups, as they have fewer ways to deal with it. That makes your analogy with a racial group (black people) deeply flawed.
Note: I do not think that insults against old people are "cool". However, they're considerably less bad than insults towards black people.
The links that you've posted - which you clearly didn't even bother to read yourself - are evidence of discrimination in a very specific environment (the workplace). They are not evidence of marginalisation.
[Speaking as a user] Yeah, it looks inorganic to me too. As another user said, the gold trim signals that the post was gilded; gilding nowadays works like a "mega-upvote" and gives the post that trim. It's possible that defenders of the Jewish genocide (the Holocaust) and of the Palestinian genocide (the ongoing war) are dumping money into those posts to promote their shitty discourses.
And that works really well on Reddit because the userbase there loves some oversimplification: they have a really hard time decoupling the Jewish people from the State of Israel.
[Speaking as a mod] Given the topic, I'll be monitoring this thread carefully. As such, if anyone here is eager to either promote fascism (rule #1) or witch hunt (point fingers at other users based on assumptions or faulty reasoning - rules #1 and #4): don't. Also remember that the topic of this community is Reddit; there's a lot of leeway for non-divisive off-topic discussion, but please don't go overboard.
And if anyone here has concerns that some other user is doing either thing, please use the report button, OK?
Just because the bigotry is aimed at old people doesn't make it cool.
I get your point and I partially agree with it, but note that there's a big difference between old people and people from racial groups in the USA (based on your references): the latter are marginalised groups.
...for keeping the rhyme, perhaps "Doom-like shoot and boom"? Lots of exploding enemies in Doom, but no boomer reference.
Aaaaah. I really, really wanted to complain about the excessive number of keys.
(My comment above is partially a joke - don't take it too seriously. Even if a new key were added, it would be a bit more clutter, but not that big of a deal.)
The source that I've linked mentions semantic embedding; so does further literature on the internet. However, the operations are still being performed with the vectors resulting from the tokens themselves, with said embedding playing a secondary role.
This is evident, for example, through excerpts like:

> The token embeddings map a token ID to a fixed-size vector with *some semantic meaning* of the tokens. This brings some interesting properties: similar tokens will have a similar embedding (in other words, calculating the cosine similarity between two embeddings will give us a good idea of how similar the tokens are).
Emphasis mine. A similar conclusion (that the LLM is still handling the tokens, not their meaning) can be reached by analysing the hallucinations that your typical LLM bot outputs, and asking why each hallucination is there.
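For reference, the cosine similarity that the quote mentions is a simple operation. A minimal sketch, with made-up three-dimensional embeddings (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1 = same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb_cat = [0.9, 0.1, 0.3]   # hypothetical embedding for "cat"
emb_dog = [0.8, 0.2, 0.35]  # hypothetical embedding for "dog"
emb_tax = [0.1, 0.9, 0.0]   # hypothetical embedding for "tax"

print(cosine_similarity(emb_cat, emb_dog))  # ~0.99: "similar" tokens
print(cosine_similarity(emb_cat, emb_tax))  # ~0.21: dissimilar tokens
```

Even so, the similarity is between token vectors; the meaning is only captured to the extent that it leaks into those vectors during training.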
What I'm proposing is deeper than that. It's to use the input tokens (i.e. morphemes) only to retrieve the sememes (units of meaning; further info here) that they're conveying, then discard the tokens themselves and perform the operations solely on the sememes. Then, for the output, you translate the sememes produced by the transformer back into morphemes (i.e. tokens).
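A toy sketch of that pipeline, assuming a morpheme-to-sememe dictionary exists (the dictionary entries, the sememe labels, and the trivial decode step below are all invented for illustration; a real system would need a full semantic lexicon, plus a transformer trained on sememe sequences):

```python
# Invented morpheme -> sememe dictionary; a real one would be a
# machine-readable semantic lexicon with thousands of entries.
MORPHEME_TO_SEMEMES = {
    "un": ["NEGATION"],
    "happi": ["EMOTION:JOY"],
    "ness": ["STATE"],
}
SEMEMES_TO_MORPHEMES = {
    ("NEGATION", "EMOTION:JOY", "STATE"): "unhappiness",
}

def encode(morphemes):
    # Map surface morphemes to sememes, then discard the morphemes.
    sememes = []
    for m in morphemes:
        sememes.extend(MORPHEME_TO_SEMEMES[m])
    return sememes

def decode(sememes):
    # Translate the sememes produced by the transformer back into morphemes.
    return SEMEMES_TO_MORPHEMES[tuple(sememes)]

sememes = encode(["un", "happi", "ness"])
# ... the transformer would operate here, on sememes rather than on tokens ...
print(sememes)          # ['NEGATION', 'EMOTION:JOY', 'STATE']
print(decode(sememes))  # unhappiness
```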
I believe that this would have two big benefits:
- The amount of data necessary to "train" the LLM will decrease. Perhaps by orders of magnitude.
- A major type of hallucination will go away: self-contradiction (for example: states that A exists, then that A doesn't exist).
And it might be an additional layer, but the whole approach is considerably simpler than what's being done currently - pretending that the tokens themselves have some intrinsic value, then playing whack-a-mole with situations where the token and the value contextually assigned to it (by the human using the LLM) differ.
[This could even go deeper, handling a pragmatic layer beyond the tokens/morphemes and the units of meaning/sememes. It would be closer to what @njordomir@lemmy.world understood from my other comment, as it would then deal with the intent of the utterance.]
Soap and water do wonders for 90% of the restroom cleaning.
The problem is that the other 10% are important too.
Not quite. I'm focusing on chatbots like Bard, ChatGPT, and the like, and on their underlying technology (LLMs, or large language models).
At their core, those LLMs work like this: they take words, split them into "tokens", and then perform a few operations on those tokens, across multiple layers. But at the end of the day they still work with the words themselves, not with the meaning encoded by those words.
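To make that concrete, here's a minimal illustration using the tiktoken library (the tokenizer used by OpenAI's models; the example texts are arbitrary):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer
for text in ("unhappiness", "The cat sat on the mat."):
    ids = enc.encode(text)  # list of integer token IDs
    print(text, "->", ids)
# The model only ever sees these integer IDs (mapped to embedding vectors);
# whatever "meaning" it captures is inferred statistically during training,
# not represented directly.
```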
What I want is an LLM that assigns multiple meanings to those words and performs the operations above on the meanings themselves. In other words, the LLM would actually understand you, not just chain words together.
Yup, that's the stuff. It's mostly a finishing touch, to get rid of bacteria.
Even here in South America, depending on the region, they're invasive.