Grandiose delusions from a ketamine-rotted brain.
I wonder how many papers he's read since ChatGPT was released about how bad it is to train AI on AI output.
Spoiler: He's gonna fix the "missing" information with MISinformation.
She sounds Hot
She unfortunately can't see you because of financial difficulties. You gotta give her money, like I do. One day, I will see her in person.
"and then on retrain on that"
That's called model collapse.
So they’re just going to fill it with Hitler’s world view, got it.
Typical and expected.
I mean, this is the same guy who said we'd be living on Mars in 2025.
In a sense, he's right. I miss good old Earth.
So just making shit up.
Don't forget the retraining on the made up shit part!
Delusional and grasping for attention.
Lol, turns out Elon has no fucking idea how LLMs work.
It's pretty obvious where the white genocide "bug" came from.
“Deleting Errors” should sound alarm bells in your head.
And "adding missing information" doesn't? Isn't that just saying "we are going to make shit up"?
"We'll fix the knowledge base by adding missing information and deleting errors - which only an AI trained on the fixed knowledge base could do."
The thing that annoys me most is that there have been studies showing that when LLMs are trained on (subsets of) their own output, they produce increasingly noisy output. See the toy sketch after the source list below.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
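To make the mechanism concrete, here is a toy sketch of the self-consuming training loop, not the methodology of the cited papers: the "language model" is just a unigram distribution over a made-up vocabulary, re-estimated each generation from a corpus sampled entirely from the previous generation's model. All sizes and the Zipf-like starting distribution are arbitrary choices for illustration.

```python
# Toy illustration of model collapse in a self-consuming training loop.
# Hypothetical simplification: the "model" is a unigram distribution over
# tokens, "retrained" each generation only on its own sampled output.
import numpy as np

rng = np.random.default_rng(0)

vocab_size = 1_000
corpus_size = 5_000

# Generation 0: a long-tailed "true" token distribution (Zipf-like).
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()

for generation in range(15):
    surviving = np.count_nonzero(probs)
    print(f"gen {generation:2d}: {surviving} tokens still have nonzero probability")
    # "Generate" a synthetic corpus from the current model ...
    corpus = rng.choice(vocab_size, size=corpus_size, p=probs)
    # ... then "retrain" by re-estimating token frequencies from that corpus.
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()

# Rare tokens that happen not to be sampled get probability zero and can
# never reappear, so the tail of the distribution erodes generation after
# generation -- the qualitative effect the model-collapse papers describe.
```

Running it shows the count of surviving tokens only ever going down: information in the tail is lost for good once the model stops seeing real data.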
Whatever nonsense Muskrat is spewing, it is factually incorrect. He won't be able to successfully retrain any model on generated content, at least not an LLM, if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
Huh. I'm not sure if he's understood the alignment problem quite right.
He's been frustrated by the fact that he can't make Wikipedia 'tell the truth' for years. This will be his attempt to replace it.
[My] translation: "I want to rewrite history to what I want".
That was my first impression, but then it shifted into "I want my AI to be the shittiest of them all".
Why not both?
Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be filled in through logic, as Mush the dweeb imagines.
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
my guess is yes.
Yes please do that Elon, please poison grok with garbage until full model collapse.
"which has advanced reasoning"
No it doesn't.
Yes! We should all wholeheartedly support this GREAT INNOVATION! There is NOTHING THAT COULD GO WRONG, so this will be an excellent step to PERMANENTLY PERFECT this WONDERFUL AI.
Fuck Elon Musk
I never would have thought it possible for a person to be so full of themselves as to say something like that.
Is he still carrying his little human shield around with him everywhere or can someone Luigi this fucker already?
I read about this in a popular book by some guy named Orwell
Wasn't he the children's author who published the book about talking animals learning the value of hard work or something?
"If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!"
~Fucking Dumbass
Whatever. The next generation will have to learn to judge whether material is true or not by using sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
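To illustrate that point, here is a minimal sketch of what generation actually looks like: pick the next word from a probability distribution conditioned on the context, repeat. The bigram "model" and its probabilities below are entirely made up for illustration; no real tokenizer, neural network, or fact-checking is involved, which is exactly the point.

```python
# Minimal sketch of LLM-style generation: given a context, produce a
# probability distribution over the next token and sample from it.
# Nothing in this loop consults facts; the numbers are made-up weights
# for a hypothetical toy bigram "model".
import random

toy_model = {
    "the":  {"moon": 0.4, "earth": 0.35, "truth": 0.25},
    "moon": {"is": 0.7, "landing": 0.3},
    "is":   {"made": 0.5, "round": 0.5},
    "made": {"of": 1.0},
    "of":   {"cheese": 0.6, "rock": 0.4},
}

def generate(start: str, max_tokens: int = 6) -> str:
    words = [start]
    for _ in range(max_tokens):
        dist = toy_model.get(words[-1])
        if dist is None:  # no known continuation for this word
            break
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return " ".join(words)

random.seed(1)
print(generate("the"))  # e.g. "the moon is made of cheese"
```

The model will happily emit "the moon is made of cheese" whenever the dice land that way, because the only thing being optimized is plausibility of the next word, not truth.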
Hmm... this doesn't sound great.
He knows more ... about knowledge... than... anyone alive now
I remember when I learned what corpus meant too
"adding missing information and deleting errors"
Which is to say, "I'm sick of Grok accurately portraying me as an evil dipshit, so I'm going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings."
Dude wants to do a lot of things and fails to accomplish what he says he's going to do, or ends up half-assing it. So let him take Grok and run it right into the ground, like an autopiloted Cybertruck rolling over into the flame trench of an exploding Starship rocket still on the pad, shooting flames out of tunnels made by the Boring Company.
"If you won't tell my truth, I'll force you to acknowledge my truth."
Nothing says "abusive asshole" more than this.
So where will Musk find that missing information and how will he detect "errors"?
Because neural networks aren't known to suffer from model collapse when using their output as training data. /s
Most billionaires are mediocre sociopaths, but Elon Musk takes it to "Emperor's New Clothes" levels of intellectual destitution.