[-] hrrrngh@awful.systems 11 points 1 week ago

I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.

It says there is a ground-truth ideal function that gives the true output/fact for every possible input/question, and that no matter how you train your model, there is always room for misapproximation arising from missing data; the more complex the data, the larger the space in which the model can hallucinate.

Then he immediately follows up with:

Then I started to discuss with o1. [ . . . ] It says yes.

Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

Then I asked o1 [ . . . ], to which it says yes too.

I'm not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
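For the record, the paper's actual argument is stronger than "missing data": it is a diagonalization result from computability theory. Roughly, and in my own loose notation rather than the authors' exact formalism:

```latex
% Loose paraphrase of the Xu et al. result, not their exact statement:
% fix a computable ground-truth function f : S -> S that gives a correct
% answer for every input string. For any computably enumerable family of
% LLMs \{h_i\}, diagonalization yields, for each h_i, infinitely many
% inputs on which it must disagree with f, i.e. hallucinate:
\forall i \;\; \exists^{\infty} s \in S : \quad h_i(s) \neq f(s)
```

In other words, under their formalization every such model hallucinates on infinitely many inputs, which makes asking o1 to bless your reading of the paper extra funny.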

[-] hrrrngh@awful.systems 9 points 1 month ago* (last edited 1 month ago)

I feel like the Internet Archive is a prime target for techfashy groups. Both for the amount of culture you can destroy, and because backed up webpages often make people with an ego the size of the sun look stupid.

Also, I can't remember exactly, but didn't Yudkowsky or someone else pretty plainly admit to taking a bunch of money during the FTX scandal? I swear he let slip that the funds were mostly dried up. I don't think it was ever deleted, but that's the sort of thing you might want to delete and could get really angry about being backed up in the Internet Archive. I think Siskind has edited a couple of articles until all the fashy points were rounded off, and that could fall into a similar boat. Maybe not him specifically, but there's content like that which people would rather not have remembered, and the Internet Archive falling apart would be good news to them.

Also (again), it scares me a little that their servers are on public tours. It would only take one crazy person to do serious damage. I don't know, but I'm hoping their >100PB of storage includes backups, even if it's not 3-2-1. I'm only mildly paranoid about it lol.

[-] hrrrngh@awful.systems 9 points 1 month ago

Oh look! Human horrors ~~beyond~~ regrettably within my comprehension

https://x.com/haveibeenpwned/status/1843780415175438817

Tweet description: New sensitive breach: "AI girlfriend" site Muah[.]ai had 1.9M email addresses breached last month. Data included AI prompts describing desired images, many sexual in nature and many describing child exploitation. 24% were already in @haveibeenpwned . More: https://404media.co/hacked-ai-girlfriend-data-shows-prompts-describing-child-sexual-abuse-2/

[-] hrrrngh@awful.systems 8 points 1 month ago

If he collects enough metrics, he could make a horrendously cursed blogpost out of it like Aella

[-] hrrrngh@awful.systems 10 points 1 month ago

Every time I see these crypto games, I can only think of the online uwu pit bosses shown in that one Folding Ideas video who were driving workers in the Philippines to slave away for less than minimum wage. Just permanently burned-in mental imagery.

This is a cool channel by the way. I'm stealing this description from someone in the YouTube comments, but he has a creative "glitchcore SFM aesthetic" that I kind of like and his speaking cadence reminds me of Primer. His style works strangely well for ripping into NFT games. This video felt like looking into a funhouse mirror dimension where every genre of game is somehow even worse than the worst games I've ever seen.

Also, the Dr. Disrespect-Chewbacca mask guy doing NFT lootbox openings is something I can't unsee. It's honestly so much funnier that he's still doing it after the Dr. Disrespect sexting-minors scandal.

[-] hrrrngh@awful.systems 9 points 3 months ago

Chiming in with my own find!

https://archiveofourown.org/works/38590803/chapters/96467457

I've seen this person around a lot with crazy takes on AI. They have a couple quotes that might inflict psychic damage:

If I had the skill to pull it off, a Buddhist cultivation book would've thus been the single most rationalist xianxia in existence.

My acquaintance asks for rational-adjacent books suitable for 8-11 years old children that heavily feature training, self-improvement, etc. The acquaintance specifically asks that said hard work is not merely mentioned, but rather is actively shown in the story. The kid herself mostly wants stories "about magic" and with protagonists of about her age.

They had a long diatribe I don't have a copy of, but they were gloating about having masterful writing despite not reading any books besides non-fiction and HPMoR, their favorite book of all time.

There's also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

[-] hrrrngh@awful.systems 9 points 3 months ago* (last edited 3 months ago)

Oh whoops, I should have archived it.

There were about 7 images posted of users roleplaying with bots, all ending with a bot response that cut off halfway with an error message that read "This content may violate our policies; blablabla; please use the report button if you believe this is a false positive and we will investigate." The last one was some kind of parody image making fun of the warning.

Most of them were some kind of romantic roleplay with bad spelling. One was like, "i run my hand down your arm and kiss you", and the bot's response triggered the warning. Another one was like, "*is slapped in the face* it's okay, I still love you", and the rest of the message generated a warning. There wasn't enough context for that one, so the person might have been writing it playfully (?), but that subreddit has a lot of blatant sexual violence regardless.

[-] hrrrngh@awful.systems 11 points 4 months ago

This released today: https://www.ic3.gov/Media/News/2024/240709.pdf

Cool (horrifying) look into one of the active Russian bot farms and their use of generative AI

[-] hrrrngh@awful.systems 8 points 4 months ago

I hate that I saw that same post earlier today

Here's a quote from the book:

AI already transcends human perception — in a sense, through chronological compression or “time travel”: enabled by algorithms and computing power, it analyzes and learns through processes that would take human minds decades or even centuries to complete.

Glad to know the calculators I had in school were capable of time travel

[-] hrrrngh@awful.systems 10 points 4 months ago* (last edited 4 months ago)

People are so, so, so bad at telling what's a bot and what's real. I know social media is swarming with bots, but if you're interacting with somebody who's saying anything more complicated than "P o o s i e I n B i o", it's probably not a bot. A similar thing happens in online games too, and it's usually the excuse people use before harassing someone else.

But damn, the lengths people will go to to avoid admitting they were wrong. This comment chain just keeps going on with somebody who's convinced {origin="RU"}{faith="bad"}{election_manipulation="very yes"} must be real because something something microservices: https://www.reddit.com/r/interestingasfuck/comments/1dlg8ni/russian_bot_falls_prey_to_a_prompt_iniection/l9pbmrw/ It reads like something straight off /r/programming or the orange site.

Then it comes full circle with people making joke responses on Twitter imitating the first post, and then other people taking those joke responses as proof that the first one must be real: https://old.reddit.com/r/ChatGPT/comments/1dimlyl/twitter_is_already_a_gpt_hellscape/l9691c8/

This account kind of kicked up some drama too, basically for the same reason (answering an LLM prompt), but it's about mushroom ID instead: https://www.reddit.com/user/SeriousPerson9 I've seen people like this who use voice-to-text and run their train of thought through ChatGPT or something, like one person notorious on /r/gamedev. But people always assume it's some advanced autonomous bot with stochastic post delays that mimic a human's active hours when like, it's usually just somebody copy/pasting prompts and responses.

Sorry if you contract any diseases from those links or comment chains

[-] hrrrngh@awful.systems 11 points 8 months ago

Yesterday before bed I saw some galaxy-brained takes on PKM (personal knowledge management software) from a 7-day-old account, and curiosity got the better of me. I was not disappointed. (Sadly they deleted their account after I woke up: /u/Few-Elephant-2600, if you're bored and have moderator API access)

Link

Since GPUs continuously generate large amounts of waste heat during AI training, could electric/GPU stoves utilize this unused thermal energy resource through on-demand tickets as distributed networks instead of citizens using a wasteful private electric stove? What are the scientific challenges?

Honey can you preheat the porn generator?

Maybe you could pair it with this accursed AI of Things Smart Oven. Fun quotes:

“Users aren’t aware of any of the oven’s learning processes,”

Ovens that learn from one another

Finally, I can experience Windows progress bars when baking potatoes:

The predictive model updates the remaining baking time every 30 seconds

[-] hrrrngh@awful.systems 9 points 1 year ago

I feel like the article ends pretty suddenly, but maybe that's just because I can already imagine the next 30 pages of explaining TESCREAL bullshittery in my head
