antifuchs

joined 1 year ago
[–] antifuchs@awful.systems 6 points 10 hours ago

And thus I was enlightened

[–] antifuchs@awful.systems 4 points 11 hours ago (3 children)

The last conundrum of our time: of course steel capped work boots would hurt more but barefoot would allow faster (and therefore more) kicks.

[–] antifuchs@awful.systems 6 points 1 day ago

Ice cream head of artificial intelligence

[–] antifuchs@awful.systems 8 points 1 day ago

lol, lmao: as if any cloud service had any intention at all of actually deleting data instead of tombstoning it for arbitrary lengths of time. (And that’s the least stupid factor in this whole scheme; is this satire? Nobody seems to be able to tell me)

[–] antifuchs@awful.systems 9 points 4 days ago (1 children)

JFC this hurt me to read, as a person who enjoys folk songs played on old instruments. They think this is genetic?!

[–] antifuchs@awful.systems 8 points 6 days ago

I’m excited that Silicon Valley tech has finally managed to invent thinking. Makes this book obsolete at long last.

[–] antifuchs@awful.systems 6 points 1 week ago

From the people who brought you performance review season: a way to evaluate code quality of humans and machines

[–] antifuchs@awful.systems 7 points 2 weeks ago

I didn’t see this article here yet, but I just saw it elsewhere and it’s pretty good: Potemkin Understanding in Large Language Models

[–] antifuchs@awful.systems 8 points 2 weeks ago

At least Microsoft Bob gave us comic sans.

[–] antifuchs@awful.systems 14 points 3 weeks ago

If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.

https://unstable.systems/@sop/114898566686215926

[–] antifuchs@awful.systems 11 points 3 weeks ago

Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren't in their training data

 

Got the pointer to this from Allison Parrish who says it better than I could:

it's a very compelling paper, with a super clever methodology, and (i'm paraphrasing/extrapolating) shows that "alignment" strategies like RLHF only work to ensure that it never seems like a white person is saying something overtly racist, rather than addressing the actual prejudice baked into the model.

 

School student tells AI to put 20 other students’ faces on nude pictures, shares them in chat; it takes months for anyone including the school administrators to act because of some extremely, uh, dubious loophole.

If someone does that in photoshop, it’s a crime; if they do it in AI pretending to be photoshop, it’s somehow not. Gotta love this legal system’s focus on minor technicalities rather than the harm done.

 

They have Nik Suresh (the author) on, as well as Robert Evans. I haven’t listened to it all yet, but it’s fun so far.

 

They invited that guy back. I do have to admit, I admire his inability to read a room.
