Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 9 points 4 months ago* (last edited 4 months ago) (1 children)

I can never tell: is there an actual 'experiment' taking place, with an LLM-backed agent actually trying stuff on a working VM, or are they just prompting a chatbot to write a variation of a story (or ten, or a million) about what it might have done given these problem parameters?

[–] Architeuthis@awful.systems 3 points 5 months ago* (last edited 5 months ago)

23-2: Leaving something to run for 20-30 minutes expecting nothing and actually getting a valid and correct result: new positive feeling unlocked.

Now to find out how I was ideally supposed to solve it.

[–] Architeuthis@awful.systems 2 points 5 months ago* (last edited 5 months ago) (1 children)

If nothing else, you've definitely stopped me forever from thinking of jq as SQL for JSON. Depending on how much I hate myself by next year, I think I might give Kusto a shot for AoC '25.

[–] Architeuthis@awful.systems 2 points 5 months ago* (last edited 5 months ago)

22-2 commentary: I got a different solution from the one given on the site for the example data; the sequence starting with 2 did not yield the expected solution pattern at all, and the one I actually got gave more bananas anyway.

The algorithm gave the correct result for the actual puzzle data though, so I'm leaving it well alone.

Also, the problem had a strong map/reduce vibe, so I started out with the sequence generation and subsequent transformations already parallelized from pt 1, but ultimately it wasn't that intensive a problem.
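For anyone who skipped this one, the sequence generation above refers to the puzzle's published secret-number evolution rules (mix with XOR, prune mod 2^24); this is a minimal Python sketch of those stated rules, not my actual parallelized solution:

```python
PRUNE = 16777216  # 2**24, per the puzzle's prune rule

def next_secret(s: int) -> int:
    """One evolution step: multiply/divide, mix (XOR), prune (mod 2**24)."""
    s = (s ^ (s * 64)) % PRUNE
    s = (s ^ (s // 32)) % PRUNE
    s = (s ^ (s * 2048)) % PRUNE
    return s

def prices(seed: int, n: int = 2000) -> list[int]:
    """Last digit of each successive secret is that step's banana price."""
    s, out = seed, []
    for _ in range(n):
        s = next_secret(s)
        out.append(s % 10)
    return out
```

The part-2 search is then a matter of bucketing each buyer's price sequence by its last four price deltas and summing bananas per bucket.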

Toddler's sick (but getting better!), so I've been falling behind, oh well. I doubt I'll be doing 24 & 25 on their release days either, as the off-days and festivities start kicking in.

[–] Architeuthis@awful.systems 19 points 5 months ago* (last edited 5 months ago) (2 children)

I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

[–] Architeuthis@awful.systems 20 points 5 months ago (6 children)

What new AI abilities? LLMs aren't pokemon.

[–] Architeuthis@awful.systems 15 points 5 months ago* (last edited 5 months ago) (5 children)

Slate Scott just wrote about a billion words of extra-rigorous prompt-anthropomorphizing fanfiction on the subject of the paper; he called the article When Claude Fights Back.

Can't help but wonder if he's just a critihype-enabling useful idiot who refuses to know better, or if he's being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is even meaningful.

edit: The claude syllogistic scratchpad also makes an appearance. It's that thing where we pretend they have a module that gives you access to the LLM's inner monologue, complete with privacy settings, instead of just recording the result of someone prompting a variation of "So what were you thinking when you wrote so and so? Remember, no one can read what you reply here." Cue a bunch of people in the comments moving straight into wondering if Claude has qualia.

[–] Architeuthis@awful.systems 26 points 5 months ago* (last edited 5 months ago) (4 children)

Rationalist debatelord org Rootclaim, who in early 2024 lost a $100K bet by failing to defend the covid lab leak theory against a random ACX commenter, will now debate millionaire covid vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, with the loser giving up $1M.

One would assume this to be a slam dunk, but then again one would also assume that people who founded an entire organization around establishing ground truth via rationalist debate would actually be good at rationally debating.

[–] Architeuthis@awful.systems 30 points 5 months ago

It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly comb its output instead of doing original work, and don't mind putting your name on low-quality derivative slop in the first place.
