It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don't mind putting your name on low quality derivative slop in the first place.
16 commentary
DFS (it's all DFS all the time now, this is my life now, thanks AoC), pruned by an unless-I've-ever-passed-through-here-with-a-smaller-score-before check, worked well enough for Pt1. In Pt2, in order to get all the paths I only had to loosen the filter by a) not pruning on equal scores and b) only pruning if the direction also matched.
Pt2 was the easier part for me: while at first it took me a bit to land on lifting ideas from Dijkstra's algorithm to solve the challenge maze before the sun goes supernova, I tend to store the paths for debugging anyway, so it was trivial to group them by score and count the distinct tiles.
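The pruning described above can be sketched roughly like this: DFS over (position, direction) states, pruning only when the same state was already reached with a strictly smaller score. The maze below is a toy example, and the costs (1 per step, a flat 1000 per change of direction) are a simplification of the puzzle's 90-degree turn cost, so treat this as an illustration rather than a drop-in solution.

```python
from collections import defaultdict

# Toy maze: S = start (facing East), E = end, # = wall.
MAZE = [
    "#####",
    "#..E#",
    "#.#.#",
    "#S..#",
    "#####",
]
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N

def best_paths(maze):
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "S")
    end = next((r, c) for r, row in enumerate(rows)
               for c, ch in enumerate(row) if ch == "E")
    best = defaultdict(lambda: float("inf"))  # (pos, dir) -> lowest score seen
    paths, best_score = [], [float("inf")]

    def dfs(pos, d, score, path):
        # Prune only on strictly worse scores, so all tied paths survive.
        if score > best[(pos, d)] or score > best_score[0]:
            return
        best[(pos, d)] = score
        if pos == end:
            if score < best_score[0]:
                paths.clear()          # a cheaper finish invalidates old paths
                best_score[0] = score
            paths.append(path)
            return
        for nd, (dr, dc) in enumerate(DIRS):
            nr, nc = pos[0] + dr, pos[1] + dc
            if rows[nr][nc] != "#":
                turn = 0 if nd == d else 1000  # simplified turn cost
                dfs((nr, nc), nd, score + 1 + turn, path + [(nr, nc)])

    dfs(start, 0, 0, [start])
    tiles = {t for p in paths for t in p}  # Pt2: distinct tiles on best paths
    return best_score[0], len(tiles)
```

Storing the full path per finish is wasteful on a real input, but as noted above it makes the Pt2 "count distinct tiles" step a one-liner over the grouped paths.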
And all that stuff just turned out to be true
Literally what stuff, that AI would get somewhat better as technology progresses?
I seem to remember Yud specifically wasn't that impressed with machine learning and thought so-called AGI would come about through ELIZA type AIs.
In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
Pt2 commentary
I randomly got it by sorting for the most robots in the bottom left quadrant while looking for robot concentrations, it was number 13. Despite being in the centre of the grid it didn't show up when sorting for most robots in the middle 30% columns of the screen, which is kind of wicked, in the traditional sense.
The first things I tried were looking for horizontal symmetry (find a grid where every line has the same number of robots to the left and to the right of the middle axis; there is none, and the tree turned out to span about a third to a quarter of the matrix on each side) and looking for grids where the number of robots increased towards the bottom of the image (didn't work either, because it turns out the tree sits in the middle of the screen).
I think I was on the right track with looking for concentrations of robots; I wish I'd thought of ranking the matrices by the number of robots lined up without gaps. Don't know about minimizing the safety score: sorting by it didn't put the tree anywhere near the first few dozen results.
Realizing that the patterns start recycling after ~10,000 iterations simplified things considerably.
The tree on the terminal output
(This is three matrices separated by rows of underscores)
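The concentration-ranking idea can be sketched as follows: simulate each frame (positions wrap on a 101x103 grid, per the puzzle, which also means the whole pattern recycles after at most 101 * 103 = 10,403 steps) and score frames by how many robots touch another robot. The robot list in the usage below is a made-up stand-in, not real input.

```python
WIDTH, HEIGHT = 101, 103  # grid size from the puzzle

def positions_at(robots, t):
    # robots: list of ((px, py), (vx, vy)); positions wrap around,
    # so everything repeats after WIDTH * HEIGHT steps at the latest.
    return [((px + vx * t) % WIDTH, (py + vy * t) % HEIGHT)
            for (px, py), (vx, vy) in robots]

def clustering(points):
    # Score a frame: number of robots with at least one direct neighbour.
    occupied = set(points)
    return sum(any((x + dx, y + dy) in occupied
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
               for x, y in occupied)

def densest_frame(robots, limit=WIDTH * HEIGHT):
    # Rank all frames in one full cycle by clustering score.
    return max(range(limit), key=lambda t: clustering(positions_at(robots, t)))
```

For example, with two hypothetical robots, `densest_frame([((0, 0), (1, 0)), ((2, 0), (0, 0))], limit=3)` picks the step where they end up adjacent. A heuristic like this only surfaces candidates to eyeball; it doesn't prove the densest frame is the tree.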
13 commentary
Solved p1 by graph search before looking a bit closer at the examples and going, oh...
In pt2 I had some floating-point weirdness when solving for the keypress counts: I was checking whether the key presses were integers (can't press button A five and a half times, after all) by checking if A = floor(A), and sometimes A would drop to the number below when floored, i.e. it was in reality (A-1).999999999999999. Whatever, I rounded it away, but I did spend a stupid amount of time on it because it didn't happen in the example set.
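The floor(A) trouble above goes away entirely if the 2x2 system is solved in exact integer arithmetic (Cramer's rule) instead of floats. A sketch, using the puzzle's first published example machine (A: X+94, Y+34; B: X+22, Y+67; prize at X=8400, Y=5400) as the worked numbers:

```python
def presses(ax, ay, bx, by, px, py):
    # Solve a*ax + b*bx = px, a*ay + b*by = py for non-negative integers
    # a, b via Cramer's rule, entirely in integer arithmetic.
    det = ax * by - ay * bx
    if det == 0:
        return None  # collinear buttons; would need a separate search
    a, ra = divmod(px * by - py * bx, det)
    b, rb = divmod(ax * py - ay * px, det)
    if ra or rb or a < 0 or b < 0:
        return None  # division not exact: no integer solution
    return a, b
```

With the example machine, `presses(94, 34, 22, 67, 8400, 5400)` gives 80 presses of A and 40 of B. Since nothing is ever a float, there is no (A-1).999... case to round away, and it behaves identically on the huge Pt2 prize coordinates.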
If you never come up with a marketable product you can remain a startup indefinitely.
Getting Trump reelected should count as 'billionaire philanthropy'.
thinkers like computer scientist Eliezer Yudkowsky
That's gotta sting a bit.
This was such a chore to read; it's basically quirk-washing TREACLES. It's like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions, all while fortifying a bubble of accepted truths that are strangely amenable to letting rich people do whatever the hell they want.
Writing a 7,000-to-8,000-word insider expose on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.
Rationalist debatelord org Rootclaim, which in early 2024 lost a $100K bet after failing to defend the covid lab-leak theory against a random ACX commenter, will now debate millionaire covid-vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, with the loser giving up $1M.
One would assume this to be a slam dunk, but then again one would also assume that people who founded an entire organization around establishing ground truths via rationalist debate would actually be good at rationally debating.