Text in AI-generated images will never not be funny to me. N the most n'tural hnertis indeed.
Making me learn how to do things the right way is premature optimization
Wow, this comment definitely caught my attention! "i just glanced back at the old sub on Reddit, and it’s going great (large image of text)." Sounds like the old sub on Reddit is going great! It reminds me of how people post on Reddit about things. I'm curious to hear what's in the large image of text. Have any of you ever checked old subs on Reddit? How were they going? Let's dive into this intriguing topic together!
[Time Cube] has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The [website] itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially because the author is outside the normal academic system), having few citations relative to most academic works. The work is incredibly ambitious, attempting to rebase philosophical metaphysics on a new unified foundation. As a short work, it can't fully deliver on this ambition; it can provide a "seed" of a philosophical research program aimed at understanding the world, but few implications are drawn out.
Exchange presented without comment:
My prediction: the advances in technology driven by AI will far surpass what it consumes in energy.
Looking only at the energy consumption of current models is extremely short-sighted. If AI creates a new material, a new solar cell, or an advanced fusion reactor, all of humanity jumps forward.
Furthermore, new generations of AI accelerators and new algorithms will improve efficiency by orders of magnitude; it's still early days.
For every good thing, you can come up with a bad one.
The material created will be a better poison or virus. The algorithm that keeps the fusion tokamak from going boom will be at best 99% correct. The new solar cell? It will require more exotic materials than the current ones.
Blind optimism is a vice we cannot afford.
The post you're responding to doesn't argue from blind optimism; it argues for a reasonably expected gain in net beneficial effects.
Everything about Zack is sad.
I have to say, if you look past the, well, you know, stuff, he's actually pretty decent at injecting pathos into the posts about his personal life. His writing does a good job bringing you into his extremely depressing/self-loathing inner world -- you really feel for the guy, or at least I do. That said, it's this exact effect which makes me think he is probably not perceiving things as lucidly as he thinks he is. Depression can feel like clarity, but that's no way to live.
of all the ways we’ve tried so far, Substack is working the best.
The sheer arrogance of this quote is really something to behold. It's "working the best" by what metric, exactly, sir? And who's the "we" that have tried various ways so far? Because it's certainly not "people on the internet," many of whom have developed ways of dealing with Nazis which are significantly more effective than the Substack method of "literally give them money to use our platform."
When I was a kid [Net Nanny](https://en.wikipedia.org/wiki/Net_Nanny) was totally and completely lame, but the whole millennial generation grew up to adore content moderation. A strange authoritarian impulse.
Me when the mods unfairly ban me from my favorite video game forum circa 2009
(source: first HN thread)
What I don't get is, ok, even granting the insane Eliezer assumption that LLMs can become arbitrarily smart and learn to reverse hash functions or whatever because it helps them predict the next word sometimes... humans don't entirely understand biology ourselves! How is the LLM going to acquire the knowledge of biology to know how to do things humans can't do when it doesn't have access to the physical world, only things humans have written about it?
Even if it is using its godly intelligence to predict the next word, wouldn't it only be able to predict the next word as it relates to things that have already been discovered through experiment? What's his proposed mechanism for it to suddenly start deriving all of biology from first principles?
I guess maybe he thinks all of biology is "in" the DNA and it's just a matter of simulating the 'compilation' process with enough fidelity to have a 100% accurate understanding of biology, but that just reveals how little he actually understands the field. Like, come on dude, that's such a common tech nerd misunderstanding of biology that xkcd made fun of it, get better material
Well all I know is I definitely trust the research and knowledge and informed-ness about biological sex of the person who uses the word "hermaphroditism" with regards to humans. Now that's a person who knows what they're talking about, I think to myself
During the interview, Kat openly admitted to not being productive but shared that she still appeared to be productive because she gets others to do work for her. She relies on volunteers who are willing to do free work for her, which is her top productivity advice.
Productivity pro tip: you can get a lot more done if you can just convince other people to do your work for you for free
If you think of LLMs as being akin to lossy text compression of a set of text, where the compression artifacts happen to also result in grammatical-looking sentences, the question you eventually end up asking is "why is the compression lossy? What if we had the same thing but it returned text from its database without chewing it up first?" and then you realize that you've come full circle and reinvented search engines
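The analogy can be made concrete with a toy sketch. Everything here is invented for illustration (a real LLM does nothing this simple): a "lossy" recall that mangles the stored text stands in for the model, and a "lossless" lookup that returns the text verbatim stands in for the search engine you've come full circle back to.

```python
# Toy sketch of the lossy-compression analogy. The names and the
# "compression" scheme are made up for illustration only.

corpus = {
    "fox": "The quick brown fox jumps over the lazy dog.",
}

def lossy_recall(key):
    # Stand-in for the LLM: returns something derived from the source
    # but not guaranteed to match it exactly -- a "compression artifact"
    # that still reads like a grammatical sentence.
    words = corpus[key].split()
    return " ".join(words[::2])  # detail is lost in reconstruction

def search(key):
    # Stand-in for a search engine: returns the stored text untouched.
    return corpus[key]

print(lossy_recall("fox"))  # "The brown jumps the dog." -- detail lost
print(search("fox"))        # the original sentence, verbatim
```

The point of the sketch is only the contrast: both functions answer from the same "database," but one chews the text up first and the other doesn't.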