i don't think that can be quite right, as illustrated by an extreme example: consider a game where player 1's first move is to choose "win" or "hypergo." if player 1 chooses win, they win immediately. if player 1 chooses hypergo, a game of Go begins on a 1,000,000,000 x 1,000,000,000 board, and whoever wins that subgame wins the whole game. for player 1, the 'true' position eval function must be in some sense incredibly complicated, because it has to cover all the hypergo nonsense. but player 1's strategy can be compressed to "choose win" without opening up any counterattacks
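here's a quick sketch of that toy game in Python (the names and setup are made up for illustration, nothing rigorous), just to make the point concrete: the optimal policy at the root never has to look at the hypergo branch at all, even though the 'true' value function would have to say something about every hypergo position

```python
# toy sketch of the "win or hypergo" game (illustrative names, not a real engine).
# the point: player 1's optimal policy at the root never evaluates the hypergo
# branch, even though the 'true' eval function would have to assign a value to
# every position of a 10^9 x 10^9 Go game.

def evaluate_hypergo(position):
    """stand-in for the 'true' eval of the hypergo subgame.
    computing this exactly is astronomically hard; we never call it."""
    raise NotImplementedError("not solving 10^9 x 10^9 Go here")

def player1_policy(root_moves):
    # a guaranteed-win move dominates everything: no need to look further
    if "win" in root_moves:
        return "win"
    # only if no immediate win existed would we have to reason about hypergo
    return max(root_moves, key=evaluate_hypergo)

print(player1_policy(["win", "hypergo"]))  # -> "win", hypergo branch untouched
```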
more generally, I suspect that as soon as you try to compare some notion of a 'true' position eval function against the eval functions you can actually generate, you're going to have a very difficult time making correct and clear predictions. the reason I say this is that reasoning rigorously about such a 'true' function is essentially the domain of combinatorial game theory (not the same thing as "game theory"), and there are few if any bridges people have managed to build between cgt and practical engines for Go and similar games. so it's probably pretty hard to do
(I know there's a theory of 'temperature' of combinatorial games that I think was developed partly for analyzing Go, but I don't think it has any known relationship to reinforcement-learning-based Go engines)