[-] BigMuffin69@awful.systems 6 points 2 months ago

Take a look w/ your own googly eyes

[-] BigMuffin69@awful.systems 5 points 2 months ago

thanks for the tip! πŸ™

[-] BigMuffin69@awful.systems 6 points 2 months ago* (last edited 2 months ago)

These kids really think if they pick up some trailer park rock candy they can become Paul Erdős. Hate to say it lads, he was simply built different.

[-] BigMuffin69@awful.systems 6 points 3 months ago* (last edited 3 months ago)

it's a compliment boo 😘 <3 u habibi

[-] BigMuffin69@awful.systems 5 points 3 months ago

it was a rough week for me too! the wife and I have been battling covid, think we are through the worst of it though.

[-] BigMuffin69@awful.systems 6 points 5 months ago* (last edited 5 months ago)

Not prying! Thankful to say, none of my coworkers have ever brought up ye olde basilisk; the closest anyone has gotten is jokes about the LLMs taking over, and never too seriously.

No, I don't find the acausal robot god stuff too weird, b/c we already had Pascal's wager. But holy shit, people actually full-throatedly believing it to the point that they're having panic attacks, wtf. Like:

  1. Full human body simulation -> my brother-in-law is a computational chemist, and they spend huge amounts of compute modeling simple few-atom systems. To build a complete human simulation, you'd be computing every force interaction for ~10^28 atoms. This is ludicrous.

  2. The chucklefucks posing this are suggesting that once the robot god can sim you (which, again, doubt), it's going to use that simulation of you to model your decisions and optimize against you.

So we have an optimization problem like:

min_{x,y} f(x) s.t. y in argmin{ g(x,y') : (x,y') in X*Y }

where x is the decision variable and f(x) the objective function 🐍 is trying to minimize, and y and g(x,y) are the decision and objective of me, the simulated human, who has goals of his own (don't get turned into paperclips).

This is a bilevel optimization problem, and it's very, very nasty to solve. Even in the nicest case possible, where f and g are somehow convex functions and X and Y are convex sets (which is an insane ask, considering y and g entail a complete human sim), this problem is provably NP-hard.
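To make the nesting concrete, here's a throwaway sketch (everything in it is made up by me, toy functions and all) of the brute-force approach: for every candidate leader decision you have to solve an entire inner optimization before you can evaluate the outer objective even once.

```python
import numpy as np

# Toy bilevel problem, brute-forced on a grid. f, g, and the grids are
# made-up stand-ins, not anyone's real model.
X = np.linspace(-2, 2, 401)   # leader's (🐍's) feasible decisions
Y = np.linspace(-2, 2, 401)   # follower's (my) feasible decisions

f = lambda x, y: (x - 1) ** 2 + x * y         # leader objective (minimize)
g = lambda x, y: (y - x) ** 2 + 0.1 * y ** 2  # follower objective (minimize)

best = (np.inf, None, None)
for x in X:
    # Inner problem: for THIS x, find the follower's best response.
    # Every leader candidate requires a full inner solve; this nesting
    # is exactly what makes bilevel problems so nasty.
    y_star = Y[np.argmin(g(x, Y))]
    val = f(x, y_star)
    if val < best[0]:
        best = (val, x, y_star)

print("leader value %.3f at x=%.3f, follower response y=%.3f" % best)
```

Even this toy needs 401 * 401 evaluations; crank up the dimensions and the grid dies instantly, and the convex/NP-hard result above says the smart approaches don't save you either.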

Basically, to build the acausal god, first you need a computer larger than the known universe, and even that probably isn't sufficient.
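Napkin math, with numbers I'm hand-waving (~10^28 atoms per person, naive all-pairs forces, femtosecond MD timesteps):

```python
# Hand-wavy napkin math; the exact constants don't matter, the exponents do.
N_ATOMS = 1e28                      # rough atom count for one human body
pairs_per_step = N_ATOMS ** 2 / 2   # naive all-pairs force evaluations
dt = 1e-15                          # typical molecular-dynamics step, ~1 fs
steps_per_sim_second = 1 / dt

print(f"force evals per timestep:   {pairs_per_step:.0e}")
print(f"force evals per sim second: {pairs_per_step * steps_per_sim_second:.0e}")
# ~5e70 evaluations to simulate ONE second of ONE person -- before the
# robot god has even started the outer optimization over its decisions.
```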

Weird note: while I was in academia, I actually did some work on modeling the constraint that y is a minimizer of a follower problem by training an ANN to act as a proxy for g(x,*), and then encoding a representation of the trained network into a single-level optimization problem... we got some nice results for some special low-dimensional problems where we had lots of data 🦍 🦍 🦍 🦍 🦍
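If anyone's curious what that looks like, here's a hypothetical reconstruction of the general idea, not the actual work: fit a small net g_hat(x,y) ~ g(x,y) from samples, then swap the "y minimizes g(x,*)" constraint for the stationarity condition dg_hat/dy = 0, enforced here as a penalty so everything collapses to a single level. The functions, architecture, and penalty weight are all invented for the demo.

```python
import torch

torch.manual_seed(0)

def g_true(x, y):                 # made-up follower objective
    return (y - torch.sin(x)) ** 2

def f_leader(x, y):               # made-up leader objective
    return (x - 1.0) ** 2 + y ** 2

# Step 1: fit g_hat on random (x, y) samples.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
fit = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    xy = torch.rand(256, 2) * 4 - 2          # samples in [-2, 2]^2
    loss = ((net(xy).squeeze() - g_true(xy[:, 0], xy[:, 1])) ** 2).mean()
    fit.zero_grad(); loss.backward(); fit.step()

# Step 2: single-level surrogate -- follower optimality becomes the
# smooth stationarity condition d g_hat / d y = 0, added as a penalty.
x = torch.tensor([0.5], requires_grad=True)
y = torch.tensor([0.0], requires_grad=True)
outer = torch.optim.Adam([x, y], lr=1e-2)
for _ in range(2000):
    g_hat = net(torch.stack([x, y], dim=1)).squeeze()
    dg_dy, = torch.autograd.grad(g_hat, y, create_graph=True)
    obj = f_leader(x, y) + 100.0 * dg_dy ** 2   # penalty weight is arbitrary
    outer.zero_grad(); obj.backward(); outer.step()

print(f"x={x.item():.3f}, y={y.item():.3f}, "
      f"true best response sin(x)={torch.sin(x).item():.3f}")
# If the fit is any good, y ends up tracking sin(x), the real best response.
```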

[-] BigMuffin69@awful.systems 4 points 5 months ago

I don't understand why the wife does not simply consume her husband, a parenting style I've developed from observing the noble praying mantis in the wild.

[-] BigMuffin69@awful.systems 6 points 6 months ago* (last edited 6 months ago)

Ugh, this post has me tilted. If your utility function is

max sum_t p(alive_t) * log(spending_t) s.t. savings_t = savings_{t-1}*r + work_t - spending_t,

etc.,

There's no fucking way the optimal solution is to blow all your money now, because the disutility of living in poverty for decades is so high. It's the same reason people pay for insurance: no one expects their house to burn down tomorrow, but protecting yourself against the risk is the 100% correct decision.
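You can check this numerically in about twenty lines. Throwaway sketch below; every number in it (income, horizon, survival curve, return) is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

T = 40                                       # periods ("years") left
r = 1.03                                     # gross return on savings
work = np.array([50.0] * 25 + [0.0] * 15)    # income: work 25y, retire 15y
p_alive = np.linspace(1.0, 0.7, T)           # survival probs, made up

def neg_utility(spend):
    savings, u = 0.0, 0.0
    for t in range(T):
        savings = savings * r + work[t] - spend[t]
        u += p_alive[t] * np.log(spend[t])
        u -= 1000.0 * min(savings, 0.0) ** 2  # soft no-borrowing penalty
    return -u

x0 = np.full(T, 20.0)                        # feasible starting guess
res = minimize(neg_utility, x0, bounds=[(1e-3, None)] * T, method="L-BFGS-B")
print(np.round(res.x, 1))
# Spending comes out smoothed across the whole horizon; "blow it all
# in year one" is nowhere near the optimum.
```

Log utility punishes near-zero consumption brutally hard, which is exactly the "decades of poverty" disutility doing its work.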

Idk, they're the Rationalists^{tm}, so what the hell do I know.

[-] BigMuffin69@awful.systems 4 points 7 months ago

This is why I'll never succeed in software engineering. My first thought was to make the penis part of the internet of things.
