The major take is: We spell it differently.
I am too dumb/autistic to know what you're conveying here.
take some time and read this
I read it. I appreciated the point that human perception of current AI performance can mislead us, though this is nothing new: people were fooled by ELIZA.
It's a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means: if the machine is given a task, what is the probability it completes the task successfully? Theoretically even an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).
People have benchmarked GPT-4 and it has general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It's below human level overall, I think, but still surprisingly strong given it's emergent behavior from computing tokens.
Serious answer, not from Yudkowsky: the AI doesn't do any of that. It helps people cheat on their homework, write their code and form letters faster, and it brings in revenue. The AI's owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and theoretically this continues until the list of tasks the AI can do includes "most of the labor in a chip fab", GPUs become cheap, and things start to get crazy.
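The loop described above can be sketched as a toy simulation. All the numbers and scaling laws here are invented for illustration; this is a sketch of the compounding mechanism, not a forecast.

```python
# Toy simulation of the revenue -> GPUs -> capability loop.
# Every constant and exponent is made up purely for illustration.

def simulate(years, capability=1.0, gpus=1000.0):
    """Each year: capability earns revenue, revenue buys GPUs,
    and more GPUs raise capability (with diminishing returns)."""
    history = []
    for _ in range(years):
        revenue = capability * 100.0        # revenue scales with capability
        gpus += revenue / 10.0              # spend the revenue on hardware
        capability = gpus ** 0.5 / 10.0     # capability grows sublinearly in compute
        history.append(capability)
    return history

caps = simulate(10)
# As long as revenue keeps buying more compute, capability rises every year.
assert all(b > a for a, b in zip(caps, caps[1:]))
```

The point of the sketch is only that each pass through the loop feeds the next one; whether the real-world exponents make that curve flatten or explode is exactly what's in dispute.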
Same elementary-school logic, sure, but this is how a nuke works.
Just to summarize your beliefs, I think: rationalists are assholes who are wrong about a lot of things, and the singularity (which predates Yud's existence) is not in fact possible by the mechanism I outlined.
I think this is a big crux here. It's one thing if it's a cult around a false belief. It's kind of a problem to sneer at a cult if its core claim happens to be a true law of nature.
Or an analogy: I think GPT-4 is like the data from the Chicago Pile. That data was enough to convince the domain experts of the time that a nuke was going to work, to the point they didn't even test Little Boy before using it; you believe otherwise. Clearly machine generality is possible, and clearly it can solve every problem you named, including, with the help of humans, ordering every part off Digikey, loading the pick-and-place, inspecting the boards, building the wire harnesses, and so on.
Just to be clear, you can build your own telescope now and see the incoming spacecraft.
Right now you can go task GPT-4 with solving a problem at roughly undergrad-physics level, let it use plugins, and it will generally get it done. It's real.
Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.
It's 8 instances and the MoE architecture is a little more complex than that.
Just to engage with the high school bully analogy, the nerd has been threatening to show up with his sexbot bodyguards that are basically T-800s from terminator for years now, and you've been taking his lunch money and sneering. But now he's got real funding and he goes to work at a huge building and apparently there are prototypes of the exact thing he claims to build inside.
The prototypes suck...for now...
No, literally, the course material has the word "belief". It means "at this instant, what is the estimate of ground truth".
Those shaky blue lines that show where your Tesla on Autopilot thinks the lane is? That's its belief.
English and software have lots of overloaded terms.
(1), (2): since you claim you can't measure this even as a thought experiment, there's nothing to discuss. (3): I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, and load the next set of trucks; those trucks go to part-assembly plants, where robots unload them, feed the materials into CNC machines, mill the parts, inspect the output, and pack it onto more trucks... culminating in robots assembling new robots.
It is totally fine if some human labor hours are still required, this cheapens the cost of robots by a lot.
Regarding (3) : the specific mechanism would be AI that works like this:
Millions of hours of video of human workers doing tasks in the above domain, plus all video accessible to the AI company -> a tokenized, compressed description of the human actions -> an LLM-like model. The LLM-like model is thus predicting "what would a human do". You then need a model to translate that "what" to robotic hardware that is built differently than humans, and this is called the "foundation model": you use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice to improve on the foundation model.
The long story short of all these tech-bro terms is robotic generality: the model will be able to control a robot to do every easy or medium-difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don't need to do a lot of engineering work to get a robot to do a million different jobs.
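To make the "tokenized compressed description of the human actions" step concrete, here is a minimal sketch of one common approach: quantizing a continuous action (say, a joint angle) into discrete tokens that an LLM-like model could predict. The bin count and angle range are assumptions chosen for illustration, not any lab's actual scheme.

```python
# Hypothetical sketch: turning continuous actions into discrete tokens.
# n_bins, lo, and hi are invented parameters for illustration only.

def action_to_token(angle_deg, n_bins=256, lo=-180.0, hi=180.0):
    """Quantize a joint angle into one of n_bins discrete tokens."""
    angle_deg = max(lo, min(hi, angle_deg))      # clamp to the valid range
    frac = (angle_deg - lo) / (hi - lo)          # normalize to [0, 1]
    return min(int(frac * n_bins), n_bins - 1)

def token_to_action(token, n_bins=256, lo=-180.0, hi=180.0):
    """Decode a token back to the center of its bin."""
    return lo + (token + 0.5) * (hi - lo) / n_bins

tok = action_to_token(45.0)
recovered = token_to_action(tok)
# Round-trip error is bounded by one bin width (360/256 degrees here).
assert abs(recovered - 45.0) < 360.0 / 256
```

Once actions are tokens, "predict what a human would do" becomes ordinary next-token prediction over those action sequences, which is why the LLM framing carries over.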
Multiple startups and deepmind are working on this.
Personally I imagine him as the leader of a flying-saucer cult where an alien vehicle suddenly is actually arriving. He's running around panicking, tearing his hair out, because this wasn't actually what he planned; he just wanted money and bitches as a cult leader. It's one thing to say the aliens will beam every cult member up and take them to paradise, but if you see a multi-kilometer alien vehicle getting closer to Earth, whatever its intentions are, no one is going to be taken to paradise...
Software you write can have a "belief" as well. The course I took on it had us write Kalman filters, where you start with some estimate of a quantity. That estimate is your "belief", and you have a variance as well.
With each measurement you get a (value, variance) pair, where the variance is derived from the quality of the sensor that produced it.
It's an overloaded word because humans are often unwilling to update their beliefs unless they are about simple things, like "I believe the forks are in the drawer to the right of the sink". You believe that because you think you saw them there last. There is uncertainty: you might have misremembered, since your own memory is unreliable and your eyes are unreliable. If it's your kitchen and you've had thousands of observations, your belief has low uncertainty; if it's a new place, your belief has high uncertainty.
If you go and look right now and the forks are in fact there you update your beliefs.
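The fork example is literally the one-dimensional Kalman update from the course material: fuse a prior belief (value, variance) with a measurement (value, variance). The math is the standard textbook form; the fork-drawer numbers are invented for illustration.

```python
# 1-D Kalman-style belief update: fuse a prior belief with a measurement,
# each given as (value, variance). The example numbers are made up.

def update_belief(belief, measurement):
    prior_x, prior_var = belief
    meas_x, meas_var = measurement
    k = prior_var / (prior_var + meas_var)  # Kalman gain: weight the less-uncertain source
    new_x = prior_x + k * (meas_x - prior_x)
    new_var = (1 - k) * prior_var           # fused estimate is more certain than the prior
    return new_x, new_var

# Your own kitchen: a strong prior that the forks are at "position" 1.0.
belief = (1.0, 0.01)        # thousands of past observations -> low variance
measurement = (1.0, 0.25)   # one glance just now -> higher variance
x, var = update_belief(belief, measurement)
assert var < 0.01           # looking again tightens the belief
```

Note that even a noisy measurement that agrees with the prior still shrinks the variance, which is the formal version of "you go and look, and now you're more sure the forks are there".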
Sorry sir:
*I have to ask, on the matter of (2): why?* I think I answered this.
I am not sure what you are asking here, sir. It's well known to those in the AI industry that a profound change is upon us, that GPT-4 shows generality for its domain, and that robotic generality is likely also possible using a variant technique. So individuals unaware of this tend to be retired people who have no survival need to learn any new skills, like my boomer relatives. I apologize for using an ageist slur.