[–] yeahiknow3 3 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI, and I personally don’t care whether we do. The question was: can we? The answer is No.

[–] yeahiknow3 2 points 2 weeks ago* (last edited 2 weeks ago) (13 children)

A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

Hope that helps!

[–] yeahiknow3 2 points 2 weeks ago* (last edited 2 weeks ago) (11 children)

Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make one at all, crazy or not.

[–] yeahiknow3 3 points 2 weeks ago* (last edited 2 weeks ago) (15 children)

That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
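
For reference, a rough statement of the first incompleteness theorem (in its Rosser form; the symbols T, Q, and G_T are just standard placeholder notation, nothing specific to this thread):

\[
T \text{ consistent, effectively axiomatized, and } T \supseteq Q
\;\Longrightarrow\;
\exists\, G_T :\; T \nvdash G_T \ \text{and} \ T \nvdash \neg G_T
\]

In plain terms: no consistent, mechanically specifiable axiom system extending basic arithmetic (Q here is Robinson arithmetic) can prove or refute every arithmetic sentence, which is the sense in which mathematics can’t be fully axiomatized.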

[–] yeahiknow3 3 points 2 weeks ago* (last edited 2 weeks ago) (30 children)

Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

[–] yeahiknow3 33 points 2 weeks ago* (last edited 2 weeks ago)

But expending as little energy as possible while consuming as much of it as possible, all else being equal, is LITERALLY the natural order. That’s our evolutionary programming. We rely on our big brains to achieve more complex instrumental goals.

OP’s reasoning is sound, because it suggests that certain choices are made out of ignorance and are therefore not rational (i.e., “not natural”).

Ultimately, more information leads to MORE diet and exercise, but LESS of the “trad” lifestyle.

[–] yeahiknow3 20 points 2 weeks ago* (last edited 2 weeks ago) (32 children)

The only way to create AGI is by accident. I can’t adequately stress how much we haven’t the first clue how consciousness works (appropriately called the Hard Problem). I don’t mean we’re far off; I mean we don’t even have a working theory, just half a dozen untestable (if fascinating) hypotheses. Hell, we can’t even agree on whether insects have emotions (probably not?), let alone explain subjective experience.

[–] yeahiknow3 1 points 2 weeks ago

No, it is not.

[–] yeahiknow3 148 points 2 weeks ago (17 children)

Religion is a monstrous evil.
