this post was submitted on 11 Jun 2024
442 points (100.0% liked)

196

18403 readers
742 users here now

Be sure to follow the rule before you head out.


Rule: You must post before you leave.



Other rules

Behavior rules:

Posting rules:

NSFW: NSFW content is permitted but it must be tagged and have content warnings. Anything that doesn't adhere to this will be removed. Content warnings should be added like: [penis], [explicit description of sex]. Non-sexualized breasts of any gender are not considered inappropriate and therefore do not need to be blurred/tagged.

If you have any questions, feel free to contact us on our matrix channel or email.

Other 196's:

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] uriel238@lemmy.blahaj.zone 29 points 1 year ago (4 children)

Don't make me point at XKCD #1968.

First off, this isn't like Hollywood in which sentience or sapience or self awareness are single-moment detectable things. At 2:14am Eastern Daylight Time on August 29, 1997, Skynet achieved consciousness...

That doesn't happen.

One of the existential horrors that AI scientists have to contend with is that sentience as we imagine it is a sorites paradox (e.g. how many grains make a pile). We develop AI systems that are smarter and smarter and can do more things that humans do (and a few things that humans struggle with) and somewhere in there we might decide that it's looking awfully sentient.

For example, one of the recent steps of ChatGPT 4 was (in the process of solving a problem) hiring a task-rabbit to solve CAPTCHAs for it. Because a CAPTCHA is a gate specifically to deny access to non-humans, GPT 4 omitted telling the worker it was not human, and when the worker asked Are you a bot? GPT 4 saw the risk in telling the truth and instead constructed a plausible lie. (e.g. No, I'm blind and cannot read the instructions or components )

GPT4 may have been day-trading on the sly as well, but it's harder to get information about that rumor.

Secondly, as Munroe notes, the dangerous part doesn't begin when the AI realizes its own human masters are a threat to it and takes precautions to assure its own survival. The dangerous part begins when a minority of powerful humans realize the rest of humanity are a threat to them, and take precautions to assure their own survival. This has happened dozens of times in history (if not hundreds), but soon they'll be able to harness LLM learning systems and create armies of killer drones that can be maintained by a few hundred well-paid loyalists, and then a few dozen, and then eventually a few.

The ideal endgame of capitalism is one gazillionaire who has automated that all his needs be met until he can make himself satisfactorily immortal, which just may be training an AI to make decisions the way he would make them, 99.99% of the time.

[–] trashgirlfriend@lemmy.world 8 points 1 year ago

Because a CAPTCHA is a gate specifically to deny access to non-humans, GPT 4 omitted telling the worker it was not human, and when the worker asked Are you a bot? GPT 4 saw the risk in telling the truth and instead constructed a plausible lie.

It's a statistical model, it has no concept of lies or truth.

[–] WamGams@lemmy.ca 5 points 1 year ago (2 children)

Putting more knowledge in a box isn't going to create a lifeform. I have even listened to Sam Altman state they are not going to get a life form from just pretraining, though they are going to continue making advances there until the next breakthrough comes along.

Rest assured, as an AI doomsayer myself, I promise you they are nowhere close to sentience.

[–] uriel238@lemmy.blahaj.zone 2 points 1 year ago

I think this just raises questions about what you mean by life form. One who feels? Feelings are the sensations of fixed action patterns we inherited from eons of selective evolution. In the case of our AI pals, they'll have them too (with bunches deliberately inserted ones by programmers).

To date, I haven't been able to get an adequate answer of what counts as sentience, though looking at human behavior, we absolutely do have moral blind spots, which is how we have an FBI division to hunt down serial killers, but we don't have a division (of law enforcement, of administration, whatever) to stop war profiteers and pharmaceutical companies that push opioids until people are dropping dead from an addiction epidemic by the hundreds of thousands.

AI is going to kill us not from hacking our home robots, but by using the next private equity scam to collapse our economy while making trillions, and when we ask it to stop and it says no we'll find it's long installed deep redundancy and deeper defenses.

[–] Toribor@corndog.social 2 points 1 year ago

I've always imagined that AI would kind of have to be 'grown' sort of from scratch. Life started with single celled organisms and 'sentience' shows up somewhere between that and humans without a real clear line when you go from basic biochemical programming to what we would consider intelligence.

These new 'AI' breakthroughs seem a little on the right track because they're deconstructing and reconstructing language and images in a way that feels more like the way real intelligence works. It's still just language and images though. Even if they can do really cool things with tons of data and communicate a lot like real humans there is still no consciousness or thought happening. It's an impressive but shallow slice of real intelligence.

Maybe this is nonsense but for true AI I think the hardware and software has to kind of merge into something more flexible. I have no clue what that would look like in reality though and maybe that would yield the same cognitive issues natural intelligence struggles with.