Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 6 points 1 month ago

Yeah indeed, had not even thought of the timegap. And it is such a bit of bullshit misdirection, very Muskian, to pretend that this fake transparency in any way solves the problem. We don't know what the bad prompt was nor who did it, and as shown here, this fake transparency prevents nothing. Really wished more journalists/commentators were not just free pr.

[–] Soyweiser@awful.systems 3 points 1 month ago

Im reminded of the cartoon bullets from who framed rodger rabbit.

[–] Soyweiser@awful.systems 11 points 1 month ago

LLMs cannot fail, they can only be prompted incorrectly. (To be clear, as I know there will be people who think this is good, I mean this in a derogatory way)

[–] Soyweiser@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Think this already happened, not this specific bit, but ai involved shooting. Esp considering we know a lot of black people have been falsely arrested due to facial ID already. And with the gestapofication of the USA that will just get worse. (Esp when the police go : no regulations on AI also gives us carte blance. No need for extra steps).

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago) (2 children)

Remember those comments with links in them bots leave on dead websites? Imagine instead of links it sets up an AI to think of certain specific behaviour or people as immoral.

Swatting via distributed hit piece.

Or if you manage to figure out that people are using a LLM to do input sanitization/log reading, you could now figure out a way to get an instruction in the logs and trigger alarms this way. (E: im reminded of the story from the before times, where somebody piped logging to a bash terminal and got shelled because somebody send a bash exploit which was logged).

Or just send an instruction which changes the way it tries to communicate, and have the LLM call not the cops but a number controlled by hackers which pays out to them, like the stories of the A2P sms fraud which Musk claimed was a problem on twitter.

Sure competent security engineering can prevent a lot of these attacks but you know points to history of computers.

Imagine if this system was implemented for Grok when it was doing the 'everything is white genocide' thing.

[–] Soyweiser@awful.systems 8 points 1 month ago (1 children)

"whats my purpose?"

[–] Soyweiser@awful.systems 20 points 1 month ago

Are you trying to say here that cold readers do not actually communicate with the spirit realm? Where is your open mind?

[–] Soyweiser@awful.systems 6 points 1 month ago

Sociopaths

Bit important to note here to people not familiar with the blog posts (now available as a book (in pdf form), because everything must be monetized) that sociopath is meant here as a specific type of person, not a clinical sociopath per se, but more a certain type of person inside the context of the blog post series. So people reacting to it beware.

[–] Soyweiser@awful.systems 13 points 1 month ago* (last edited 1 month ago) (2 children)

Think you are misreading the blog post. They did this after the Grok had its white genocide hyperfocus thing. It shows the process of the xAI public github (their fix (??) for Groks hyperfocus) is bad, not that they started it. (There is also no reason to believe this github is actually what they are using directly (would be pretty foolish of them, which is why I could also believe they could be using it))

[–] Soyweiser@awful.systems 7 points 1 month ago

Cryptocurrencyexecs after reading this : "we have a code red, a code red, the public has figured it out, abort abort!"

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago) (1 children)

Doesnt help that there is a group of people who go 'using the poor like ~~biofuel~~ food what a good idea'.

E: Really influential movie btw. ;)

[–] Soyweiser@awful.systems 3 points 1 month ago

No, the guy who did it non-apologized, by going he should have checked the output better. Still an AI user.

view more: ‹ prev next ›