submitted 6 months ago* (last edited 6 months ago) by dgerard@awful.systems to c/techtakes@awful.systems

courtesy @self

can't wait for the crypto spammers to hit every web page with a ChatGPT prompt. AI vs Crypto: whoever loses, we win

[-] 200fifty@awful.systems 19 points 6 months ago

I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model, and that calling out to an external Python evaluator is 'cheating.' But as far as I know it is actually pretty common for them to evaluate code using an external interpreter, so I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it's laundered by the LLM first, which is like a high-school level mistake.
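Something like this, purely hypothetical names and not any particular framework's code, a stub standing in for the model:

```python
# Hypothetical sketch of the anti-pattern, not any specific framework's code.

def fake_llm(prompt: str) -> str:
    # Stand-in for the model; a jailbroken LLM will happily echo back
    # whatever "code" the prompt injection asked for.
    return prompt.rsplit("Question: ", 1)[-1]

def answer_with_code(user_message: str) -> str:
    prompt = ("Write a Python expression that answers the question. "
              "Reply with code only.\nQuestion: " + user_message)
    generated = fake_llm(prompt)   # attacker-influenced string
    return str(eval(generated))    # executed with the framework's privileges

# A "question" like this becomes arbitrary code execution:
print(answer_with_code("__import__('os').getcwd()"))
```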

[-] Ephera@lemmy.ml 10 points 6 months ago

Yeah, that was exactly my intention.

[-] zogwarg@awful.systems 6 points 6 months ago

From reading the paper I'm not sure which is more egregious: the frameworks that pass code to eval/exec directly without checking, or the ones that rely on the LLM to do the checking (based on the fact that some of the CVEs require LLM prompt jailbreaking).
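The "LLM does the checking" version looks roughly like this (again a hypothetical sketch with a stubbed-out model, not code from the paper): the only thing between user input and exec() is another prompt, so a jailbreak that talks the model into answering "SAFE" defeats the whole check.

```python
# Hypothetical sketch of the "ask the LLM if the code is safe" pattern.

def fake_safety_llm(prompt: str) -> str:
    # Stand-in for the model; imagine a jailbreak has already steered it.
    return "SAFE"

def run_if_model_approves(generated_code: str) -> None:
    verdict = fake_safety_llm(
        "Answer SAFE or UNSAFE. Is this code safe to run?\n" + generated_code
    )
    if verdict.strip() == "SAFE":
        exec(generated_code)   # the CVE-shaped part

run_if_model_approves("print('pwned, politely')")
```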

If you wanted to be exceedingly charitable, you could imagine the maintainers of said frameworks claiming: "of course none of this should be used with unsanitized inputs open to the public, it's merely a productivity-boost tool that you would run on your own machine, don't worry about possible prompts being evaluated by our agent from top Bing results, don't use this for anything REAL."
