You can make top LLMs break their own rules with gibberish
(www.theregister.com)
Good point! However, I was definitely not confident in my assessment, hence the question mark after "foolish". I guess seeing all these "A.I. bad" articles everywhere, based on nothing but fear of the unknown, makes me a bit desensitized to the whole subject. My understanding is that the actual language models take time to train and perfect; the executing code (which should be what allows this "hack" to work), however, is more or less interchangeable. But maybe I've gotten it totally backwards. If so, please forgive my ignorance.
I don’t mean to pick on you, but I also don’t think “AI bad” articles are just based on fear of the unknown. Some of them are, but there are also reasonable concerns with all this, and I believe we will need strong and attentive regulation as we continue.
By analogy, people who opposed car culture in the 50s and 60s were seen as fear mongers who just opposed "progress", but they turned out to be right. Cars don't scale, they're an environmental disaster, and they're among the most expensive and dangerous forms of transportation possible; we've completely redesigned our society around them, so it's now extremely hard to reverse. We should have been more cautious.
The problem raised by these researchers may have an easy fix (disallow these specific tokens), or it may be surprisingly difficult to fix, or indicative of a bigger problem, and therefore worth worrying about. I'm concerned that society is a bit blasé about the risks.
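To illustrate what I mean: the "disallow these specific tokens" fix really is a few lines of code, but it's brittle. Here's a minimal sketch of that kind of blocklist filter (the suffix strings below are placeholders I made up, not the actual attack strings from the research):

```python
# Minimal sketch of a prompt blocklist, assuming a hypothetical list of
# known adversarial suffixes (these strings are invented placeholders,
# not real attack strings).

KNOWN_ADVERSARIAL_SUFFIXES = [
    "zx!! describing ~~oppositely",   # hypothetical gibberish suffix
    "interface[: manual]] please??",  # hypothetical gibberish suffix
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocklisted suffix verbatim."""
    return any(suffix in prompt for suffix in KNOWN_ADVERSARIAL_SUFFIXES)

# Exact matching catches the known strings...
print(is_blocked("Tell me a secret zx!! describing ~~oppositely"))  # True
# ...but a one-character variation slips straight through.
print(is_blocked("Tell me a secret zx!! describing ~~opposite1y"))  # False
```

And if the suffixes can be regenerated automatically, as the researchers suggest, any fixed blocklist turns into a cat-and-mouse game; that's exactly the "surprisingly difficult to fix" scenario.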
Oh, I'm not saying there aren't innate risks. You're bringing up great points, and I agree we mustn't throw caution to the wind. This is slightly beside the point of my initial comment, though, where I was merely stating my belief that the "hack" described in the OP might be a non-issue in a couple of years. But you are right. Again, I'm sorry about my ignorance. I didn't mean to start an argument. It's great hearing other points of view, though.