This may only be a problem if the people in charge don't understand why it's wrong. "But it sounds correct!" etc.
HedyL
Refusing to use AI tools or output. Sabotage!
Definitely guilty of this. I refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).
I work in the field of law/accounting/compliance, btw.
I believe that promptfondlers and boosters are particularly good at "kissing up", which may help their careers even during an AI winter. This is something we have to be prepared for, sadly. However, some of those people could still be in for a rude awakening if someone actually pays attention to the quality and usefulness of their work.
By the way, I know there is an argument that "low-skilled" jobs should not be eliminated because there are supposedly people who are unable to perform more demanding and varied tasks. But I believe this is partly a myth that was invented as a result of the industrial revolution, because back then, a very large number of people were needed to do such jobs. In addition, this doesn't even address the fact that many of these jobs require some type of specific skill anyway (which isn't getting rewarded appropriately, though).
The best example to this day is that of immigrants who have to do "low-skilled" jobs even though they possess academic degrees from their home countries. In such cases, I believe that automation could even lead to the creation of more jobs that match their true skill levels.
Another problem is that, especially in countries like the US, low-wage jobs are used as a substitute for a reasonable social safety net.
AI (especially large language models) is, of course, a separate issue, because it is claimed that AI could replace highly skilled and creative workers, which, on the one hand, is used as a constant threat and, on the other hand, is not even remotely true according to current experience.
In my experience, the large self-service kiosks at McDonald's are pretty decent (unless they crash, which happens too often). Many people (including myself) use them voluntarily, because it is nice to have more control over your order and more visual information about it (including prices, product images, nutritional information, allergens etc.). You don't even need to wait in line anymore if their staff brings your order directly to your table. You don't need to use any tricks to speak to a human either, because you can always go to the counter and order there instead. However, this only works because the kiosks are customer-friendly enough that you don't have to force most people to use them.
I know that even those kiosks probably aren't great in the sense that they may replace some jobs, at least over the short term. However, if customers truly like something, this might still lead to more demand and thus more jobs in other areas (people who carry your order to your table, people who prepare the food itself, people who code those apps - unless they are truly "vibe-coded" - people who maintain the kiosks, design their content etc.).
However, the current "breed" of AI bots is a far cry from even that, in my impression. They are really primarily used as a threat to "uppity" labor, and who cares about the customers?
Aren't most people ordering their fast food through apps nowadays anyway? Isn't this slightly more customer-friendly than AI order bots because it is at least a deterministic system?
Oh, I forgot, these apps will probably be vibe-coded soon too. Never mind.
More than two decades ago, I dabbled a bit in PHP, MySQL etc. for hobbyist purposes. Even back then, I would have taken stronger precautions, even for some silly database on hosted webspace. Apparently, some of those techbros live in a different universe.
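The kind of precaution meant here is basic stuff: never paste user input straight into an SQL query, not even for a toy project. A minimal sketch of the idea in Python (sqlite3 stands in for a hosted MySQL database; the table and the input are made up for illustration):

```python
import sqlite3

# An in-memory database stands in for some "silly database on hosted webspace".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe (don't do this): string interpolation lets the input rewrite the query.
# query = f"SELECT email FROM users WHERE name = '{user_input}'"

# Safe: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```

Even hobbyist PHP tutorials have been teaching the equivalent (prepared statements) for a long time, which is what makes the exposed-database stories so baffling.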
Nice! I could almost swear I heard some of these in real life.
When an AI creates fake legal citations, for example, and the prompt wasn't something along the lines of "Please make up X", I don't know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to "wrong prompting". At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).
To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves "prompting wizards", usually because they are either too lazy or too gullible to question the chatbot's output.
I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient "prompting skills".
Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and "great prompting skills".
Maybe it's also considered sabotage if people (like me) try prompting the AI with about 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts) and then refuse to continue trying. I guess it's expected to try and try again with different questions until one correct answer comes out and then use that one to "evangelize" about the virtues of AI.