this post was submitted on 11 Sep 2025
812 points (96.2% liked)
Technology
We're turfing out students by the tens for academic misconduct. They are handing in papers with references that clearly state "generated by ChatGPT". Lazy idiots.
This is why invisible watermarking of AI-generated content is likely to be so effective, even primitive watermarks like file metadata. It's not hard for anyone with technical knowledge to remove, but the thing with AI-generated content is that anyone who dishonestly uses it when they're not supposed to is probably also too lazy to go through the motions of removing the watermark.
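To illustrate how primitive a metadata watermark is (and how trivially it's stripped), here's a toy sketch using a zip container, similar in spirit to how docx files carry provenance in docProps/core.xml. Everything here (the file names, the `generator` key) is made up for illustration, not any real tool's format:

```python
import io
import json
import zipfile

def save_with_watermark(body: str) -> bytes:
    # A generator could stamp provenance into a metadata entry
    # inside the container, next to the actual document.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("document.txt", body)
        zf.writestr("metadata.json",
                    json.dumps({"generator": "ChatGPT"}))
    return buf.getvalue()

def strip_watermark(data: bytes) -> bytes:
    # Removing it is one pass: copy every entry except the metadata.
    src = zipfile.ZipFile(io.BytesIO(data))
    out = io.BytesIO()
    with zipfile.ZipFile(out, "w") as zf:
        for name in src.namelist():
            if name != "metadata.json":
                zf.writestr(name, src.read(name))
    return out.getvalue()
```

Which is exactly the point: stripping it takes a dozen lines, but the cheater who pastes "generated by ChatGPT" into their references isn't writing those lines.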
If you're going to do all that, you might as well just do the research and learn something.
Aye that's exactly the same thing that I said
Couldn't students just generate a paper with ChatGPT, open two windows side by side, and then type it out in a Word document?
I think I'd at least use an OCR program to do the bulk of the typing for me...
but that's work.
Students view doing that as basically the same amount of work as writing the paper themselves
Depends on the watermarking method used. Some people talk about watermarking by subtly biasing word choice: say there are 5 synonyms available, and the generator is nudged toward a particular one at each step, following a pattern derived from the preceding words. To check the watermark you need access to the model and its probabilities, to see whether the text matches that pattern. The tricky part is that the model can change, and so can the probabilities, among other things I don't fully understand.
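The "biased word choice" idea above can be sketched in a few lines. This is a toy version of green-list watermarking: hash the previous word to split the vocabulary into a "green" half, prefer green words when generating, and detect by counting how often each word lands in the green list of its predecessor. All the names (`VOCAB`, `green_list`, etc.) are invented for the sketch, and a real scheme would bias the model's actual token probabilities rather than pick from a tiny synonym list:

```python
import hashlib
import random

VOCAB = ["quick", "fast", "rapid", "swift", "speedy",
         "big", "large", "huge", "vast", "giant"]

def green_list(prev_word: str, frac: float = 0.5) -> set:
    # Seed an RNG from the previous word so the detector can
    # recompute the same green/red split later.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * frac)))

def watermark_choice(prev_word: str, candidates: list) -> str:
    # When generating, prefer a synonym from the green list if any.
    green = green_list(prev_word)
    preferred = [w for w in candidates if w in green]
    return (preferred or candidates)[0]

def detect(words: list, frac: float = 0.5) -> float:
    # Fraction of words drawn from their predecessor's green list:
    # around `frac` for normal text, near 1.0 for watermarked text.
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_list(prev, frac))
    return hits / max(len(words) - 1, 1)
```

Note the detector here only needs the hashing trick, not the model, but that's an artifact of the toy setup; as the comment says, real detection depends on the model's probabilities, which is exactly why model updates complicate it.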
Huh, that actually does sound like a good use case for LLMs: making it easier to weed out cheaters.