this post was submitted on 22 May 2025
100 points (91.0% liked)
memes
you are viewing a single comment's thread
Well, that's progress, innit? After you read A and B, you set out to improve things further and it worked. That's why you publish it.
(But don't get me started on the systemic problems in academic publishing that stop people from publishing useful negative results and push them to exaggerate the importance of their findings.)
Dammit, negative results are gold, pretty close to the essence of science, and yet the response is just "not enough clickbait, fuck your career".
Yes, but... Let's say papers A, B, and C each introduce a method. Often, each paper will show only the benchmarks on which its tool was the best. In reality, each tool might be better suited to a different task. If you understand the tools and are used to this kind of paper, you will probably work out what each tool is good for. But the papers themselves are misleading, and people often just blindly use the "cutting edge" for everything.
But you need that sweet high impact factor to get a job.