1. (10 points) submitted 9 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
2. (6 points) submitted 11 months ago* (last edited 2 days ago) by fossilesque@mander.xyz to c/scicomm@mander.xyz
3. (14 points) How to Read a Paper (ccr.sigcomm.org)
   submitted 11 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
4. (51 points)
5. (3 points) submitted 1 year ago* (last edited 1 year ago) by fossilesque@mander.xyz to c/scicomm@mander.xyz
6. (52 points) submitted 5 days ago* (last edited 5 days ago) by plantteacher@mander.xyz to c/scicomm@mander.xyz

The ACM.org page for a paper by a Carnegie Mellon (CMU) team claimed that source code was included. But the attached ZIP file contained only another copy of the paper, with no code. I asked the lead researcher (a professor) for the code and was ignored; the other researchers (apparently students) also ignored the request. The code would have made it possible to reproduce and verify the research. ACM replied and tried to locate the missing code, but eventually gave up, and the misinformation on the page (the claim that source code is available) still has not been corrected.

It seems like this should taint the research in some way. Why don't they want people reproducing it? If scientific research is supposed to be "peer reviewed" for integrity, that looks like a façade when reviewers have no voice in matters like this. Or is there some kind of third party who would call this out?

7. (13 points)
8. (6 points)
9. (16 points) Are You Eating a Credit Card Every Week?

No, but it is an interesting example of how misinformation can be created and spread even without malicious intent. The video is a couple of years old, but it still holds up.

10. (4 points) submitted 2 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
11. (19 points) submitted 2 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
12. (13 points) submitted 2 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
13. (50 points) submitted 2 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
14. (5 points) submitted 3 months ago* (last edited 3 months ago) by memfree@beehaw.org to c/scicomm@mander.xyz

cross-posted from: https://beehaw.org/post/15160546 | ghost archive | Excerpts:

... findings with null or negative results — those that fail to find a relationship between variables or groups, or that go against the preconceived hypothesis — gather dust in favour of studies with positive or significant findings. A 2022 survey of scientists in France, for instance, found that 75% were willing to publish null results they had produced, but only 12.5% were able to do so^2^. Over time, this bias in publications distorts the scientific record, and a focus on significant results can encourage researchers to selectively report their data or exaggerate the statistical importance of their findings. It also wastes time and money, because researchers might duplicate studies that had already been conducted but not published. Some evidence suggests that the problem is getting worse, with fewer negative results seeing the light of day^3^ over time.


At the crux of both academic misconduct and publication bias is the same ‘publish or perish’ culture, perpetuated by academic institutions, research funders, scholarly journals and scientists themselves, that rewards researchers when they publish findings in prestigious venues, Scheel says.

But these academic gatekeepers have biases, say some critics, who argue that funders and top-tier journals often crave novelty and attention-grabbing findings. Journal editors worry that pages full of null results will attract fewer readers, says Simine Vazire, a psychologist at the University of Melbourne in Australia and editor of the journal Psychological Science.


One of the most significant changes to come out of the replication crisis is the expansion of preregistration (see ‘Registrations on the rise’), in which researchers must state their hypothesis and the outcomes they intend to measure in a public database at the outset of their study (this is already the norm in clinical trials). ... Preliminary data look promising: when Scheel and her colleagues compared the results of 71 registered reports with a random sample of 152 standard psychology manuscripts, they found that 44% of the registered reports had positive results, compared with 96% of the standard publications^7^ (see ‘Intent to publish’). And Nosek and his colleagues found that reviewers scored psychology and neuroscience registered reports higher on metrics of research rigour and quality compared with papers published under the standard model^8^.
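The registered-reports comparison quoted above can be sanity-checked with a quick back-of-the-envelope two-proportion z-test. The counts below are reconstructed from the reported percentages (44% of 71 registered reports, 96% of 152 standard manuscripts), so this is purely illustrative arithmetic, not a reanalysis of the study's data:

```python
import math

def two_prop_z(pos1, n1, pos2, n2):
    """Pooled two-proportion z statistic and two-sided normal p-value."""
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# ~31 of 71 registered reports vs ~146 of 152 standard papers with positive results
z, p = two_prop_z(31, 71, 146, 152)
print(f"z = {z:.1f}, p = {p:.1g}")  # |z| is around 9, so the gap is far beyond chance
```

Even with these roughly reconstructed counts, the difference between the two publication models is enormous by conventional statistical standards, which is consistent with the article's point that standard publishing heavily filters for positive results.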

15. (12 points) submitted 4 months ago by Sal@mander.xyz to c/scicomm@mander.xyz
16. (38 points) submitted 5 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
17. (18 points) submitted 6 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
18. (19 points) Lie-to-children - Wikipedia (en.m.wikipedia.org)
    submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
19. (62 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
20. (5 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
21. (3 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
22. (2 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
23. (14 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
24. (13 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz
25. (13 points) submitted 8 months ago by fossilesque@mander.xyz to c/scicomm@mander.xyz

Science Communication

885 readers
1 user here now

Welcome to c/SciComm @ Mander.xyz!




Notice Board

This is a work in progress, please don't mind the mess.



About

Rules

  1. Don't throw mud. Be kind and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.


Resources

Outreach:

Networking:



Similar Communities


Sister Communities

Science and Research

Biology and Life Sciences

Plants & Gardening

Physical Sciences

Humanities and Social Sciences

Memes

founded 1 year ago
MODERATORS