The social sciences have been facing a replication (or reproducibility) crisis: attempts to replicate past research findings frequently fail to produce the same results. A recent pattern observed by Marta Serra-Garcia of the University of California, San Diego, is that studies whose results replicate are cited less often than studies that have failed to replicate.
To analyze the consequences of the replication crisis, Serra-Garcia and Uri Gneezy examined papers from top psychology, economics, and general-science journals, correlating replicability (as established by replication projects) with citation counts. They used two kinds of measures: first, replicability outcomes together with publicly available prediction-market results; second, citation counts from Google Scholar. Tracking citations to 139 studies across the 20,252 papers citing them in various journals, they concluded that non-replicable papers are cited 16 times more per year.
Citation counts are a basic tool for assessing the scholarly impact of published work. Academic institutions use citations as a key metric when deciding whether to promote a faculty member, since they serve as a proxy for a paper's impact; researchers therefore prefer to be cited more.
Serra-Garcia found that papers that fail to replicate are cited 153 times more than those that replicate, and a published failure to replicate does little to change a paper's citation trend. One explanation could be laxity in the review process: the negative correlation between replicability and citation count may arise because more-cited papers tend to offer more "interesting" findings, and lower standards of reproducibility are applied to such results. The trade-off between a preference for interesting results and rigor about replicability is evident here.
Among the analyzed studies, 39% of 100 from psychology journals replicated successfully, compared with 61% of 18 studies from economics journals and 62% of 21 studies from Nature and Science. The citation gap is most pronounced in the latter group: non-replicable work published in Nature and Science was cited 300 times more than replicable work. Citing authors also rarely acknowledge the problem: across all the journals and papers the authors observed, only 12% of citations to a non-replicable study mentioned its failure to replicate. The problem is worsened by the fact that appealing or interesting findings can become the talk of the town on platforms like Twitter, which reach a massive audience, but popularity does not make a finding a fact.
This trend can allow false or problematic research to gain public prominence, and retracting such a paper or setting the record straight can take a very long time, as with the infamous paper linking vaccines and autism, which was retracted only after 12 years and did irreparable damage to public perception of vaccines. Yet retraction not only drops the citation rate of such work; it is also one of the most important ways to manage the replication crisis and encourage the adoption of rigorous scientific methods.
The pressure on academics and editors to publish "interesting" findings has fueled this crisis, and the focus should instead be on the quality of the science. So the next time you see an interesting or appealing claim, check whether the cited work has been replicated.
Keywords: replicable publication, non-replicable publication, citation, impact factor, methodology, social sciences, interesting facts