A 'publish or perish' credo now dominates academia: publishing brings career advancement, research funding, and better perks. The trend has since escalated into 'pay to publish or perish', but the question is, does it have to be this way?
Chris R. Triggle and his team explored the role, value, and shortcomings of Journal Impact Factors (JIFs) as a metric for assessing the quality of research output. They also identified the pros and cons of other research quality assessment tools, and the impact of journal publishing costs on the dissemination of scientific knowledge. In the wake of a global pandemic, awareness of open access (OA) publishing has grown markedly among the scientific community. 'Plan S', launched by cOAlition S in 2018, paved the way for OA by stipulating that, for state-funded grants, the cost of publication should be borne by the funder's institutions and not by the researcher.
With the rise in the number of published articles has come an escalation in the number of publishers; Pontus Persson, editor of Acta Physiologica, asks whether there will soon be "more journals than authors?". Journals now face the mammoth task of processing the huge number of manuscripts submitted for publication, putting stress on editors and reviewers. With increasing demand for publication, there has also been a rise in unscrupulous or deceptive publishers and journals that take advantage of inexperienced scientists producing poor-quality work.
There is no standard way to assess the reliability of published articles, but several methods are commonly applied: peer review, citation counts, and impact factors. Because the Impact Factor (IF) can be calculated and expressed as a single number, it is more readily used than peer review, which cannot be measured directly and isn't quantifiable. The problem is that an IF belongs to a journal, yet it is incorrectly attributed to every manuscript published in that journal. Does a high-IF journal guarantee top-quality reviewing, and does publishing in a high-IF journal make your work better? The answer is no; there is no such evidence. Still, despite the availability of other options and much criticism, the JIF remains in heavy use to assess the quality of journals and peer-reviewed publications.
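Part of the IF's appeal is how trivially it reduces to one number. As a minimal sketch of the standard two-year calculation, using hypothetical figures rather than real journal data:

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the citable items from those two years."""
    return citations / citable_items

# Hypothetical journal: 600 citations in 2023 to articles published in
# 2021-2022, which together comprised 200 citable items.
print(journal_impact_factor(600, 200))  # 3.0
```

The single output number is exactly what makes the metric so portable, and, as discussed above, so easy to misapply to individual articles.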
Seglen summarized why JIFs are unreliable for research evaluation in four major points: the JIF hides the differences in citation rates between individual articles; JIFs are shaped by technicalities unrelated to article quality; they depend on the research field, favouring fields whose literature uses many references per article; and, most importantly, the journal's JIF is determined by the citation rates of its articles, not the other way around. The rationale for using the JIF as a proxy for quality and performance is therefore not justifiable.
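Seglen's first point follows from the skewness of citation distributions: a few highly cited papers inflate the journal-level average that the JIF reports. A toy illustration with made-up citation counts:

```python
# Hypothetical citation counts for ten articles in one journal;
# a single blockbuster paper dominates the total.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

mean = sum(citations) / len(citations)           # JIF-style average
median = sorted(citations)[len(citations) // 2]  # the "typical" article

print(mean, median)  # 13.8 2
```

The JIF-style mean (13.8) says little about the typical article (median 2), which is why attributing a journal's IF to every paper it publishes is misleading.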
Alternatives to the Journal Impact Factor include CiteScore, Source Normalized Impact per Paper (SNIP), the Eigenfactor (EF), the SCImago Journal Rank (SJR), the h-index, and the h5-index. CiteScore is conceptually similar to the JIF (a citation ratio), but Scopus counts citations and published items over a four-year window. SNIP uses Scopus data to compute a raw citations-per-paper score and then normalizes it against the field's citation potential, i.e. the average number of references in the citing documents. The EF and SCImago rankings are analogous to Google's PageRank mechanism.
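CiteScore's main structural difference from the JIF is its window: both citations and documents are counted over the same four years. A sketch with hypothetical Scopus-style figures:

```python
def cite_score(citations_4yr: int, documents_4yr: int) -> float:
    """CiteScore-style ratio: citations in a 4-year window to documents
    published in that same window, divided by the document count."""
    return citations_4yr / documents_4yr

# Hypothetical journal, window 2020-2023: 1200 citations in the window
# to the 400 documents it published in the window.
print(cite_score(1200, 400))  # 3.0
```

The longer, symmetric window smooths year-to-year fluctuations, but as a journal-level average it inherits the same skewness problem as the JIF.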
Keywords: Journal Impact Factors (JIFs), open access (OA), publishing cost, CiteScore, Source Normalized Impact per Paper (SNIP), Eigenfactor (EF), SCImago Journal Rank (SJR).
This piece is very apt. Emphasis on publication without assessing the impact of the work on society is leading us nowhere. Research impact should be the order of the day, most especially in developing countries. We have so many scholars from Africa, yet little meaningful development.