Self-citation can be a double-edged sword. Sometimes it is the acceptable academic behavior of a prolific author; at other times it is the shameless act of a self-promoter. It can be used legitimately to build upon one's prior work, but it can also be abused to inflate citation records in pursuit of grants and promotions. Whenever citations are used as indicators to evaluate scientific research, self-citation is often considered problematic. But why is self-citation blotting its copybook nowadays?
To dig into the answer, and to help address the problem, Justin Flatt and his team stress the creation of a 'self-citation index' that can measure and report the extent of an author's self-citation, helping to identify the kind of self-promotion that can tarnish the scientific workforce and its advancements and exacerbate gender and status biases. The 's-index' would calculate an author's self-citation score, just as the h-index combines an author's productivity and citation performance.
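As an illustration, if the s-index mirrors the h-index rule (an author has s-index s when s of their papers have each been self-cited at least s times — an assumption based on the analogy drawn above), it can be computed from per-paper self-citation counts. The author data below are hypothetical.

```python
def s_index(self_citation_counts):
    """Largest s such that s papers have each been self-cited at least s
    times (the h-index rule, applied to self-citations only)."""
    counts = sorted(self_citation_counts, reverse=True)
    s = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            s = rank
    return s

# Hypothetical author: self-citations received by each of their papers
print(s_index([9, 6, 4, 4, 2, 1, 0]))  # 4
```

The same routine works for any "n papers with at least n of something" metric; only the input counts change.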
Flatt's argument draws on a bibliometric study by James H. Fowler of the Norwegian articles database, analyzing over 692,455 citations. The findings suggest that each additional self-citation increases the number of citations from other authors by one after one year and by three after five years; because there is no penalty, the effect of self-citation remains positive even at very high rates of self-citation. The dual effect of self-citation (yielding more citations to the particular author without yielding more citations to the paper in question) is very hard to detect.
When Eugene Garfield, the creator of the first Science Citation Index, was asked about self-citations, he acknowledged that self-citation is one way of manipulating citation rates, but he considered the practice both common and reasonable. He thought that a high self-citation rate could simply be the sign of a prolific author in a specialized field; in reality, though, it is hard to differentiate a prolific author from a flagrant self-advertiser.
Many authors, such as Richard Poynder, emphasize transparency in self-citation data, letting every discipline define its own conditions and parameters of acceptability. But according to Flatt and Garfield, transparency will not dissuade citation gaming; it will only enable it. What we need is not more transparency but more curation, suggests Phil Davis, a specialist in the statistical analysis of citations.
The creation of an s-index alone will not solve the problem: self-citation is highly discipline-dependent, the index lacks proper context, and it could harm the reputation of legitimate authors. The new index would also do nothing to identify citation coercion or citation cartels. A simpler solution is to report h-index scores both with and without self-citations.
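Reporting the two scores side by side is straightforward: compute the h-index once on total citation counts and once on counts with self-citations removed. A minimal sketch, using hypothetical paper data:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    return h

# Hypothetical papers as (total_citations, self_citations) pairs
papers = [(10, 6), (8, 5), (6, 4), (5, 3), (4, 2)]

h_with_self = h_index([total for total, _ in papers])
h_without_self = h_index([total - own for total, own in papers])
print(h_with_self, h_without_self)  # 4 2
```

A large gap between the two numbers, as in this contrived example, is exactly the signal the side-by-side report is meant to surface.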
The answer to the problem lies not in new metrics but in a better, more rigorous curation of what gets indexed and added to the citation dataset. Indexes would then have the power to sanction or suppress offending journals, improving the quality of the citation data and enhancing trust in the validity of citation metrics, because the validity of a metric rests on the data fed to it. Index curation requires human intervention, making it more expensive than a computer algorithm, but it is also of far greater value.
Keywords: publication ethics, citation ethics, self-citation, h-index, self-citation index, bibliometrics, scientific assessment, scientific success.