Does Journal Citation Indicator work as per Clarivate’s claim?



Clarivate Analytics, a major name in the field of research analytics, has announced the addition of a new metric, the "Journal Citation Indicator (JCI)", a relative citation performance indicator for journals across different disciplines. The announcement was made by Martin Szomszor, Director of the Institute for Scientific Information at Clarivate.

According to Clarivate, the JCI is a field-normalized measure of citation impact, where a value of 1.0 means that, across the journal, published papers received a number of citations equal to the average citation count in that subject category. The purpose of this metric is to normalize across fields of research that have variable publication and citation rates, producing a journal-level metric that is easy to interpret and compare across disciplines. The JIF uses a numerator-denominator method in which the numerator counts all citations while the denominator includes only citable items such as articles and reviews; the JCI, by contrast, uses only Articles and Reviews for both sides of its calculation. The best feature of the JCI is its ease of interpretation, with average performance pegged at 1.0: a journal with a score of 0.5 has performed half as well as the average journal in its field, while a JCI score of 2.0 means performance twice the average.
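The interpretation above can be sketched in a few lines of code. This is only an illustrative toy, not Clarivate's actual methodology; the per-paper averaging and the baseline value here are assumptions made for the example.

```python
# Illustrative sketch of field-normalized citation impact.
# NOT Clarivate's implementation: the averaging scheme and the
# category baseline below are invented for demonstration only.

def normalized_impact(article_citations: float, category_baseline: float) -> float:
    """Citations for one paper divided by the average citation count
    for papers in its subject category (the field baseline)."""
    return article_citations / category_baseline

# A journal whose papers collectively match the category average scores 1.0.
papers = [4, 8, 12]   # citations received by each paper (hypothetical)
baseline = 8.0        # hypothetical average citations for the category
score = sum(normalized_impact(c, baseline) for c in papers) / len(papers)
print(round(score, 2))  # 1.0 -> average performance for the field
```

With the same baseline, a journal averaging 4 citations per paper would score 0.5, and one averaging 16 would score 2.0, matching the interpretation Clarivate describes.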

The JCI has further benefits to offer. Unlike the Journal Impact Factor (JIF), which is based on a single year's citations to the previous two years of content, the JCI analyses a journal's citation performance across three years of citation data. Clarivate will also provide a JCI score for journals in the Core Collection even if a journal has not received a JIF score.

Analyzing the addition of this new metric, Phil Davis, a publishing consultant who specializes in the statistical analysis of citations, said that the major weakness of the JCI is its bold claim to have achieved normalization across disciplines.

He pointed out that ten years ago the Source Normalized Impact per Paper (SNIP), a journal indicator developed at Leiden University, had already achieved performance analysis of journals across disciplines. SNIP defines a journal's subject field as "the set of papers citing that journal", whereas the JCI still relies on the subject classification assigned to each journal. Clarivate uses 235 subject categories, and some categories contain an extremely large number of journals: Economics, for example, contains 373 journals, and Engineering has a multidisciplinary category with more than 13 sub-disciplines. More than a third of journals are assigned more than one category; Oncogene, for instance, is classified under four disciplines: Oncology, Genetics & Heredity, Biochemistry & Molecular Biology, and Cell Biology. A JCI score would therefore only make sense to a user who knows the disciplinary classification structure for each journal. To collapse multiple category scores into the single JCI score Clarivate's goal requires, the company uses "the mean normalized citation impact across all categories assigned". This means Oncogene's JCI will be based on journals in all four fields in which it is classified, so a journal could still receive a high JCI while performing poorly in Oncology, provided it performs well in the other three fields.
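Davis's objection about multi-category averaging can be made concrete with a small sketch. The per-category values below are invented for illustration; they are not Oncogene's real scores, only a demonstration of how a simple mean can mask weak performance in one category.

```python
# Hypothetical multi-category averaging, per the "mean normalized
# citation impact across all categories assigned" description.
# The scores below are invented for illustration only.
category_scores = {
    "Oncology": 0.7,                           # below the field average
    "Genetics & Heredity": 1.4,
    "Biochemistry & Molecular Biology": 1.3,
    "Cell Biology": 1.2,
}
jci = sum(category_scores.values()) / len(category_scores)
print(round(jci, 2))  # 1.15 -> above-average overall despite Oncology
```

The overall score lands above 1.0 even though the journal underperforms in one of its four categories, which is exactly the interpretive hazard Davis describes.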

Phil Davis believes that the JCI's marketing materials contain apparent contradictions about what the metric can do. The normalization technique is not entirely transparent; the normalization steps do make it more reasonable to compare journals across disciplines, but careful judgment is still needed. Clarivate needs to avoid misrepresenting and overselling the JCI and should give it a new, more descriptive name, which according to Davis should be "Field Normalized Impact Factor" (FNIF).


Journal Citation Indicator (JCI), Journal Impact Factor (JIF), Clarivate, Clarivate Analytics, Journal Citation Reports, journal performance.

