Columns Opinion

A new yardstick: Is Citation Count a more realistic measure of research impact?

Vijay Kothari


It has become common practice to judge a researcher's work by the Impact Factor of the journals in which she/he has published. While this approach can give a general idea of the quality of the research, it leads to a variety of complications when applied to situations for which it is not well suited, such as recruitment and promotion decisions.


The Impact Factor is, fundamentally, an indirect measure of quality. It indicates the average number of citations received by an article published in a particular journal. Needless to say, not every article in a given journal receives citations equal to that journal's Impact Factor. The Impact Factor therefore says more about the journal than about the individual papers published in it.
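To make the averaging concrete, here is a minimal sketch of the standard two-year Impact Factor calculation. The numbers are invented purely for illustration; the formula divides citations received in a given year to a journal's articles from the previous two years by the number of citable items it published in those two years.

```python
# Illustrative two-year Impact Factor calculation (invented numbers).
# IF(Y) = citations received in year Y to items published in Y-1 and Y-2,
#         divided by the number of citable items published in Y-1 and Y-2.

citations_to_recent_items = 900   # citations in year Y to Y-1 and Y-2 papers
citable_items_published = 300     # citable items published in Y-1 and Y-2

impact_factor = citations_to_recent_items / citable_items_published
print(impact_factor)  # 3.0
```

Note that this yields an average of 3.0 citations per article for the journal as a whole, while individual papers in it may have received anywhere from zero citations to many times that figure.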

Citations may be a more realistic measure of the impact (or value) of an individual researcher. To evaluate a researcher's contribution, her/his total citation count, over her/his entire career or over the period under evaluation, could be considered. In addition to the total citation count, one can also compute citations per paper, the h-index, and similar metrics.
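The metrics mentioned above are straightforward to compute from a list of per-paper citation counts. Below is a short sketch, with invented numbers, showing the total citation count, citations per paper, and the h-index (the largest h such that the researcher has h papers with at least h citations each).

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented citation counts, one entry per paper, for illustration only.
papers = [25, 8, 5, 3, 3, 1, 0]

total = sum(papers)              # total citation count: 45
per_paper = total / len(papers)  # citations per paper: ~6.43
print(total, round(per_paper, 2), h_index(papers))  # h-index here is 3
```

In this example the third-ranked paper has 5 citations (at least 3) but the fourth-ranked has only 3 (fewer than 4), so the h-index is 3.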

When two researchers compete for a fellowship or promotion, it is not uncommon for the applicant with the higher Impact Factor to be favored, even when both have the same total citation count, meaning their papers are cited at the same frequency by their peers. One of them loses out despite having made, arguably, the same scientific impact. Similarly, eligibility criteria for various academic benefits often state that only people with a cumulative Impact Factor of 5 or 10 (or some such number) can apply. Is it not erroneous to assume that papers published in high-impact journals will automatically receive more citations?

Currently, many journals operate in open access mode, making their content freely available to a wider audience. Increased access to scientific papers is changing the way they are read as well as cited. Papers are increasingly cited for their relevance and content rather than for the reputation or Impact Factor of the journals in which they are published. If this trend towards open access were to grow stronger, we could reasonably expect that a paper that is not good or interesting enough will simply not get cited, irrespective of the journal in which it was published.

This article is not an argument against use of the Impact Factor. It is not a bad parameter when applied with due consideration of its limitations. But it is, really, better suited to evaluating journals than individual researchers. Unlike the Impact Factor, which is transferred from a journal to the papers it contains, citations must be earned by each individual paper on its own merit. These days it is easy to generate a citation count from Google Scholar, Scopus, and other sources. It would be for the betterment of science if policymakers and decision-making authorities replaced the Impact Factor with citation count as the parameter for evaluating the scientific excellence of a researcher and her/his contribution. Citation count can also be a good and reliable parameter for ranking research institutes and universities. Let us move towards a more direct and realistic evaluation, based on a more reasonable and effective parameter.