One colleague who read my previous blog on “Holistic science sans impact factor” shared with me a few other recent articles on impact and similar “factors” that glorify research publications (“Impacting our young”, PNAS, 14 December 2010, vol. 107, no. 50, p. 21233, doi: 10.1073/pnas.1016516107; and “On the value of author indices”, Letters, Physics Today, March 2011, vol. 64, issue 3, p. 10). Reading these pieces stimulated me to write more on this issue, if only to re-emphasize the absurdity that the scientific community as a whole seems to have accepted the “impact factor” (IF) and similar indices as the only parameter for judging a scientist’s worth! Ron Vale’s blog “Papers – how many should one publish?” also alludes to this issue.
Notwithstanding the variety of quantitative parameters invented in recent decades to measure the quality of one’s research contributions, the fact remains that the quality of a research paper can be understood only by reading it. If we reflect honestly, most of us internet- and online-search-savvy scientists in today’s information-loaded world would agree that very few papers are actually read in their entirety. Whether or not to cite a given paper is another issue. “EndNote” and similar reference managers can conveniently suggest references, and a simple mouse click on one or more items in the suggested list completes the job. One need not read the paper, but the “IF” of the cited paper (rather, of the journal that published it) goes up. Thanks to a phenomenon akin to “cooperative binding”, once a paper begins to be cited frequently, others find it easier to cite it without even bothering to read the Abstract! I have seen citations of my own papers that appear out of context, and I myself have sometimes fallen prey to this easy solution.
Instances of deserved citations being missed are also common, partly because of bias but also because the paper was not read by those who ought to have read it. In this age of online searches, the excuse that a paper escaped attention because it was not published in a “high-impact” journal is really not valid. The “ping-pong” game of mutual citations is resorted to more often than we realize: it reciprocally boosts the “morale” and “scientific standing” of both sides. In the process, the journal emerges as the “winner” for no effort on its part.
It is a fact that the publication of research papers is now a full-fledged industry with enormous commercial interests and stakes. Such enterprises must keep inventing newer versions of the “IF” to attract more manuscripts, and thus more money from scientists as “cost of publication” and “open access” charges. Should scientists not worry that it is they who work hard to generate new information, yet instead of being rewarded for it, they must spend more money for their findings to be known? The financial gains are reaped by commercial publishers acting merely as “middlemen”!
I would like to share a personal experience of the “impact” of a paper that lacked the so-called “IF”. During my doctoral studies in the 1960s, I learnt about benzamide through a “hard” search in the library (no PubMed or Google search, no pdf or http files existed then; even “Current Contents” did not exist, and of course the concept of “IF” had not yet been invented!). This chemical had been reported, along with a few others, to inhibit chromosomal transcription in polytene nuclei of a dipteran insect. I found this possibility attractive for my studies, and its use generated the expected result. However, the most significant fallout of this study was the serendipitous observation that benzamide treatment specifically induced one single gene (the 93D puff); that chance observation in 1969 has indeed kept me engaged till date, occupying a major part of my research activity. I first saw a reference to benzamide’s use in a book published in 1964 (Jacob et al., in Nucleic Acids – Structure, Biosynthesis and Function, CSIR, New Delhi), which included papers presented at a research conference. Through this chapter, I located the original paper published in Nature (Sirlin, J. L. and Jacob, J. 1964 Sequential and reversible inhibition of synthesis of ribonucleic acid in the nucleolus and chromosomes: effect of benzamide and substituted benzimidazoles on dipteran salivary glands; Nature 204: 545–547). PubMed Central lists this Nature paper as having been cited only 4 times (far below Nature’s IF). However, for my own research its impact has been unquantifiable, as it stimulated the design of experiments to seek an answer to a specific question; the impact became truly long-lasting because of the serendipity that led to an unsought-for and unexpected result.
Today’s “IF”-driven research, aimed at immediate gains rather than long-term curiosity-driven inquiry, produces “good manuscripts” which the leading journals find worthy of their pages. How many of them actually survive several years after publication? Even at the risk of repeating the obvious, it should be pointed out that the most cited and the least (or never) cited papers share the same journal, and it is often those few papers of the first category that confer a rather undeserved “status” on the second.
A real worry in the Indian context is that more and more peers and researchers are getting trapped in the vicious circle of the “IF”, without any attempt to find out the real “impact” of a given research finding. Our establishment shows an increasing addiction to the “IF” and “h‑index” and has accepted these as the standard, and unfortunately the only, parameters in any evaluation. This is obviously convenient, as it does not require the evaluators to make any effort to actually go through at least some of the candidate’s research papers and appreciate their real worth. In a recent performance review of a major grant to a university, the grantee university was, in effect, chided because its “IF” was far below that of another research institution in the country! The evaluators conveniently forget that the research achievements of a university and a research institution in India are not really comparable, given the enormous differences in infrastructure support and the fact that university faculty must also spend significant time in teaching! The obvious, although very unfortunate and damaging, take-home message for the university community emanating from such ill-founded comparisons is: forget teaching, forget original questions, but somehow publish in “high-IF” journals so that the “h‑index” increases; that alone would make one competitive enough for further grants.
In my view, science gets “killed” in the process! But who cares? How many in our community will have the courage to say “the Emperor is naked”?