Subhash Chandra Lakhotia, a former Professor of Zoology, is currently an INSA Senior Scientist and Distinguished Professor at Banaras Hindu University. In this interview with IndiaBioscience, he talks about why the impact factor can never be an effective tool for distinguishing good research from bad. He also discusses why Indian scientists don’t publish in Indian journals and why Indian journals suffer from poor ratings.
Your publication record suggests you favour publishing in Indian journals. Do you think it has affected your career in any way?
Yes, I took a conscious decision way back in 1971, when I was just starting my career, that I should publish as much of my research output in journals published in India as in those published from outside. The only condition I placed was that the journal must have a good peer-review process. When I learnt that some of the journals where I had submitted a manuscript accepted it without a formal peer review, I did not go back to them. I am happy that during all these long years of my career, I have more or less maintained a 50:50 ratio between my publications in so-called ‘national’ and ‘international’ journals. I must say at this point that these two officially used terms, ‘national’ and ‘international’, are not only misleading but inappropriate and insulting. In fact, I argued in a commentary published in Current Science in 2013 that such a distinction must not be made, since it indirectly implies that a paper published in a ‘national’ journal is, by default, poorer than one in an ‘international’ journal. Our own community has created this unjustified distinction, and it has affected the quality of manuscripts that are submitted to journals published in India.
As I look back at my professional career, I do not think that I have suffered in any way — I did get my due promotions as well as recognitions in the form of the SS Bhatnagar Prize and fellowships of the Academies. I believe that I am reasonably well recognised by peers in my field of research. In addition, this practice has given me the moral authority to encourage others to also publish in journals published in the country, as long as those journals have a reasonably good peer-review process and good publication practices in place. I have no regrets about having taken this decision; rather, I am happy that I took it early in my career, and happier still that I could stick to it.
What are the things that you consider when choosing a journal for submitting your papers? How important is impact factor in making that decision?
I would look for a journal which i) has a wide readership in the domain of my research, ii) has a good peer-review system, iii) does not levy any kind of charges, or, if it does, is willing to waive them, iv) does not charge for colour images, since most of our papers include them in good numbers, and v) provides free open access immediately or within a short time, without the author being charged for it. Occasionally, I have also submitted my manuscript to a given journal because it had published another paper whose inferences were not agreeable to me.
I have never seriously worried about the impact factor of the journal. Moreover, most of the so-called ‘high impact factor’ journals are not affordable for me because of the charges involved. My university does not provide any grant, either for research or for publication. Obviously, I cannot think of spending the limited money available through externally funded projects on publication or other charges.
Do you think it is a little unfair to expect young scientists to be oblivious of IF in publishing, since their career trajectories, and whether or not they get an academic job, are often decided based on the quality of the journals in which they publish their work?
Yes, I agree that in recent times, the young aspirants are made to suffer if they publish in journals published in the country or in other journals that have a low IF rating. This is a very unfortunate situation, indeed. Yet it persists because of the misdirected mindset of many of our senior colleagues who sit on the ‘judgment chairs’ and who mostly count the impact factor without making any serious effort to see what work has actually been carried out. Unless we learn to appreciate the quality of work rather than using inappropriate metrics like impact factor for assessment of individuals and institutions, we would continue to discourage merit and promote undesirable methods which facilitate publication, by hook or crook, in high IF journals.
I am seriously against IF being given so much importance and have been encouraging young scientists to stop worrying about IF and instead be confident about the quality of the work they publish. But, I agree, the seniors need to change their mindset. The sooner this happens, the better it is for all concerned in the country.
Current Science is one of India’s top academic journals, yet it has an impact factor of less than 1. Why do you think Indian journals suffer from such low ratings?
Yes, it is indeed very unfortunate that Current Science has an IF below 1.0. In fact, most Indian journals have relatively low IFs. There are multiple reasons, including some editorial policies and practices that do not make these journals attractive enough and which dilute the IF value. However, a major reason for their perceived ‘poor’ quality lies in the quality of submissions. What is made available by authors can be filtered only to some extent through peer review before being published. Many of the manuscripts submitted to these journals are not first-time submissions but are secondary, tertiary or even subsequent submissions. With poor-quality submissions, you cannot expect the journal to publish high-quality research! Our colleagues argue that since these journals, by and large, do not have ‘good’ articles published in their pages, why should they submit their ‘good’ papers to Current Science or similar journals? This vicious circle of poor submissions and poor recognition of the journal can only be snapped if more of us choose to send our work to these journals in the first place. To expect that the journal would first become good by some magic, and that only then would our scientists submit their ‘good’ papers for publication, is really putting the cart before the horse.
Do you think there’s a visibility issue? Are articles published in lower impact factor journals seen less, and consequently cited less, too?
My own experience makes me believe that the argument about visibility is a self-created alibi. All of my papers published during the 1980s in the Indian Journal of Experimental Biology (CSIR) have been cited, and tables and figures from some of them have been reproduced in reviews and monographs published by reputed authors in good journals and book series (e.g., Advances in Genetics, published by Academic Press). These papers were ‘visible’ to those interested even when the internet did not exist and the journal where the work was published was not on the subscription list of most libraries around the world. Now that the internet is freely available to everyone and all journals are immediately indexed by one search engine or another, visibility is no issue at all. In fact, since most of our journals are freely available on the net, they are more visible than many others for which you have to be a subscriber or be willing to pay a per-view charge!
The other aspect of visibility of a paper relates to how one selects a paper for citation in one’s own publication. I do not look for the name of the journal or the number of times the paper has been cited but like to assure myself that the paper that I am citing is indeed appropriate in the given context. Thus, neither the impact factor of the journal nor its ‘visibility’ to others (i.e., number of times cited) is of any consequence when I select a paper for citation.
IF was first thought of in 1975. Hasn’t it become archaic now?
The IF is not only archaic but is also being used for a purpose it was never meant for by its inventor, Eugene Garfield. The IF as a metric for assessing an individual or an institution has been promoted purely for commercial purposes by big publishing houses. Many learned societies, too, have unfortunately fallen in line.
The high IF of a journal in a given year does not guarantee the quality of every paper published in that journal in that year or during an earlier period. Unfortunately, many of our colleagues seem to be deeply imprinted by the IF bug, and consequently, the first question often asked about a newly published paper concerns the IF of the journal rather than what was significant in the paper itself. I certainly would like to see the day when no agency even wants to know the IF of the journals where one has published.
Is it time to look for new metrics for measuring journal quality? Also, should we measure journal quality at all?
I believe that any metric we develop (and many have been developed after the IF) will have serious limitations if that metric is used as an arithmetic value. Not only different broad disciplines but also different sub-disciplines within a broad field of research have varied citation patterns and frequencies. A given field may, at a given time, be populated by a much larger number of investigators (being ‘in fashion’), while another field has fewer active researchers. The frequency of citations in such cases would be very different, independent of the relative quality of the work. It would be impossible to buffer against such variations.
Some metrics can be used as qualitative indicators, but never as an absolute value for individuals, institutions or journals. In my perception, any ranking based on such numbers is misleading and can be tweaked unfairly. To measure the quality of science being practised, I would assess only the quality of the questions asked, the approaches used to address them and, finally, the interpretations provided in a paper. Such measurements have to be qualitative rather than quantitative.
Acknowledgements: Manupriya thanks Amitabh Joshi, TNC Vidya and Manan Gupta for conversations that led to this article.