
Are we evaluating our research right?

Bengaluru
1 Oct 2018

Science relies on experiment and observation of nature to determine the truth or falsity of models and theories. But the evaluation of individual performance in the scientific community is fraught with difficulty. Historically, scientific research was judged based on the opinions of fellow scientists. Now, with the increasing availability of quantitative data and of electronic databases to store it, these subjective judgements are being replaced by seemingly objective criteria. In India, metrics like the 'impact factor' of a journal and the 'h-index' are increasingly being used to decide appointments and promotions.

However, are these indices infallible? If not, is it time to revise these standards? In a study published in the Indian Journal of Medical Ethics, scientists from the Department of Science and Technology (DST)'s Centre for Policy Research at the Indian Institute of Science (IISc), Bengaluru, and the Council of Scientific and Industrial Research (CSIR)-Central Electrochemical Research Institute (CSIR-CECRI), Karaikudi, have examined these popular measurement standards and their trustworthiness.

The impact factor of a journal measures the average number of citations that papers published in the journal receive over a defined period. Journals typically carry a mix of highly-cited and less-cited papers, which together produce this average. But does the average say anything about the quality of an individual piece of research? An example cited by the authors of the current study is the renowned journal Nature, where 1% of papers account for nearly 12% of all citations; many other papers are cited far less often or not at all. Hence, they argue, the impact factor of a journal cannot be attributed to an individual paper published in it. Besides, research papers in fields like biomedicine tend to receive more citations than those in areas like mathematics and agriculture.
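A short, illustrative calculation in Python, using entirely made-up citation counts (not figures from the study), shows how a journal-level average can hide a highly skewed distribution:

    # Hypothetical citation counts for ten papers in an imaginary journal
    citations = [120, 3, 2, 1, 1, 1, 1, 1, 0, 0]

    # The impact factor is essentially this average (in practice computed over
    # citations to papers from the preceding two years)
    impact_factor = sum(citations) / len(citations)
    print(impact_factor)                     # 13.0

    # One paper contributes over 90% of all citations, yet every paper in the
    # journal "inherits" the same impact factor of 13.
    print(max(citations) / sum(citations))   # ~0.92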

The h-index, on the other hand, is based on the number of papers a researcher has published and the number of citations each receives: it is the largest number h such that h of the researcher's papers have each been cited at least h times. For example, a researcher's h-index is 10 if ten of their papers have been cited 10 or more times, but they do not have eleven papers cited 11 or more times. The metric depends only on the papers a researcher has published, not on the journals in which they appear. Does that make it a better measure? Not necessarily, say the authors of this study.
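A minimal sketch of the computation in Python, again with made-up citation counts, makes the definition concrete:

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([100, 50, 12, 10, 10, 9, 3, 1]))  # 6: six papers have at least 6 citations each
    print(h_index([10] * 10))                        # 10
    print(h_index([1000] * 10))                      # also 10 -- citations beyond h are ignored

The last two lines preview the criticism that follows: ten papers cited ten times each and ten papers cited a thousand times each yield exactly the same h-index.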

“The index does not take into account the actual number of citations received by each paper even if these are far in excess of the number equivalent to the h-index and can thus lead to misleading conclusions”, says Dr. Subbiah Gunasekaran from CSIR-CECRI, who is an author of the study. A single piece of highly regarded, groundbreaking research with an exceptionally high citation count thus adds no more to the index than a modestly cited paper. The h-index also suffers from the problem that the average number of citations per paper varies from one field of study to another. The Stanford University chemist Richard Zare believes that the h-index is a poor measure for judging researchers early in their career, and that it is more a trailing, rather than a leading, indicator of professional success.

Interestingly, there have been warnings about the misuse of these metrics ever since they were developed and published. For example, the Joint Committee report on Quantitative Assessment of Research, formed by three international mathematics institutions, cautioned that these metrics “should be used properly, interpreted with caution,” and went on to call the h-index a “naïve and poor measure in judging a researcher”. Eugene Garfield, a founding father of scientometrics (the study of measuring and analysing science) and the creator of the Science Citation Index (SCI), warned against “the possible promiscuous and careless use of quantitative citation data for sociological evaluations, including personnel and fellowship selection.”

Unfortunately, Indian regulatory and funding agencies have institutionalised such misuse. In addition to academic and research institutions, peer-review committees have also begun to use these metrics to rank researchers and institutions. Some establishments, such as the Indian Institute of Management, Bengaluru, give out monetary rewards to researchers who publish in high-impact-factor journals. The DST uses the h-index to evaluate universities, even though it is a measure of the productivity of an individual researcher, and the h-index of a university may often be determined by the work of a small number of individuals in one or two departments. Others, like the University Grants Commission (UGC), the Indian Council of Medical Research (ICMR) and the Medical Council of India, use impact factors to select researchers and scientists for fellowships, for the selection and promotion of faculty, and for sanctioning grants to departments and institutions.

The UGC has mandated that universities and colleges evaluate aspiring and existing faculty using a quantitative metric that is heavily based on the impact factors of journals. Such a process can lead to arbitrariness in recruitment and evaluation practices. “If we compromise on the selection of researchers for jobs and decide to fund based on citation-based indicators in an uninformed way, obviously, there is every possibility that it would affect quality of research performed at Indian universities and laboratories”, says Dr. Muthu Madan from the Centre for Policy Research at IISc, who is an author of this study. These problems are further exacerbated because many supposedly “reputed” and “refereed” Indian journals are substandard and predatory, making India the world’s capital for predatory journals; such policies only help to breed and sustain them.

Even if these metrics are not misused, they still have their limitations, say the authors. Research activity is not reducible to quantitative metrics like the h-index, the impact factor or the number of papers published. These only seek to measure productivity, without reference to the creativity and originality that the education system must encourage. As an example of the shortcomings of purely scientometric evaluations, Nobel laureates like Peter Higgs, of Higgs boson fame, and Ed Lewis would have been rated as poor performers by these metrics. Additionally, an evaluation process based on scientometrics, like the National Institutional Ranking Framework (NIRF) instituted by the Ministry of Human Resource Development (MHRD), is liable to “gaming” and encourages research driven by assessment and performance targets.

Internationally, the promotion and selection of faculty members are based on a combination of peer review and metrics, with the balance determined by costs and academic traditions in each place. Generally, though, the process is weighted more towards peer review than towards quantitative metrics. For example, Stanford University solicits the views of 10-15 national and international external experts to evaluate the research contribution of a faculty member. In India too, some institutions like the National Centre for Biological Sciences (NCBS) and the Indian Institute of Science (IISc) rely heavily on the opinions of peers. However, since peer review is expensive, the authors of the study feel that the costs of rolling out such a system across the nation would be prohibitive.

The study calls for greater transparency and accountability in the process of assessment, and for giving greater importance to originality and creativity in evaluating performance and proposals. It finds that the methods of assessing institutions and of appointing faculty, directors and vice-chancellors are mired in corruption, nepotism and political favouritism. The need of the hour is to clean up and depoliticise regulatory bodies like the UGC, the All India Council for Technical Education (AICTE) and the National Assessment and Accreditation Council (NAAC). The problems lie less with the tools and more with the agencies that govern and oversee academic institutions and research. In summary, the evaluation of researchers in India is not being done right at present, conclude the authors.