
Course corrections needed in peer review publication process

Read time: 7 mins
Bengaluru
14 Feb 2022

In early 2020, as the world was grappling with the COVID-19 pandemic, scientists the world over were scrambling to characterise the novel pathogen. Research findings were being generated at an astonishing pace, and within the first six months of the pandemic, nearly 61,000 articles had been published!

The ominous flip side to this was that the sheer number of submissions completely overwhelmed the peer review process, particularly for medical journals. In their haste to publish the next important finding about the virus, journals apparently cut corners.

In May 2020, the Lancet retracted a paper that claimed that hydroxychloroquine was almost miraculously effective in treating COVID-19 (it isn’t). Next, the NEJM, arguably the most prestigious medical journal on the planet, followed suit and retracted a paper that described the cardiovascular effects of COVID-19 in persons with hypertension who were being treated with a class of drugs called ACE inhibitors. Both papers shared lead authors and were based on a completely fraudulent dataset. How could such scientific chicanery have gone unnoticed by the editors and reviewers at the world’s two most prestigious medical journals?
Publishing a paper in the NEJM or the Lancet is the holy grail for most medical professionals. Retractions during this crucial time left the medical community and society at large shaken. Trust in medical research was fractured, possibly irreparably.

As the medical community was reeling from this disaster and acrimonious debates raged over the trustworthiness of published medical research, Dr. Venkatesh Madhugiri decided to turn his attention to the issue. Dr. Madhugiri is an academic neurosurgeon and clinician-scientist at the National Institute of Mental Health and Neuro Sciences (NIMHANS), India’s premier neuroscience institute. He pondered the veracity of published clinical research as a whole and, specifically, within the context of his speciality.

“Retractions are not a new phenomenon, of course. Normally, however, the process of flagging a suspect paper, investigating concerns regarding the data, and eventually retracting the paper if the suspicions are borne out takes months or even years. But with the COVID papers, data fraud was detected in record time. This was possibly because the entire medical community was not only publishing, but also scrutinising COVID papers on a war footing. But what happens during “peacetime,” when fraudulent papers could potentially slip under the radar?” he asks.

Retractions

The high-impact journals (such as the Lancet and NEJM) are considered the standard-bearers of medical publishing and should theoretically have very low retraction rates. Dr. Madhugiri and his team decided to compare retractions in neurosurgery with those in the high-impact medical journals. They also included anaesthesiology journals in the comparison matrix.

“Anaesthesiology as a specialty has this reputation for having the highest number of retractions of any clinical field, thanks to the malpractice committed by a few researchers. For instance, one Dr. Fujii holds the record for the most retractions by any single author across all scientific disciplines: a whopping 182 fraudulent papers,” noted Dr. Amrutha Bindu Nagella, Assistant Professor of Anaesthesiology at the Sapthagiri Institute of Medical Sciences in Bangalore, and Dr. Madhugiri’s collaborator.

Anaesthesia journals had a retraction rate of 2.6 retractions per 1,000 published papers, much higher than the rate for the other groups. The high-impact journals, for instance, had a rate of 0.75 per 1,000, and the neurosurgical journals had a rate of 0.66 per 1,000 papers. Clearly, different disciplines retract papers at different rates. However, many questions remained unanswered. Why were these flawed papers retracted in the first place, and how were they cited and disseminated to the broader medical community?
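The rates above boil down to simple arithmetic: retractions divided by papers published, scaled to 1,000. The short Python sketch below illustrates the calculation; the raw counts are hypothetical, chosen only to reproduce the quoted rates.

```python
def retraction_rate_per_1000(retracted: int, published: int) -> float:
    """Retractions per 1,000 published papers."""
    return retracted / published * 1000

# Hypothetical raw counts, chosen only to reproduce the rates quoted above.
print(retraction_rate_per_1000(26, 10_000))    # anaesthesia: ~2.6 per 1,000
print(retraction_rate_per_1000(75, 100_000))   # high-impact: ~0.75 per 1,000
print(retraction_rate_per_1000(66, 100_000))   # neurosurgery: ~0.66 per 1,000
```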

The team then analysed retractions in neurosurgery in greater detail: 191 retracted papers published across 108 journals. They categorised the retracted papers based on the reasons for retraction and analysed their citation trends. Their findings were alarming, to put it mildly: author misconduct or data fraud accounted for more than two-thirds of all retractions in neurosurgery.

The lag from publication to retraction of flawed papers could be as long as 21 years! Although the median time to retraction was 16.2 months, it varied significantly by group; the high-impact journals, for instance, took longer to retract flawed papers. Worryingly, the team also found that the time to retraction was longer when papers were retracted for misconduct or data fraud than for genuine errors.

Even more concerning was the fact that papers continued to be read and disseminated widely even after retraction - half the total citations received by retracted papers were accrued after retraction! This is highly problematic, since merely retracting the flawed papers does not seem to erase their effects.

“The exponential explosion of medical literature is bound to create ever-increasing opportunities for mistakes, data fabrication and outright fraud. The sad reality is that many of them will remain perfect crimes. Niche fields appear particularly vulnerable, but the damage to public health and trust in science is greater when the rot happens in high-visibility journals,” said Dr. Gopalakrishnan, Professor of Neurosurgery at the prestigious Jawaharlal Institute of Postgraduate Medical Education and Research. He stresses the need for regular housekeeping of published research.

The tip of the iceberg

Pre-publication peer review is the gatekeeper of quality and veracity. But how efficient is this process? Recall that the COVID-related retractions happened at high-impact journals during a time of utmost scrutiny. If flawed papers could slip through even there, what happens to papers in lower-impact journals with far less visibility?

“Peer review is the point of entry that decides the relevance and internal consistency of a research paper. It is simply not equipped to detect data error or statistical manipulation unless the reviewers are provided with the experimental data. A logjam of papers awaiting peer review (as during the pandemic) can completely overwhelm this process. Then, the point of quality control shifts further downstream, namely to after publication,” said Dr. Subeikshanan Venkatesan, a member of the research team and a contributing author to the study.

In their third and final paper, the authors sought to dive deeper into the universe of flawed papers. Using a formula described by Cokol et al., they estimated the proportion of potentially retractable papers to be around 1% of all published papers in both the neurosurgery and high-impact journal groups. This is still a very large number, considering that the number of papers published runs into the millions.

Dr. Madhugiri and his team had to think outside the box to make sense of this murky milieu of fraudulent publishing. They developed two new parameters: the retraction gap and the proportion of true papers in a journal. The retraction gap is simply the proportion of compromised articles that remain un-retracted; it represents the failure of post-publication scrutiny to detect fraudulent papers. The proportion of true papers, as the name suggests, is the proportion of papers in a journal presumed to contain no error. It represents the success of the peer review process, in that such papers have stood the test of post-publication scrutiny.

The retraction gap was found to be incredibly high in neurosurgery and high-impact journals - around 96%. This means that only a tiny fraction of the fraudulent papers that have escaped detection during peer review are subsequently detected to be rotten apples and retracted because of post-publication scrutiny.
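For readers who like to see the definitions in action, here is a minimal Python sketch of the two parameters. The counts are hypothetical, chosen only to reproduce the roughly 1% flawed-paper estimate and the roughly 96% gap quoted above.

```python
def retraction_gap(estimated_flawed: int, actually_retracted: int) -> float:
    """Share of presumed-flawed papers that were never retracted."""
    return (estimated_flawed - actually_retracted) / estimated_flawed

def true_paper_proportion(total_published: int, estimated_flawed: int) -> float:
    """Share of published papers presumed to contain no error."""
    return (total_published - estimated_flawed) / total_published

# Hypothetical counts: if ~1% of 100,000 papers are potentially
# retractable but only 40 were ever retracted, the gap works out to 96%.
print(retraction_gap(1_000, 40))               # ~0.96
print(true_paper_proportion(100_000, 1_000))   # ~0.99
```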

Neurosurgery journals had both a higher proportion of true papers and a higher retraction gap when compared with high-impact journals. This implies that a strong peer review process is the mainstay for quality control in neurosurgery and similar sub-specialty journals with a limited audience. The higher retraction gap implies that post-publication scrutiny was not sufficient to detect all the potentially retractable papers in such niche journals. This is expected, since journals dedicated to publishing papers from a single discipline would have far less visibility when compared to the high-impact journals that cater to a much wider audience.

Only a tiny fraction of possibly flawed papers are actually detected and retracted. (Graphic credit: authors of the paper)

“Even if we assume that adequate time and resources can be spared to address these issues, the necessary awareness and willingness to actually effect changes may not be widespread,” noted Dr. Akshat Dutt, another member of the research group and one of the contributing authors to the retraction series.

Nevertheless, the authors offer useful insights about fighting corruption within science. Dr. Madhugiri is of the opinion that almost anyone, from medical students and resident doctors in training to academic peers and even those from outside the field, can inspect and critically analyse a research paper, given a little training. Free online resources such as the Retraction Watch database and PubPeer facilitate this academic spring cleaning.

To improve the peer review process, journals and editors should consider increasing the number of subspecialty-specific associate editors and enlarging the pool of available reviewers. Making published articles freely available and having mandatory data deposition policies are good first steps to improve post-publication scrutiny, which would in turn reduce the retraction gap. At the end of the day, science by its nature, is self-correcting and a little nudge goes a long way.