Happy Talk About Published Scientific Error and Fraud

Over at the American Scientist (in an overall interesting Jan.-Feb. 2013 issue) we have a column arguing that there’s no need to worry about a contagion of fraud and error in scientific publication, even though the number of publications has exploded and the number of retractions has exploded along with them. The basic pitch: the scientific literature is wonderfully self-correcting. The evidence given: the ratio of voluntary corrections to retractions for fraud looks kind of high, and journals with more aggressive and welcoming policies toward corrections have more of them. I kid you not.

But wait, you say. How is that evidence at all probative? Good question, as one says when the student goes right where we want to take the discussion. At the very least, we’d want to see whether the rate of retractions is going up over time, but somehow those figures and graphs don’t appear in the article. But what we’d really like to know is how many non-retracted, non-corrected, and non-commented articles are in fact erroneous or misleading despite peer review, and here the article is silent. Its evidence is almost completely non-responsive to the question it purports to address. But the problem goes deeper.

Recent public concerns, including on this blog, have pointed to pressures for sensationalism, publication bias, data snooping, experimental tuning bias, and similar causal mechanisms. John Ioannidis has made a pretty good career pounding on these issues and trying to place upper and lower bounds on the problem. The devastating Begley and Ellis study of “landmark” papers in preclinical cancer research found that only 6 of 53 had reproducible results, even after going back to the original investigators, and sometimes even after the original investigators themselves tried to reproduce their published results. Here is what Begley and Ellis think about the health of the peer-reviewed publishing system in preclinical cancer research:

The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.

Of this substantial and growing literature on the prevalence of error and publication of invalid results, the American Scientist article is entirely innocent. Instead, it uses a single Wall Street Journal article as its target for attack, and even there ignores the non-anecdotal parts of the story–evidence that retractions have been growing faster than publications since 2001 (up 1500% vs. a 44% increase in papers), that the time lag between publication and retraction is growing, and that retractions in biomedicine related to fraud have been growing faster than those due to error and constitute about 75% of the total retractions.
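To put those growth figures in perspective, here is a back-of-the-envelope calculation (assuming the WSJ’s “up 1500%” refers to the count of retractions, i.e., roughly 16 times its 2001 level):

\[
\frac{\text{retractions per paper, end of period}}{\text{retractions per paper, 2001}} \approx \frac{1 + 15.00}{1 + 0.44} = \frac{16}{1.44} \approx 11
\]

In other words, even after adjusting for the growth in the number of publications, the per-paper retraction rate appears to have risen by roughly an order of magnitude.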

Perhaps a corrigendum is in order over at the Am Sci.

UPDATE:

A September 2012 article in PNAS found that most retractions are caused by misconduct rather than error:

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.



