Happy Talk About Published Scientific Error and Fraud

Over at the American Scientist (in an overall interesting Jan-Feb. 2013 issue) we have a column arguing that there’s no need to worry about a contagion of fraud and error in scientific publication, even though the number of publications has exploded and the number of retractions has exploded along with them. The basic pitch: the scientific literature is wonderfully self-correcting. The evidence given: the ratio of voluntary corrections to retractions for fraud looks kind of high, and journals with more aggressive and welcoming policies toward corrections have more of them. I kid you not.

But wait, you say. How is that evidence at all probative? Good question, as one says when a student goes right where we want to take the discussion. At the very least, we’d want to see whether the rate of retractions is going up over time, but somehow those figures and graphs don’t appear in the article. But what we’d really like to know is how many non-retracted, non-corrected, and non-commented articles are in fact erroneous or misleading despite peer review, and here the article is silent. Its evidence is almost completely non-responsive to the question it purports to address. But the problem goes deeper.

Recent public concerns, including on this blog, have noted pressures for sensationalism, publication bias, data snooping, experimental tuning bias, and similar causal mechanisms. John Ioannidis has made a pretty good career pounding on these issues and trying to place upper and lower bounds on the problem. The devastating Begley and Ellis study of “landmark” papers in preclinical cancer research found that only 6 of 53 had reproducible results, even after going back to the original investigators and sometimes even after the original investigators themselves tried to reproduce their published results. Here is what those authors think about the health of the peer-reviewed publishing system in preclinical cancer research:

The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.

Of this substantial and growing literature on the prevalence of error and publication of invalid results, the American Scientist article is entirely innocent. Instead, it uses a single Wall Street Journal article as its target for attack, and even there ignores the non-anecdotal parts of the story: evidence that retractions have been growing faster than publications since 2001 (up 1,500% vs. a 44% increase in papers), that the time lag between publication and retraction is growing, and that retractions in biomedicine related to fraud have been growing faster than those due to error and now constitute about 75% of all retractions.
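To see what those WSJ numbers imply, here is a quick back-of-the-envelope calculation (a sketch in Python using only the figures quoted above; nothing here comes from the American Scientist piece):

    # Back-of-the-envelope using the WSJ figures cited above.
    retraction_growth = 15.00   # retractions up 1,500% since 2001, i.e., 16x the old level
    paper_growth = 0.44         # papers up 44%, i.e., 1.44x the old level

    rate_multiplier = (1 + retraction_growth) / (1 + paper_growth)
    print(f"Per-paper retraction rate grew roughly {rate_multiplier:.1f}x")  # ~11.1x

In other words, even after normalizing for the growth of the literature itself, the retraction rate per paper appears to have grown roughly elevenfold, which is hard to square with “nothing to see here.”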

Perhaps a corrigendum is in order over at the Am Sci.

UPDATE:

A September 2012 article in PNAS found that most retractions are caused by misconduct rather than error:

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.


Scientists as workaholics…

In the article below, Wired reports on a study of when researchers download articles (middle of the night? Yep! Weekends? Yep!) and concludes that scientists are workaholics. The article also opines that it is the intense competition and stress of the scientists’ jobs that cause them to engage in such obviously self-destructive behavior. I think they could have the causal mechanism wrong here. I believe many researchers work at odd hours, at least in part, because they find it pleasurable — not because of external pressure. People end up in these fields (and successful in these fields) because studying something is what they like to do and are good at. Information technology just enables them to more liberally indulge in this rewarding (and rewarded) behavior.

I was scolded just last weekend for the fact that I almost never read fiction anymore. I was afraid to admit that I am often too busy on non-fiction endeavors — like an internet scavenger hunt to figure out just why lobsters maintain telomerase activation throughout their lives, and may thus have a potential lifespan of…wait for it…FOREVER. That is seriously cool — how could a Grisham novel ever compete? But I might be biased because I like researching things, at any hour of the day. If you’re reading this, I bet you do too.

-Melissa

http://www.wired.com/wiredscience/2012/08/the-results-are-in-scientists-are-workaholics/


Twittering Strategy Profs

For those of you who also follow Twitter, LDRLB, an “online think tank that shares insights from research on leadership, innovation, and strategy,” has just posted a list of Top Professors on Twitter. The categories are Leadership, Innovation, and Strategy (15 profs in each category). Good lists — all good folks with thoughtful views on the world of strategy. Nice to see a number of StrategyProfs bloggers listed.


“Big Data” Business Strategy for Scammers

A terrific paper by Cormac Herley of Microsoft Research came out, entitled “Why Do Nigerian Scammers Say They Are From Nigeria?” It turns out that 51% of scam emails mention Nigeria as the source of funds. Given that “Nigerian scammer” now makes it regularly into joke punch lines, why in the world would scammers continue to identify themselves this way? The paper was mentioned in a news item here, if you want the executive-summary version, but really, I can’t imagine readers of this blog not finding the actual paper worthwhile and fun (it contains a terrific little model of scamming).

In a nutshell, the number of people gullible enough to fall for an online scam is tiny compared to the population that must be sampled to find them. This creates a huge false-positive problem: people who respond in some way, and hence require an expenditure of scammer resources, but who ultimately do not follow through on being duped.

As the author explains, false positives (people identified as viable marks but who do not ultimately fall for the scam) must be balanced against false negatives (people who would fall for the scam but who are never targeted). Since targeting is essentially costless, the scammer’s main concern is the false positive: someone who responds to an initial email with replies, phone calls, etc. – all of which consume scammer resources – but who eventually fails to take the bait. Apparently, it does not take many false positives before the scam becomes unprofitable, and because the vulnerable population is minuscule relative to the population sampled with the initial email, false positives can easily swamp the true hits.

The scammer’s solution? Give every possible hint – including identifying yourself as being from Nigeria – that you are a stereotypical scammer, without actually saying so. Anyone who replies to such an offer must be incredibly naive and uninformed (to say the least). False positives under this strategy drop considerably!
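To see why this works, here is a minimal numerical sketch of the logic (my own toy numbers, not parameters from Herley’s paper):

    # Toy model of the scammer's problem (invented numbers, not Herley's).
    # Sending email is free; every reply costs effort; only true victims pay out.

    def profit(n_contacted, p_reply, p_victim_given_reply, payout, cost_per_reply):
        replies = n_contacted * p_reply
        victims = replies * p_victim_given_reply
        return victims * payout - replies * cost_per_reply

    # Broad, plausible-sounding pitch: many replies, nearly all false positives.
    broad = profit(1_000_000, p_reply=0.01, p_victim_given_reply=0.001,
                   payout=2_000, cost_per_reply=20)

    # "I am from Nigeria" pitch: far fewer replies, but the gullible self-select.
    nigeria = profit(1_000_000, p_reply=0.0005, p_victim_given_reply=0.05,
                     payout=2_000, cost_per_reply=20)

    print(f"Broad pitch:   {broad:+,.0f}")    # -180,000: replies eat all the gains
    print(f"Nigeria pitch: {nigeria:+,.0f}")  # +40,000: fewer, better-qualified marks

The absurd-sounding pitch sacrifices volume to buy precision, and precision is what pays.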

Geeeeenius!

UPDATE: Josh Gans was blogging about this last week over at Digitopoly. He’s not convinced by the explanation, though: to the extent there are “vigilante” types willing to expend resources to mess with scammers, the Easy-ID strategy could incur additional costs. As an interesting side note, in discussing this with Josh, he at one point suggested that when legit firms come across scammers, they should counterattack by flooding them with, e.g., millions of fake/worthless credit card numbers (setting off something like a false-positive atom bomb). Just one snag: US laws protect scammers from these kinds of malicious attacks.


Individual Bias and Collective Truth?

Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)

Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.

But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges and sort out which ideas are better.

Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
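For the flavor of that incentive argument, here is a stylized little simulation (entirely my own invention as an illustration, not a model from Polanyi’s article): observers who bet their own research programs on a theory eventually escape a false consensus, while observers rewarded merely for agreeing with the majority never do.

    import random

    random.seed(0)

    def run(n_observers=100, periods=50, signal_quality=0.6, conformist=False):
        # Theory A is true, but 80% of observers start out backing theory B.
        beliefs = ['B'] * 80 + ['A'] * 20
        evidence = [0] * n_observers   # each observer's net evidence favoring A
        for _ in range(periods):
            majority = max(set(beliefs), key=beliefs.count)
            for i in range(n_observers):
                if conformist:
                    beliefs[i] = majority   # payoff comes from agreeing, not truth
                else:
                    # Experiments built on the true theory succeed a bit more often.
                    evidence[i] += 1 if random.random() < signal_quality else -1
                    beliefs[i] = 'A' if evidence[i] > 0 else 'B'
        return beliefs.count('A') / n_observers

    print(f"Skin in the game: {run(conformist=False):.0%} land on the truth")
    print(f"Conformists:      {run(conformist=True):.0%} land on the truth")

With payoffs tied to one’s own results, weak but persistent evidence accumulates and the false consensus dissolves; with payoffs tied to conformity, the initial consensus is self-sealing.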

The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another, regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings; i.e., enough boredom and disquiet with the consensus must set in that some people are willing to entertain new ideas.


Fungibility v. Fetishes

For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don’t seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent “optical illusion” causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the “value-based” approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.

Some examples:

  • There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles addresses whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly (see the sketch below). There is a pervasive confusion between the form of payments and the total size of payments.
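The arithmetic of the illusion is easy to make concrete (hypothetical numbers, purely illustrative):

    # Hypothetical: competition pins down the all-in price of (handset + 2-yr plan),
    # so the subsidy/rate mix cannot change the carrier's total take.
    total_competitive_price = 2_000   # fixed by rivalry among carriers

    plans = {
        "big subsidy": {"handset": 200},   # cheap phone, pricier service
        "subsidy cut": {"handset": 600},   # pricier phone, cheaper service
    }
    for name, p in plans.items():
        monthly = (total_competitive_price - p["handset"]) / 24
        total = p["handset"] + 24 * monthly
        print(f"{name}: ${p['handset']} phone + 24 x ${monthly:.2f}/mo = ${total:,.0f}")
    # Both plans sum to $2,000: only the form of payment changes, not the size.

If competition really does fix the total at $2,000, a smaller subsidy must be matched by lower monthly rates, and the handset makers’ wholesale prices are no easier to squeeze than before.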



Crowdsourcing Experiments, Mechanical Turk, ‘n stuff

The Economist has a piece on how crowdsourcing and tools like Mechanical Turk are transforming science: “the roar of the crowd.” Here’s a blog dedicated to helping scientists set up their experiments on Mechanical Turk: Experimental Turk. I’m guessing it is only a matter of time before some strategy-related experiments get done on Mechanical Turk; here are probably a few such pieces already (well, just a very loose search of mechanical turk + smj + mgtsci).
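For the curious, the plumbing is simple. Here is a minimal sketch of posting a survey-style experiment as a HIT via Amazon’s boto3 MTurk client against the requester sandbox (all titles, URLs, and payment values are placeholders; it assumes AWS credentials are configured and the experiment itself is hosted at your own URL):

    import boto3

    # Sandbox endpoint: nothing is paid out while testing.
    mturk = boto3.client(
        "mturk",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # MTurk displays an externally hosted experiment inside a frame.
    question_xml = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/my-strategy-experiment</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    hit = mturk.create_hit(
        Title="Short decision-making study (about 10 minutes)",
        Description="Answer a few questions about business scenarios.",
        Keywords="survey, experiment, strategy",
        Reward="0.50",                        # dollars, passed as a string
        MaxAssignments=100,                   # distinct participants
        LifetimeInSeconds=7 * 24 * 3600,      # HIT visible for one week
        AssignmentDurationInSeconds=30 * 60,  # time allowed per worker
        Question=question_xml,
    )
    print("HIT ID:", hit["HIT"]["HITId"])

Swap in the production endpoint once the design is final; the hard part is the experimental design, not the posting.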

