“Can’t Believe It 2”
Posted: May 30, 2012 | Filed under: Uncategorized | Tags: Belief, Bias, Confirmation bias, peer review, Research
My earlier post – “can’t believe it” – triggered some bipolar comments (and further denials), and also the question of to what extent this behaviour can be observed among academics studying strategy. And, regarding the latter, I think: yes.
The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we – largely without realising it – are much more prone to notice things that confirm our prior beliefs. Things that go counter to them often escape our attention.
Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but we still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them – and that is when you see people start to question the research itself vehemently (if not aggressively and emotionally).
It becomes yet more worrying to me – on a personal level – if even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a research project showing no pay differences or a study showing that women are underpaid relative to men?
It’s only human and – slightly unfortunately – researchers are also human. And researchers are also reviewers and gate-keepers of the papers of other academics that are submitted for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.
And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal than into a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about), and perhaps vice versa.
No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving when it comes to the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, research may end up being biased and misleading.
Can you elaborate on how rejection of research findings is not a form of confirmation bias?
P.S. Hence also why Max Planck said that science progresses funeral by funeral.
Nice set of posts, especially when coupled with your post earlier this year: https://strategyprofs.wordpress.com/2012/01/06/why-you-really-cant-trust-any-of-the-research-you-read/
My main question would be: is what you describe here really a problem?
In statistics, for example, the null hypothesis and the suggested alternative play asymmetrical roles: one needs strong evidence in order to reject the null. I wonder if the confirmation bias and the higher bar set for publishing uncomfortable findings are just psychological and social mechanisms that maintain a similar asymmetry in real life (metaphorical p-values, maybe?). After all, the null hypothesis is often just a set of assumptions and technical conveniences, whereas our prior knowledge is something that we really, and often strongly, believe in.
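To make that analogy concrete, here is a minimal Python sketch of the asymmetry (the one-sample t-test, the effect size, the sample size, and the threshold are all illustrative assumptions, not anything from the discussion above): even when a small real effect exists, a conventional test rarely musters enough evidence to reject the null, so the default belief survives unless the evidence is strong.

```python
# Toy illustration of the null/alternative asymmetry: we simulate many small
# studies where the true mean is slightly above zero, and count how often a
# conventional test (alpha = 0.05) lets us reject the null "mean = 0".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

ALPHA = 0.05        # conventional significance threshold
TRUE_MEAN = 0.2     # the alternative is actually true, but the effect is small
N_SAMPLES = 30      # modest sample size per study
N_TRIALS = 10_000   # number of simulated studies

rejections = 0
for _ in range(N_TRIALS):
    sample = rng.normal(loc=TRUE_MEAN, scale=1.0, size=N_SAMPLES)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < ALPHA:
        rejections += 1

print(f"Null rejected in {rejections / N_TRIALS:.1%} of simulated studies")
# Typically prints something around 18%: a real but small effect usually fails
# to clear the bar, while the null is never asked to "prove" itself at all.
```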
Also, while denial and criticism may not be extremely productive by themselves, they may trigger activity that is. Forming alternative hypotheses and explanations (initially just to support our denial), theoretically refining and redefining concepts to capture more accurately the things we think we know, improving our methods and precision, etc. – all these are discussion “drivers” that trigger more research and eventually advance our knowledge. Can we do all that without the emotional energy that accompanies and drives that moment of “I can’t believe it”?
I had an experience along the lines suggested by Amit when presenting a paper claiming that very small Swedish environmentally oriented firms seemed to have lower financial performance than their “nearest neighbors” (a quasi-experimental design). On that occasion I presented a compilation of several papers, most of which were not as controversial, and that paper got 99% of the heat. Which was probably somewhat unfair. On the other hand, the vehement opposition DID help me reframe that particular paper so that it is now – I believe – a lot stronger. (And, based on my reading of the recent peer review, it will likely eventually get published in a decent journal in its new, better version.) So even though I suspect that the original opposition was somewhat ideologically colored, the process certainly helped improve the research.