The Translation Fallacy

[Image: The Rosetta Stone in the British Museum. (Photo credit: Wikipedia)]

If you have ever been unlucky enough to attend a large gathering of strategy academics – as I have, many times – it may have struck you that at some point during such a feast (euphemistically called “conference”), the subject matter would turn to talk of “relevance”. It is likely that the speakers were a mix of the senior and grey – in multiple ways – interspersed with aspiring Young Turks. A peculiar meeting of minds, in which the feeling might have dawned on you that the senior professors were displaying a growing fear of bowing out of the profession (or life in general) without ever having had any impact on the world they had spent a lifetime studying, while the young assistant professors showed an endearing naivety in believing they were not going to grow up like their academic parents.

And the conclusion of this uncomfortable alliance – under the glazed eyes of some mid-career associate professors, who could no longer and not yet care about relevance – will likely have been that “we need to be better at translating our research for managers”; that is, if we’d just write up our research findings in more accessible language, without elaborating on the research methodology and theoretical terminology, managers would immediately spot the relevance of our research and eagerly suck up its wisdom.

And I think that’s bollocks.

I don’t think it is bollocks that we – academics – should try to write something that practicing managers are eager to read and learn about; I think it is bollocks that all it needs is a bit of translation in layman’s terms and the job is done.

Don’t kid yourself – I am inclined to say – it ain’t that easy. In fact, I think there are three reasons why I never see such a translation exercise work.

1. Ignorance

It underestimates the intricacies of the underlying structure of a good managerial article, and the subtleties of writing convincingly for practicing managers. If you’re an academic, you might remember that in your first year as a PhD student you had the feeling it wasn’t too difficult to write an academic article such as the ones you had been reading for your first course, only to figure out, after a year or two of training, that you had been a bit naïve: you had been (blissfully) unaware of the subtleties of writing for an academic journal; how to structure the arguments; which prior studies to cite and where; which terminology to use and what to avoid; and so on. Well, good managerial articles are no different; if you haven’t yet developed the skill to write one, you likely don’t quite realise what it takes.

2. False assumptions

It also seems that academics, wanting to write their first managerial piece, immediately assume they have to be explicitly prescriptive, and tell managers what to do. And the draft article – invariably based on “the five lessons coming out of my research” – would indeed be fiercely normative. Yet, those messages often seem impractically precise and not simple enough (“take up a central position in a network with structural holes”) or too simple to have any real use (“choose the right location”). You need to capture a busy executive’s attention and interest, giving them the feeling that they have gained a new insight into their own world by reading your work. If that is prescriptive: fine. But often precise advice is precisely wrong.

3. Lack of content

And, of course, more often than not, there is not much worth translating… Because people have been doing their research with solely an academic audience in mind – and the desire to also tell the real world about it only came later – it has produced no insight relevant for practice. I believe that publishing your research in a good academic journal is a necessary condition for it to be relevant; crappy research – no matter how intriguing its conclusions – can never be considered useful. But rigour alone, unfortunately, is not a sufficient condition for it to be relevant and important in terms of its implications for the world of business.


“Can’t Believe It 2”

My earlier post – “Can’t Believe It” – triggered some polarised comments (and further denials), as well as the question of to what extent this behaviour can be observed among academics studying strategy. And, regarding the latter, I think: yes.

The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we – largely without realising it – are much more prone to notice things that confirm our prior beliefs. Things that go counter to them often escape our attention.

Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but we still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them, which is why you see people start to question the research itself vehemently (if not aggressively and emotionally).

It becomes yet more worrying to me – on a personal level – if even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a research project showing no pay differences or a study showing that women are underpaid relative to men?

It’s only human and – slightly unfortunately – researchers are also human. And researchers are also reviewers and gate-keepers of the papers of other academics that are submitted for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.

And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal as compared to a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about), and perhaps vice versa.

No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving when it comes to the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, research may end up being biased and misleading.


B-School Disruption Update

Want to be an entrepreneur? Enstitute is bringing back apprenticeships

This is the answer to those who think we will keep our research-based MBAs above water by making the curriculum more “relevant in the real world” … by which people seem to mean sacrificing academic content for: external projects with business sponsors, “living” case studies, 1st summer internships, support services for personal grooming, etc. As I have long argued, research faculty are not efficient providers of substitute “real world” experiences.

Apropos this discussion, last week E[nstitute] was launched in NYC by founders Kane Sarhan and Shaila Ittycheria. The idea is to pick up promising candidates with a high school diploma and put them through a two-year apprenticeship program mentored by some of NYC’s top entrepreneurs. Impressive.

And it isn’t just business schools this program threatens — in a recent article, Brad Mcarty, editor at Insider, points out, “… the average public university (in the US) will set you back nearly $80,000 for a 4-year program. And a private school will cost in excess of $150,000. At the end of that time, you have a bellybutton,” he writes. “Oh sure, you might have a piece of paper that says you have a Bachelor of Science or Art degree but what you actually have is something that has become so ubiquitous that it’s really not worth much more than the lint inside your own navel.”

That’s strong stuff and, sadly, uncomfortably close to the truth. Moreover, it speaks to strong potential demand for apprenticeship-style entrepreneurship programs like the one mentioned above. Personally, I think it’s terrific. The existence of programs like this creates more value at the societal level. From the b-school foxhole, they also force research-based MBA providers to think more carefully about what, if any, comparative advantage we have vis-à-vis the many non-traditional competitors we now see invading our industry.

Hint: the answer will have to involve our research. This is what we do. And, contrary to the whining and hand-wringing of so many traditional MBA providers, teaching young people cutting-edge general principles (i.e., research-based knowledge) has substantial market value. We just stopped doing it a couple of decades ago.

 


Why you really can’t trust any of the research you read

Researchers in Management and Strategy worry a lot about bias – statistical bias. In case you’re not such an academic researcher, let me briefly explain.

Suppose you want to find out how many members of a rugby club have their nipples pierced (to pick a random example). The problem is, the club has 200 members and you don’t want to ask them all to take their shirts off. Therefore, you select a sample of 20 of the guys and ask them to bare their chests. After some friendly banter they agree, and it appears that no fewer than 15 of them have their nipples pierced, so you conclude that the majority of players in the club have likely undergone the slightly painful (or so I am told) aesthetic enhancement.

The problem is, there is a chance that you’re wrong. There is a chance that due to sheer coincidence you happened to select 15 pierced pairs of nipples where among the full set of 200 members they are very much the minority. For example, if in reality out of the 200 rugby blokes only 30 have their nipples pierced, due to sheer chance you could happen to pick 15 of them in your sample of 20, and your conclusion that “the majority of players in this club has them” is wrong.
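To make the arithmetic concrete: the probability of such a fluke can be computed exactly. Here is a minimal sketch, assuming Python with scipy and using the hypothetical numbers above (200 members, 30 actually pierced, a sample of 20):

```python
# Probability of drawing 15 or more pierced members in a sample of 20,
# purely by chance, when only 30 of the 200 club members are actually pierced.
# (The numbers are the hypothetical ones from the example above.)
from scipy.stats import hypergeom

population = 200   # club members
pierced = 30       # members with pierced nipples
sample = 20        # members asked to bare their chests

# P(X >= 15) = survival function evaluated at 14
p_fluke = hypergeom.sf(14, population, pierced, sample)
print(f"Chance of 15+ pierced in the sample purely by coincidence: {p_fluke:.1e}")
```

The chance is tiny, but it is not zero – and the convention described next is precisely about deciding how much of this kind of risk we are willing to accept.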

Now, in our research, there is no real way around this. Therefore, the convention among academic researchers is that it is ok, and you can claim your conclusion based on only a sample of observations, as long as the probability that you are wrong is no bigger than 5%. If it ain’t – and one can relatively easily compute that probability – we say the result is “statistically significant”. Out of sheer joy, we then mark that number with a cheerful asterisk * and say amen.
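For illustration, this is roughly what that calculation looks like in practice – a sketch assuming Python with scipy, and treating the 20 observations as independent draws for simplicity:

```python
# The 5% convention in action: given 15 pierced out of a sample of 20, how likely
# is a result at least this extreme if in truth only half the club were pierced?
from scipy.stats import binomtest

result = binomtest(k=15, n=20, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")   # roughly 0.021
if result.pvalue <= 0.05:
    print("statistically significant * -- say amen")
else:
    print("not significant -- no asterisk for you")
```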

Now, I just said that “one can relatively easily compute that probability” but that is not always entirely true. In fact, over the years statisticians have come up with increasingly complex procedures to correct for all sorts of potential statistical biases that can occur in research projects of various natures. They treat horrifying statistical conditions such as unobserved heterogeneity, selection bias, heteroscedasticity, and autocorrelation. Let me not try to explain to you what they are, but believe me they’re nasty. You don’t want to be caught with one of those.

Fortunately, the life of the researcher is made easy by standard statistical software packages. They offer nice user-friendly menus where one can press buttons to solve problems. For example, if you have identified a heteroscedasticity problem in your data, there are various buttons to press that can cure it for you. Now, note that it is my personal estimate (but notice, no claims of an asterisk!) that about 95 out of 100 researchers have no clue what happens within their computers when they press one of those magical buttons, but that does not mean it does not solve the problem. Professional statisticians will frown and smirk at the thought alone, but if you have correctly identified the condition and the way to treat it, you don’t necessarily have to fully understand how the cure works (although I think it would often help in selecting the correct treatment). So far, so good.
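For the curious, here is roughly what such a “button” amounts to in code – a sketch with made-up data, assuming Python with statsmodels, where heteroscedasticity-robust (HC) standard errors are one common cure:

```python
# "Pressing the button": heteroscedasticity-robust standard errors in statsmodels.
# Made-up data where the noise grows with x, i.e. a textbook case of heteroscedasticity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(scale=1 + np.abs(x))        # error variance depends on x

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                           # assumes constant error variance
robust = sm.OLS(y, X).fit(cov_type="HC1")            # the "button": robust covariance

print("naive standard errors: ", naive.bse)
print("robust standard errors:", robust.bse)         # same coefficients, different errors
```

The coefficient estimates stay exactly the same; only the standard errors (and hence the asterisks) change.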

Here comes the trick: All of those statistical biases are pretty much irrelevant. They are irrelevant because they are all dwarfed by another bias (for which there is no life-saving cure available in any of the statistical packages): publication bias.

The problem is that if you have collected a whole bunch of data and you don’t find anything, or at least nothing really interesting and new, no journal is going to publish it. For example, the prestigious journal Administrative Science Quarterly proclaims in its “Invitation to Contributors” that it seeks to publish “counterintuitive work that disconfirms prevailing assumptions”. And perhaps rightly so; we’re all interested in learning something new. So if you, as a researcher, don’t find anything counterintuitive that disconfirms prevailing assumptions, you are usually not even going to bother writing it up. And in case you’re dumb enough to write it up and send it to a journal requesting them to publish it, you will swiftly (or less swiftly, depending on which journal you sent it to) receive a reply that has the word “reject” firmly embedded in it.

Yet, unintentionally, this publication reality completely messes up the “5% convention”, i.e. that you can only claim a finding as real if there is no more than a 5% chance that what you found is sheer coincidence (rather than a counterintuitive insight that disconfirms prevailing assumptions). In fact, the chance that what you are reporting is bogus is much higher than the 5% you so cheerfully claimed with your poignant asterisk. Because journals will only publish novel, interesting findings – and therefore researchers only bother to write up seemingly intriguing counterintuitive findings – the chance that what eventually gets published is unwittingly BS is vast.

A recent article by Simmons, Nelson, and Simonsohn in Psychological Science (cheerfully entitled “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”) summed it up with prickly clarity. If a researcher, running a particular experiment, does not find the result he was expecting, he may initially think “that’s because I did not collect enough data” and collect some more. He can also think “I used the wrong measure; let me use the other measure I also collected”, or “I need to correct my models for whether the respondent was male or female”, or “let me examine a slightly different set of conditions”. Yet, taking these (extremely common) measures raises the probability that what the researcher finds in his data is due to sheer chance from the conventional 5% to a whopping 60.7%, without the researcher realising it. He will still cheerfully put the all-important asterisk in his table and declare that he has found a counterintuitive insight that disconfirms some important prevailing assumption.
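To see how quickly this adds up, here is a minimal Monte Carlo sketch of my own (an illustration in the spirit of Simmons et al., not their actual analysis), assuming Python with numpy and scipy. There is no true effect anywhere in the simulated data, yet trying a second measure and then “collecting some more data” pushes the false-positive rate well above the nominal 5%:

```python
# Illustrative Monte Carlo: no true effect exists, yet two common "researcher
# degrees of freedom" inflate the false-positive rate well beyond the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_sims, alpha = 5000, 0.05
hits = 0

for _ in range(n_sims):
    # Two groups, no real difference, measured on two correlated outcome variables.
    a1, b1 = rng.normal(size=20), rng.normal(size=20)
    a2, b2 = a1 + rng.normal(scale=0.5, size=20), b1 + rng.normal(scale=0.5, size=20)

    # Degree of freedom 1: report whichever of the two measures "works".
    significant = (ttest_ind(a1, b1).pvalue < alpha) or (ttest_ind(a2, b2).pvalue < alpha)

    if not significant:
        # Degree of freedom 2: "I did not collect enough data" -- add 10 more per group.
        a1 = np.concatenate([a1, rng.normal(size=10)])
        b1 = np.concatenate([b1, rng.normal(size=10)])
        significant = ttest_ind(a1, b1).pvalue < alpha

    hits += significant

print(f"False-positive rate: {hits / n_sims:.1%} (nominal level: {alpha:.0%})")
```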

In management and strategy research we do highly similar things. For instance, we collect data with two or three ideas in mind of what we want to examine and test with them. If the first idea does not lead to the desired result, the researcher moves on to his second idea, and then one can hear a sigh of relief from behind a computer screen that “at least this idea was a good one”. In fact, you might just keep moving on to “the next good idea” till you have hit on a purely coincidental result: 15 bulky guys with pierced nipples.
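A back-of-the-envelope calculation shows how quickly that erodes the 5% guarantee – assuming, purely for illustration, that each “idea” tested on the data has an independent 5% chance of producing a coincidental hit:

```python
# If each "idea" has an independent 5% chance of a purely coincidental significant
# result (an assumption, for illustration), the odds of at least one fluke grow fast.
for k in (1, 2, 3, 5, 10):
    print(f"{k} idea(s) tried -> {1 - 0.95 ** k:.0%} chance of at least one fluke")
```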

Things get really “funny” when one realises that what is considered interesting and publishable differs across fields of Business Studies. For example, in fields like Finance and Economics, academics are likely to be fairly skeptical about whether Corporate Social Responsibility is good for a firm’s financial performance. In the subfield of Management, people are much more receptive to the idea that Corporate Social Responsibility should also benefit a firm in terms of its profitability. Indeed, as shown by a simple yet nifty study by Marc Orlitzky, recently published in Business Ethics Quarterly, articles published on this topic in Management journals report a statistical relationship between the two variables that is about twice as big as the one reported in Economics, Finance, or Accounting journals. Of course, who does the research and where it gets printed should not have any bearing on what the actual relationship is but, apparently, preferences and publication bias do come into the picture with quite some force.

Hence, publication bias vastly dominates any of the statistical biases we get so worked up about, making them pretty much irrelevant. Is this a sad state of affairs? Ehm…. I think yes. Is there an easy solution for it? Ehm… I think no. And that is why we will likely all be suffering from publication bias for quite some time to come.

