The case against interview-based research (and a plea for facts)

I’ll admit it; I am rapidly becoming a skeptic when it comes to interview-based data. And the reason is that people (interviewees) just don’t know their business – although, of course, they think they do. 

For example, in an intriguing research project with my (rather exceptional) PhD student Amandine Ody, we asked lots of people in the Champagne industry whether different Champagne houses paid different prices for a kilogram of their raw material: grapes. The answer was unanimously and unambiguously “no”; everybody pays more or less the same price. But when we looked at the actual data (which are opaque at first sight and pretty hard to get), the price differences turned out to be huge: some houses paid 6 euros for a kilogram, others 8, and yet others 10 or even 12. Thinking it might be the (poor) quality of the data, we obtained a large sample of similar data from a different source: supplier contracts. They showed exactly the same thing. But the people within the business really did not know; they thought everybody was paying about the same price. They were wrong. 

Then Amandine asked them which houses supplied Champagne for supermarket brands (a practice many in the industry thoroughly detest, but it is very difficult to observe who is hiding behind those supermarket labels). They named a number of houses – both types of houses and specific ones – which they “were sure were behind it”. And they were almost invariably completely wrong. Using a clever but painstaking method, Amandine deduced who was really supplying the Champagne to the supermarkets, and she found out it was not the usual suspects. In fact, the houses that did it were exactly the ones no-one suspected, and the houses everyone thought were doing it were as innocent as a newborn baby. They were – again – dead wrong.

And this is not the only context and project where I have had such experiences, i.e. it is not just a French thing. Together with a colleague at University College London – Mihaela Stan – I analyzed the British IVF industry. One prominent practice in this industry is the use of a so-called integrator: one medical professional who is always “the face” towards the patient, i.e. a patient always deals with one and the same doctor or nurse, rather than a different one every time the treatment is in a different stage. All interviewees told us that this really had no substance; it was just a way of comforting the patient. However, when we analyzed the practice’s actual influence – together with my good friend and colleague Phanish Puranam – we quickly discovered that the use of such an integrator had a very real impact on the efficacy of the IVF process; women simply had a substantially higher probability of getting pregnant when such an integrator, who coordinates across the various stages of the IVF cycle, was used. But the interviewees had no clue about the actual effects of the practice.* 

My examples are just anecdotes, but there is also some serious research on the topic. Olav Sorenson and David Waguespack published a study on film distributors in which they showed that these distributors’ beliefs about what would make a film a success were plain wrong (they just made them come true by assigning more resources to the films they believed in). John Mezias and Bill Starbuck published several articles in which they showed that people do not even know basic facts about their own companies, such as the sales of their own business unit, error rates, or quality indicators. More often than not, people were several hundred percent off the mark when asked to report a number.

Of course interviews can sometimes be interesting; you can ask people about their perceptions, why they think they are doing something, and how they think things work. Just don’t make the mistake of believing them. 

Much the same is true for the use of questionnaires. They are often used to ask for basic facts and assessments: e.g. “how big is your company”, “how good are you at practice X”, and so on. Sheer nonsense is the most likely result. People do not know their business – neither the simple facts nor the complex processes that lead to success or failure. Therefore, do yourself (and us) a favor: don’t ask; get the facts.

 

* Although this was not necessarily a “direct effect”; the impact of the practice is more subtle than that.


7 Comments on “The case against interview-based research (and a plea for facts)”

  1. Ferran says:

    Would you say that this could partially be attributed to the mismatch between espoused theory and theory-in-use identified by Argyris and Schön?
    Such evidence could also be an example of the limitations of a firm’s knowledge of its own actions and market position (something that could explain why firms hire external consultants and commission market research to learn what they are actually doing).

  2. Scott Koenig says:

    There is a flaw in your reasoning – or at least you did not address it. For example, in the case of the wine house, the researcher “asked lots of people…” and those “lots of people” all gave similar answers which were proven wrong. I don’t doubt the accuracy of the data, but I do question the methodology.

    Were the people interviewed supposed to know the answer? If not, then why interview them? If they were supposed to know (I assume they were screened for some basic qualifying criteria), then you have an “insight” for the management team – those who are supposed to be in the know, those making decisions for your company, do not really have the information they need to make the right decisions.

    The same reasoning applies to the other examples cited.

    This does not prove interview-based research flawed, but rather that its design, execution, and interpretation need to be re-examined.

  3. Chen says:

    Could we consider this article as another piece of support for the idea of “evidence-based management”?

    While one can get some facts from questionnaires, might the subsequent statistical/quantitative analysis of those data mislead our understanding of the questions and/or phenomena we are trying to explore?

    I would also like to know Professor Vermeulen’s thoughts on works that proceed simply by theoretical reasoning or by developing conceptual frameworks. I guess those would offer a rather different set of ideas compared with this article.

  4. Mr_Yeh says:

    You have just proven the importance of interviews as a data-collection technique. Had you not done them, you wouldn’t have been able to observe the aforementioned difference.

    • I was going to argue along similar lines. You have shown the need for data triangulation, but you have also shown the importance of doing interview-based research. Decisions in these organizations will be made based on what people believe to be true (remember “knowledge is justified true belief”?).
      Whatever the true facts, these beliefs will shape the decisions that people in these organizations make, i.e. how they set goals and allocate resources, which is… hold on… strategy. :-)

  5. srp says:

    Amusingly, the argument here backs up the old-school economists who eschewed any kind of survey or interview data from economic actors while it challenges the heterodox types who criticized this penchant. I once heard Wassily Leontief go on an extended and entertaining rant about how stupid economists were to go around estimating production functions instead of just asking managers and engineers, who, Leontief assured us, could easily provide the coefficients needed for his input-output tables.

    BTW, McKinsey’s oldest method with new clients was supposedly to quiz client managers about their business and then gather data that often contradicted the answers they’d been given.

  6. alex says:

    I would like to know more about the profiles of the people who were interviewed. I would be very surprised if people with commercial responsibilities (i.e. the Sales Director or Key Account Managers dealing with the big retail chains) in these Champagne-producing companies didn’t know which companies supply private-label products. The reasons are that, in general, a) within many industries sales people, and in particular key account managers (KAMs), often have experience at more than one company and tend to know one another; similarly, there is a certain degree of exchange between people working in buying positions and people working in selling or marketing positions; b) more importantly, these companies tend to bid for the same private-label contracts; c) even more importantly, the supermarkets’ buyers usually share this information with key account managers in order to try to get a better deal, i.e. “Company X gave me this price and they got the contract for the next 6-12 months, would you give me a better deal?”. There are of course other reasons.
    However, would any of the KAMs know anything about other critical information, such as company turnover, number of employees, etc.? Not at all, or little at best. They didn’t even know the margins of the contracts with their specific client, not to mention the value of contracts awarded by other retail chains covered by one of their colleagues. Not surprisingly, they didn’t have a clue about the cost of raw materials etc. The same, in reverse, may be said about people working in the purchasing or procurement office when asked about selling prices etc. The only ones who perhaps had a clue about this critical info were the sales director, the managing director (and I am not even sure about that…) and, more likely, low-paid people in the customer and purchasing office, or almost certainly people dealing with internal control processes or in charge of the company’s statistics. I mean, it is more than likely that senior people in key external-facing roles (i.e. sales or purchasing) don’t know more than what directly drives their bonuses.
    As another observation, I would be interested in knowing how the question was framed. If, for example, you only asked a general question about “the average price/kg (or more likely price/tonne)”, then this is a metric that tends to be relatively similar across the big players in many industries dealing with retail chains. If any player gets much better numbers (a difference of over 5%, I would say), this is usually very well known. Of course, for small players the situation may vary. As such, I would not be surprised if they all said “no, we are all given similar prices on average”. Yet these large players usually have a huge portfolio of products (i.e. many types of champagne and/or other wines) produced with comparable raw material, and a lot depends on how costs are allocated in each company. Nevertheless, the chances are that the price/tonne and the cost/tonne vary hugely depending on the product unit. This is definitely not information that many in the company would know; perhaps only a few people would know both the overarching picture and the details. KAMs or purchasing managers would only see a small part of it. What I am saying is that there are ‘natural’ barriers to who knows what. In addition, in some industries, exactly because it is easy for people to get to know this info by moving company and so on, it could also be the case that the top management team creates barriers to the transfer of this information, so that everybody is only allowed to know his/her own bit. This was, at least, the situation in a multinational company I used to work for, where my role at the time was to deal with all our KAMs in one single country. Thus I am not very surprised by these results.
    This is obviously only my experience (i.e. a single case study) and perhaps I am wrong and/or I misunderstood the post. Yet, as far as I know, being interested in this topic and reading a lot on KAM, there is a similar situation in many other industries. Perhaps the Champagne industry is different, yet this supports the point made in another comment that interviews would actually be important to find out this peculiarity.
    Having said that, I think the post has a major point because it stresses that, for some research, knowing the actual numbers is critical despite how difficult they are to get. This surely depends on the research question. Moreover, it is worth noting that this may have an impact on how some scales are developed, because asking managers, for example, to evaluate hard data on a Likert scale in order to produce a measure of some performance may be definitely misleading (e.g. one I can recall from an article is a metric of buying performance where the scale includes items such as “please rate from 1 to 7 how much you agree with the statement ‘our cost base is better than our competitors’”). In these cases, the article’s warnings are more than welcome!

