If you have ever been unlucky enough to attend a large gathering of strategy academics – as I have, many times – it may have struck you that at some point during such a feast (euphemistically called “conference”), the subject matter would turn to talks of “relevance”. It is likely that the speakers were a variety of senior and grey – in multiple ways – interchanged with aspiring Young Turks. A peculiar meeting of minds, where the feeling might have dawned on you that the senior professors were displaying a growing fear of bowing out of the profession (or life in general) without ever having had any impact on the world they spent a lifetime studying, while the young assistant professors showed an endearing naivety believing they were not going to grow up like their academic parents.
And the conclusion of this uncomfortable alliance – under the glazed eyes of some mid-career associate professors, who could no longer and not yet care about relevance – will likely have been that “we need to be better at translating our research for managers”; that is, if we’d just write up our research findings in more accessible language, without elaborating on the research methodology and theoretical terminology, managers would immediately spot the relevance in our research and eagerly suck up its wisdom.
And I think that’s bollocks.
I don’t think it is bollocks that we – academics – should try to write something that practicing managers are eager to read and learn about; I think it is bollocks that all it needs is a bit of translation in layman’s terms and the job is done.
Don’t kid yourself – I am inclined to say – it ain’t that easy. In fact, I think there are three reasons why I never see such a translation exercise work.
1. Underestimation of what it takes
I believe it is an underestimation of the intricacies of the underlying structure of a good managerial article, and the subtleties of how to convincingly write for practicing managers. If you’re an academic, you might remember that in your first year as a PhD student you had the feeling it wasn’t too difficult to write an academic article such as the ones you had been reading for your first course, only to figure out, after a year or two of training, that you had been a bit naïve: you had been (blissfully) unaware of the subtleties of writing for an academic journal; how to structure the arguments; which prior studies to cite and where; which terminology to use and what to avoid; and so on. Well, good managerial articles are no different; if you haven’t developed the skill yet to write one, you likely don’t quite realise what it takes.
2. False assumptions
It also seems that academics, wanting to write their first managerial piece, immediately assume they have to be explicitly prescriptive, and tell managers what to do. And the draft article – invariably based on “the five lessons coming out of my research” – would indeed be fiercely normative. Yet, those messages often seem impractically precise and not simple enough (“take up a central position in a network with structural holes”) or too simple to have any real use (“choose the right location”). You need to capture a busy executive’s attention and interest, giving them the feeling that they have gained a new insight into their own world by reading your work. If that is prescriptive: fine. But often precise advice is precisely wrong.
3. Lack of content
And, of course, more often than not, there is not much worth translating… Because people have been doing their research with solely an academic audience in mind – and the desire to also tell the real world about it only came later – that research has produced no insight relevant for practice. I believe that publishing your research in a good academic journal is a necessary condition for it to be relevant; crappy research – no matter how intriguing its conclusions – can never be considered useful. But rigour alone, unfortunately, is not a sufficient condition for it to be relevant and important in terms of its implications for the world of business.
My co-blogger Russ Coff’s 2010 Strategic Management Journal piece on the coevolution of rent appropriation and capability development used Tony Fadell and the development of the iPod as an example. Here, Tony Fadell talks about constraints, ignoring experts, and embracing self-doubt.
The microfoundations thing is misunderstood and abused in strategy. I try not to even use the word any more – given that it gets thrown around so loosely (it seemed that every talk at the SMS conference a few years ago managed to slip the word in). So, some quick notes on microfoundations and strategy.
The “microfoundations” effort in strategy is focused on the notion that individual-level heterogeneity matters – a lot. This means that who an organization is composed of – who self-selects into it, who leaves – matters, a lot. Extant theories of organization and capability don’t recognize this (for example, see Kogut and Zander, or Nelson and Winter): in fact they argue that mobility is trivial, or (often implicitly) assume that individuals are homogeneous (by focusing directly on various collective constructs). Organizational effects are said to trump individual effects. But I think the nested, individual-level effects trump higher-level ones (individual > firm > industry effects) – some quick supporting points:
First, mobility: Mobility is the great litmus test for where heterogeneity might lie. If a particular individual leaves (in fact, who it is matters – a lot), does that impact organizational outcomes? Individual mobility is needed since analysis of variance is confounded without it: what is attributed to the collective level might in fact be an individual-level effect. A common mistake.
Second, Lotka distribution and variance within: If you look at productive activity in any setting, you’ll quickly note that it is highly skewed. The statistician Alfred Lotka pointed this out in 1926 in terms of scientific productivity, highlighting how in any population a few people are responsible for a radically disproportionate amount of productivity (measured in various ways): articles, citations, etc. In many settings the 80-20 rule (20% of people are responsible for 80% of the output) doesn’t even begin to cover it. So, look in any setting and you’ll find radical, nested heterogeneity – more heterogeneity within the system (this is often assumed away) than across.
Of course, it could be that highly talented individuals ALL select into a particular setting, which would confound attempts to impute where the heterogeneity lies.
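The Lotka-style skew described above is easy to see in simulation. Below is an illustrative sketch (my own, not drawn from any data in the post) that samples individual “productivity” from a heavy-tailed Pareto distribution and measures what share of total output the top 20% of individuals account for; the shape parameter 1.16 is a common choice that roughly reproduces the 80-20 rule in expectation.

```python
import random

# Illustrative only: simulate heavily skewed individual productivity
# and compute the output share of the most productive 20%.
random.seed(42)  # fixed seed so the sketch is reproducible

n = 10_000
# Pareto-distributed "output" per individual (shape ~1.16 gives ~80/20)
outputs = sorted((random.paretovariate(1.16) for _ in range(n)), reverse=True)

top_20pct = outputs[: n // 5]            # the most productive fifth
share = sum(top_20pct) / sum(outputs)    # their share of total output

print(f"Top 20% of individuals produce {share:.0%} of total output")
```

Re-running with different seeds moves the number around – heavy tails make single draws noisy – but the top fifth reliably accounts for well over half of all output, which is the within-population heterogeneity the post is pointing at.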
Now, some common misunderstandings about microfoundations. First, the above does not mean that there aren’t any collective effects (benefits of interaction, learning, routines, structures, context etc etc): of course there are! But the first-order exercise ought to be to specify, as best we can, the nature and capabilities of the individuals involved. After that collective and emergent effects can be properly tested.
Though, note that – and this is important – collective effects can also be of the variety where the whole is less than the sum of the parts. Social psychology gives us lots of examples: social loafing and free-riding, influence (à la Asch), work on nominal versus interactive brainstorming, etc. This is often not recognized, as we tend to ascribe virtues (but not costs) to organizational culture, interaction, community, etc. To complicate matters, individuals of course have much control over how much discretionary effort to put into collective tasks – (ah!) depending on the context.
Second, the microfoundations notion does not mean that we all become psychologists or über-reductionists, where we jump directly to genes or general intelligence (g), or we reduce everything back to the big bang. The aforementioned are quite common caricatures of microfoundations, or points of ridicule unleashed on anyone advocating methodological individualism. Rather, more simply, it is an approach that recognizes that individuals matter – and thus individuals are used as the basis for building models of social interaction and for explaining aggregate and emergent effects at the higher level.
Let me begin by acknowledging that the scientific process is far from perfect. It often heads off in wrong directions for extended periods, operates in fits and starts and isn’t cheap. Yet, science is to the discovery of knowledge what democracy is to the governance of society (Churchill’s dictum: the worst form, except for all the others that have been tried). Presumably, the “biases” Freek and Steve are talking about have been with us since Galileo. Yet, science advances all the same.
I put the word “biases” in scare quotes because I’m wondering how one would distinguish the scenario Freek identifies from one containing objective researchers with strong priors. Objective researchers are allowed to have strong priors, especially those who are experts in a field.
Still, suppose we stipulate that researchers’ emotions or preferences often lead them to hold dogmatic beliefs with respect to some favored, yet false, views (i.e., they completely ignore new, contradictory information). If the following conditions hold, then a field that follows the scientific method will eventually discard the false views: 1) not everyone in the field believes the false view; and, 2) it is possible to collect facts that refute the false views.
The reason for this is that scientific institutions provide enormous incentives to the “young Turks” of a field to overturn conventional wisdom. It’s true that there is also strong pressure on young scientists to conform to the CW. And one may well be able to enjoy a quiet career as a scientist by going whichever way the wind blows. But, you’ll never get famous that way. The history of science is loaded with examples of now famous scientists who are famous exactly because they broke with the CW.
Nor do I agree with Steve that some form of external refereeing is necessary for the system to work. True, it might take a generation for the up-and-comers to pry the CW from the cold, dead fingers of their senior colleagues but, eventually, that does happen. No external audience is required.
The problem with strategy (and most areas of social science) is that many of the objects in our theories are difficult, if not impossible, to measure. When this is true, the scientific process breaks down because item (2) above does not come into play. So, for example, we have a theory known as “Porter’s 5 Forces” being taught without refinement from its original form for over 30 years. Indeed, in strategy, scholars are able to stake out a wide variety of sloppily constructed, ambiguous, and logically suspect theories for extended periods precisely because the lack of key data makes them impossible to refute.
Start with Why, by Simon Sinek

Came across this TED presentation by Simon Sinek. Normally, I skip the practitioner-oriented stuff, but this sounded sufficiently coherent and interesting to cause me to buy the book.
Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings – i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
My earlier post – “can’t believe it” – triggered some polarised comments (and further denials), and raised the question of the extent to which this behaviour can also be observed among academics studying strategy. And, regarding the latter, I think: yes.
The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we – largely without realising it – are much more prone to notice things that confirm our prior beliefs. Things that go counter to them often escape our attention.
Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but we still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them, and therefore see people start to question the research itself vehemently (if not aggressively and emotionally).
It becomes yet more worrying to me – on a personal level – if even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a research project showing no pay differences or a study showing that women are underpaid relative to men?
It’s only human and – slightly unfortunately – researchers are also human. And researchers are also reviewers and gate-keepers of the papers of other academics that are submitted for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.
And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal as compared to a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about), and perhaps vice versa.
No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving when it comes to the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, research may end up being biased and misleading.
So, I have been running a little experiment on Twitter. Oh well, it doesn’t really deserve the term “experiment” – at least in an academic vocabulary – because there certainly are no treatment effects or control groups. It does deserve the term “little” though, because there are only four observations.
My experiment was to post a few recent findings from academic research that some might find mildly controversial or – as it turns out – offending. These four hair-raising findings were 1) selling junk food in schools does not lead to increased obesity, 2) family-friendly workplace practices do not improve firm performance (although they do not decrease it either), 3) girls take longer to heal from concussions, 4) firms headed up by CEOs with broader faces show higher profitability.
Only mildly controversial I’d say, and only to some. I was just curious to see what reactions it would trigger, because I have noticed in the past that people seem inclined to dismiss academic evidence if they don’t like the results. If the results are in line with their own beliefs and preconceptions, a study’s methods and validity are much less likely to be called stupid.
Selling junk food in schools does not lead to increased obesity is the finding of a very careful study by professors Jennifer Van Hook and Claire Altman. It provides strong evidence that selling junk food in schools does not lead to more fat kids. One can then speculate why this is – and their explanation that children’s food patterns and dietary preferences get established well before adolescence may be a plausible one – but you can’t deny their facts. Yet, it did lead to “clever” reactions such as “says more about academic research than junk food, I fear…”, by people who clearly hadn’t actually read the study.
Family-friendly workplace practices do not improve firm performance is another finding that is not welcomed by all. This large and competent study, by professors Nick Bloom, Toby Kretschmer and John van Reenen, was actually read by some, be it clearly without a proper understanding of its methodology (which, indeed, it being an academic paper, is hard to fully appreciate without proper research methodology training). It led to reactions that the study was “in fact, wrong”, made “no sense”, or even that it really showed the opposite; these silly professors just didn’t realise it.
Girls take longer to heal from concussions is the empirical fact established by Professor Tracey Covassin and colleagues. Of course there is no denying that girls and boys are physiologically different (one cursory look at my sister in the bathtub already taught me that at an early age), but the aforementioned finding still led to swift denials such as “speculation”!
That firms headed up by CEOs with broader faces achieve higher profitability – a careful (and, in my view, quite intriguing) empirical find by my colleague Margaret Ormiston and colleagues – triggered reactions such as “sometimes a study tells you more about the interests of the researcher, than about the object of the study” and “total nonsense”.
So I have to conclude from my little (academically invalid) mini-experiment that some people are inclined to dismiss results from research if they do not like them – and even without reading the research or without the skills to properly understand it. In contrast, other, nicer findings that I had posted in the past, which people did want to believe, never led to outcries of bad methodology and mentally retarded academics and, in fact, were often eagerly retweeted.
We all look for confirmation of our pre-existing beliefs and don’t like it much if these comfortable convictions are challenged. I have little doubt that this also heavily influences the type of research that companies conduct, condone, publish and pay attention to. Even if the findings are nicer than we preconceived (e.g. the availability of junk food does not make kids consume more of it), we prefer to stick to our old beliefs. And I guess that’s simply human; people’s convictions don’t change easily.
In the field of strategy, we always make a big thing out of differentiation: we tell firms that they have to do something different in the market place, and offer customers a unique value proposition. Ideas around product differentiation, value innovation, and whole Blue Oceans are devoted to it. But we also can’t deny that in many industries – if not most industries – firms more or less do the same thing.
Whether you take supermarkets, investment banks, airlines, or auditors, what you get as a customer is highly similar across firms.
- Ability to execute: What may be the case is that, despite doing pretty much the same thing and following the same strategy, there can be substantial differences between firms in terms of their profitability. The reason can lie in execution: some firms have obtained capabilities that enable them to implement and hence profit from the strategy better than others. For example, Sainsbury’s supermarkets really aren’t all that different from Tesco’s, offering the same products at pretty much the same price in pretty much the same shape and fashion, in nearly identical shops with similarly tempting routes and a till at the end. But for many years, Tesco had a superior ability to organise the logistics and processes behind its supermarkets, raking in substantially higher profits in the process.
- Shake-out: As a consequence of such capability differences – although it can be a surprisingly slow process – due to their homogeneous goods, we may see firms start to compete on price, margins decline to zero, and the least efficient firms are pushed out of the market. And one can hear a sigh of relief amongst economists: “our theory works” (not that we particularly care about the world of practice, let alone be inclined to adapt our theory to it, but it is more comforting this way).
- A surprisingly common anomaly? But it also can’t be denied that there are industries in which firms offer pretty much the same thing, have highly similar capabilities, are not any different in their execution, and still maintain ridiculously high margins for a sustained period of time. And why is that? For example, as a customer, when you hire one of the Big Four accounting firms (PwC, Ernst & Young, KPMG, Deloitte), you really get the same stuff. They are organised pretty much the same way, they have the same type of people and cultures, and have highly similar processes in place. Yet, they also (still) make buckets of money, repeatedly turning and churning their partners into millionaires.
“But such markets shouldn’t exist!” we might cry out in despair. But they do. Even the Big Four themselves will admit – be it only in covert private conversations carefully shielding their mouths with their hands – that they are really not that different. And quite a few industries are like that. Is it a conspiracy, illegal collusion, or a business X file?
None of the above I am sure, or perhaps a bit of all of them… For one, industry norms seem to play a big role in much of it: unwritten (sometimes even unconscious), collective moral codes, sometimes even crossing the globe, in terms of how to behave and what to do when you want to be in this profession. Which includes the minimum margin to make on a surprisingly undifferentiated service.
Try to guess the context for this piece of writing. Is it part of a scholarly study on the history of convention centers? A tourist guidebook? Is it the catalogue to a museum display on convention-center architecture?
“In order to attract growing numbers of conventions in the second half of the twentieth century, cities incorporated convention center construction within urban renewal and redevelopment schemes, usually at the edge of core urban areas where space would be available for construction of large buildings with contiguous, flat-floor space.”
I read about Microsoft’s acquisition of patents from AOL with some interest. They note that this reflects a price of $1.3M/patent and compare it to other recent escalations in the IP arms race. Analysts estimate that Google only paid $400k/patent in the $12B acquisition of Motorola Mobility. Nortel patents recently went for about $750k each. Of course, given the wide variance in the value of a patent, clearly the average is not particularly informative — it treats all of these patents as homogeneous which is certainly not the case. Nevertheless, the escalating prices do suggest that the arms race is unlikely to create much value for the firms (and certainly not for consumers).
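The per-patent figures above are simple averages: deal price divided by patent count. As a back-of-the-envelope check, here is a sketch of that arithmetic. The deal sizes and patent counts used below are rough public figures I am supplying as assumptions, not numbers taken from the post itself (the post’s Google/Motorola figure differs because analysts attribute only part of the purchase price to the patents).

```python
# Implied average price per patent for three deals.
# Figures are rough public estimates (assumptions), not exact.
deals = {
    "Microsoft-AOL":   (1.056e9, 800),     # ~$1.056B for ~800 patents
    "Nortel auction":  (4.5e9, 6_000),     # ~$4.5B for ~6,000 patents
    "Google-Motorola": (12.5e9, 17_000),   # ~$12.5B for the whole firm, ~17,000 patents
}

for name, (price, n_patents) in deals.items():
    per_patent = price / n_patents
    print(f"{name}: ${per_patent / 1e6:.2f}M per patent")
```

As the post notes, treating every patent as worth the average is exactly the homogeneity assumption that the wide variance in patent value undermines; the averages only serve to show the escalation across deals.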
However, buried in the stories is another rather interesting observation – some of the key players earn more from selling rivals’ handsets than their own.
I always enjoy witnessing a good debate. And I mean the type of debate where one person is given a thesis to defend, while the other person speaks in favour of the anti-thesis. Sometimes – when smart people really get into it – seeing two debaters line up the arguments and create the strongest possible defence can really clarify the pros and cons in my mind and hence make me understand the issue better.
For example – be it one in a written format – recently my good friend and colleague at the London Business School, Costas Markides, was asked by Business Week to debate the thesis that “happy workers will produce more and do their jobs better”. Harvard’s Teresa Amabile and Steven Kramer had the (relatively easy) task of defending the “pro”. I say relatively easy, because the thesis seems intuitively appealing, it is what we’d all like to believe, and they have actually done ample research on the topic.
My poor London Business School colleague was given the hapless task to defend the “con”: “no, happy workers don’t do any better”. Hapless indeed.
In fact, in spite of receiving some hate mail in the process, I think he did a rather good job. I am giving him the assessment “good” because indeed he made me think. He argues that having happy, smiley employees all abound might not necessarily be a good sign, because it might be a signal that something is wrong in your organisation, and you’re perhaps not making the tough but necessary choices.
As said, it made me think, and that can’t be bad. Might we not be dealing with a reversal of cause and effect here? Meaning: well-managed companies will get happy employees, but that does not mean that choosing to make your employees happy as a goal in and of itself will get you a better organisation? At least, it is worth thinking about.
Although it might seem to you a natural thing to have in an academic institution – a good debate – it is actually not easy to organise one in business academia. Most people are simply reluctant to do it – as I found out organising our yearly Ghoshal Conference at the London Business School – and perhaps they are right, because even fewer people are any good at it.
I guess that is because, to a professor, it feels unnatural to adopt and defend just one side of the coin, because we are trained to be nuanced about stuff and examine and see all sides of the argument. It is also true that (the more naïve part of) the audience will start to associate you with that side of the argument, “as if you really meant it”. Many of the comments Costas received from the public were of that nature, i.e. “he is that moronic guy who thinks you should make your employees unhappy”. Which of course is not what he meant at all. Nor was it the purpose of the debate.
Yet, I also think it is difficult to find people willing to debate a business issue because academics are simply afraid to have an opinion. We are not only trained to examine and see all sides of an argument, we are also trained to not believe in something – let alone argue in favour of it – until there is research that produced supportive evidence for it. In fact, if in an academic article you would ever suggest the existence of a certain relationship without presenting evidence, you’d be in for a good bellowing and a firm rejection letter. And perhaps rightly so, because providing evidence and thus real understanding is what research is about.
But, at some point, you also have to take a stand. As a paediatric neurologist once told me, “what I do is part art, part science”. What he meant is that he knew all the research on all medications and treatments, but at the end of the day every patient is unique and he would have to make a judgement call on what exact treatment to prescribe. And doing that requires an opinion.
You don’t hear much opinion coming from the ivory tower in business academia. Which means that the average business school professor does not receive much hate mail. It also means he doesn’t have much of an audience outside of the ivory tower.
I am a long-standing fan of the Ig Nobel awards. The Ig Nobel awards are an initiative by the magazine AIR (Annals of Improbable Research) and are handed out on a yearly basis – often by real Nobel Prize winners – to people whose research “makes people laugh and then think” (although its motto used to be to “honor people whose achievements cannot or should not be reproduced” – but I guess the organisers had to first experience the “then think” bit themselves).
With a few exceptions they are handed out for real research, done by academics, and published in scientific journals. Here are some of my old time favourites:
- BIOLOGY 2002: Bubier, Pexton, Bowers, and Deeming, “Courtship behaviour of ostriches towards humans under farming conditions in Britain”, British Poultry Science 39(4)
- INTERDISCIPLINARY RESEARCH 2002: Karl Kruszelnicki (University of Sydney), for performing a comprehensive survey of human belly button lint – who gets it, when, what color, and how much
- MATHEMATICS 2002: Sreekumar and Nirmalan (Kerala Agricultural University), “Estimation of the total surface area in Indian elephants”, Veterinary Research Communications 14(1)
- TECHNOLOGY 2001: jointly to Keogh (Hawthorn), for patenting the wheel (in 2001), and the Australian Patent Office, for granting him the patent
- PEACE 2000: the British Royal Navy, for ordering its sailors to stop using live cannon shells and to instead just shout “Bang!”
- LITERATURE 1998: Dr. Mara Sidoli (Washington), for the report “Farting as a defence against unspeakable dread”, Journal of Analytical Psychology 41(2)
To the best of my knowledge, there is (only) one individual who has not only won an Ig Nobel Award, but also a Nobel Prize. That person is Andre Geim. Geim – who is now at the University of Manchester – long held the habit of dedicating a fairly substantial proportion of his time to just mucking about in his lab, trying to do “cool stuff”. In one such session, together with his doctoral student Konstantin Novoselov, he used a piece of ordinary sticky tape (which allegedly they found in a bin) to peel off a very thin layer of graphite, taken from a pencil. They managed to make the layer of carbon one atom thick, inventing the material “graphene”.
In another session, together with Michael Berry from the University of Bristol, he experimented with the force of magnetism. Using a magnetized metal slab and a current-carrying coil of wire as an electromagnet, they tried to create a magnetic force that exactly balanced gravity, to make various objects “float”. Eventually, they settled on a frog – which, like humans, consists mostly of water – and indeed managed to make it levitate.
The one project got Geim the Ig Nobel; the other one got him the Nobel Prize.
“Mucking about” was the foundation of both achievements. The vast majority of such experiments don’t go anywhere; some lead to an Ig Nobel and make people laugh; others result in a Nobel Prize. Many of man’s great discoveries – in technology, medicine or art – have been achieved by mucking about. And many great companies were founded by mucking about: in a garage (Apple), a dorm room (Facebook), or a kitchen and a room above a bar (Xerox).
Unfortunately, in strategy research we don’t muck about much. In fact, people are actively discouraged from doing so. During pretty much any doctoral consortium, junior faculty meeting, or annual faculty review, a young academic in the field of Strategic Management is told – with ample insistence – to focus, to figure out in which subfield he or she wants to be known, “who the five people are that are going to read your paper” (I heard this one in a doctoral consortium myself), and “who your letter writers are going to be for tenure” (I heard this one in countless meetings). The field of Strategy – or any other field within a business school, for that matter – has no time for, and no tolerance of, mucking about. Disdain and a weary shaking of the head are the fate of those who try, stepping off the proven path in an attempt to do something original with an uncertain outcome: “he is never going to make tenure, that’s for sure”.
And perhaps that is also why we don’t have any Nobel Prizes.
This is what happens when the b-school market has excess capacity. ROI for students turns negative, enrolment declines and, at some point, the land the school is built upon literally becomes more valuable in some alternative use.
One of my minor neuroses is an aversion to propagating errors of fact or logic. Indeed, I have to apply teeth to tongue at times when witnessing others propagate error. Managing this quirk productively is an important part of pedagogy, as experienced MBA instructors will immediately recognize. (Note that you will have many more opportunities to correct errors than to answer questions, because part of not understanding something is often not realizing that you don’t.)
Knowing when to pull the trigger on a correction is the most subtle aspect. The first-best solution is another student immediately chiming in with an on-point critique, but that happens rarely. A lightly guided discussion that eventually corrects the error is next best, but there are practical challenges here as well, since a) limited class time may be available to deal with the topic, b) it can become aggravating for the students to play “guess what the professor is thinking,” and c) the longer the uncorrected statement lies there, the more likely it is that students will internalize the error and repeatedly spout it back in future classes, on exams, etc.
Assuming one has let the error go uncorrected as long as seems prudent and decided to directly intervene, it’s still often a challenge to a) precisely recognize the nature of an error and b) quickly come up with a concise, memorable, and understandable correction that will persistently displace the erroneous idea from the audience’s minds. Of course, experience helps, because errors tend to fall into repetitive patterns, allowing you to build up an internal database of diagnoses and appropriate responses. Here are some classic sallies with proposed responses below the fold. Suggested improvements to these responses (as well as additional examples of “favorite errors”) are welcomed in the comments.
1. “The company has a cost advantage because it makes more products and there are economies of scope in this industry.”
2. “The company has a cost advantage because it’s more vertically integrated (and Porter says that reduces costs).”
3. “The company has a cost advantage because it outsources more activities.”
4. “There are strong entry barriers because small companies can’t afford to pay the capital costs to operate in the industry.”
5. “The company needs to give better deals to its loyal customers.”
6. “The big growth in this industry comes from this new segment X, so the company should focus its resources on penetrating X.”
“So you want to start a company. You’ve finished your undergraduate degree and you’re peering into the haze of your future. Would it be better to continue on to an MBA or do an advanced degree in a nerdy pursuit like engineering or mathematics? Sure, tech skills are hugely in demand and there are a few high-profile nerd success stories, but how often do pencil-necked geeks really succeed in business? Aren’t polished, suited and suave MBA-types more common at the top? Not according to a recent white paper from Identified, tellingly entitled “Revenge of the Nerds.”
Interested? Yes, it does sound intriguing, doesn’t it? It is the start of an article, written by a journalist, based on a report by a company called “Identified”. In the report, you can find that “Identified is the largest database of professional information on Facebook. Our database includes over 50 million Facebook users and over 1.2 billion data points on professionals’ work history, education and demographic data”.
In the report, based on the analysis of data obtained from Facebook, under the header “the best degree for start-up success”, Identified claims to present some “definitive conclusions” about “whether an MBA is worth the investment and if it really gets you to the top of the corporate food chain”. Let me no longer hold you in suspense (although I think by now you see this one coming from a mile or two away, like a Harry and Sally romance): their definitive conclusion is “that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day”.
So I have read the report…
[insert deep sigh]
and – how shall I put it – I have a few doubts… ( = polite English euphemism). I think there is no way (on earth) that the authors can reach this conclusion based on the data that they’ve got. Allow me to explain:
Although Identified has “assembled a world class team of 15 engineers and data scientists to analyse this vast database and identify interesting trends, patterns and correlations” I am not entirely sure that they are not jumping to a few unwarranted conclusions. ( = polite English euphemism)
So, when they dig up from Facebook all the profiles of anyone listed as “CEO” or “founder”, they find that about ¾ are engineers and a mere ¼ are MBAs. (Actually, they don’t even find that, but let me not get distracted here). I have no quibbles with that; I am sure they do find what they find; after all, they do have “a world class team of 15 engineers and data scientists”, and a fact is a fact. What I have more quibbles with is how you get from that to the conclusion that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day.
Perhaps that seems an obvious and legitimate conclusion to you: more CEOs have an engineering degree than an MBA, so surely getting an engineering degree makes you more likely to become a CEO? But no, that is where it goes wrong; you cannot draw this conclusion from those data. Perhaps “a world class team of 15 engineers and data scientists [able] to analyse this vast database and identify interesting trends, patterns and correlations” is superbly able at digging up the data for you but, apparently, less skilled at drawing justifiable conclusions. (I am tempted to suggest that, for this, they would have been better off hiring an MBA, but will fiercely resist that temptation!)
The problem is what we call “unobserved heterogeneity”, coupled with some “selection bias”, finished off with some “bollocks” (one of which is not a generally accepted statistical term) – and in this case there is lots of it. For example – to start with a simple one – perhaps there are simply a lot more engineers trying to start a company than MBAs. If there are 20 engineers trying to start a company and 9 of them succeed, while there are 5 MBAs trying it and 3 of them succeed, can you really conclude that an engineering degree is better for start-up success than an MBA?
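To see the selection problem in numbers, here is a minimal sketch in Python – using only the hypothetical figures from the example above, not Identified’s actual data. It shows how a snapshot that observes only successful founders (as a trawl of Facebook profiles does) can point one way while the underlying success rates point the other:

```python
# Hypothetical numbers from the example: how many of each group
# *tried* to start a company, and how many succeeded.
engineers_trying, engineers_succeeding = 20, 9
mbas_trying, mbas_succeeding = 5, 3

# What a founder-only snapshot observes: just the successes.
total_founders = engineers_succeeding + mbas_succeeding
share_engineers = engineers_succeeding / total_founders  # 9/12 = 75%
share_mbas = mbas_succeeding / total_founders            # 3/12 = 25%

# What the snapshot cannot see: the success rate per degree.
rate_engineers = engineers_succeeding / engineers_trying  # 9/20 = 45%
rate_mbas = mbas_succeeding / mbas_trying                 # 3/5  = 60%

print(f"Founder shares: {share_engineers:.0%} engineers vs {share_mbas:.0%} MBAs")
print(f"Success rates:  {rate_engineers:.0%} engineers vs {rate_mbas:.0%} MBAs")
```

The founder shares (75% vs 25%) mirror the roughly ¾-vs-¼ split Identified reports, yet in this toy world the MBAs actually succeed at a higher rate (60% vs 45%) – the snapshot conditions on success and misses the denominators entirely.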
But, you may object, why would there be more engineers trying to start a business? Alright then, since you insist: suppose that out of 10 engineers 9 succeed and out of 10 MBAs only 3 do, but the 9 head $100,000 businesses while the 3 head $100 million ones. Still so sure that an engineering degree is more useful to “get you to the top of the corporate food chain”? And what if the MBA companies have all been in existence for 15 years while none of the engineering start-ups ever makes it past year 2?
And these are of course only very crude examples. There are likely more subtle processes going on as well. For instance, the same qualities that might make someone choose to do an engineering degree could also prompt him or her to start a company; however, this same person might have been better off (in terms of being able to make the start-up a success) had s/he done an MBA. And if you buy none of the above (because you are an engineer, or about to be engaged to one), what about the following: people who choose to do an engineering degree are inherently smarter and more able than MBAs, and hence they start more, and more successful, companies. However, that still leaves wide open the possibility that such a very smart and able person would have been even more successful had s/he chosen to do an MBA before venturing.
What can you conclude from their findings?
I could go on for a while (and frankly I will) but I realise that none of my aforementioned scenarios will be exactly the right one; the point is that there may very well be a bit of several of them going on. You cannot compare the ventures started by engineers with the ventures headed by MBAs, you can’t compare the two sets of people, you can’t conclude that engineers are more successful at founding companies, and you certainly cannot conclude that getting an engineering degree makes you more likely to succeed in starting a business. So, what can you conclude from the finding that more CEOs/founders have a degree in engineering than an MBA? Well… precisely that: that more CEOs/founders have a degree in engineering than an MBA. And, I am sorry, not much else.
Real research (into complex questions such as “what degree is most likely to lead to start-up success?”) is more complex. And so, likely, will be the answer. For some types of businesses an MBA might be better, and for others an engineering degree. Some types of people might be helped more by an MBA, where other types are better off with an engineering degree. There is nothing wrong with deriving some interesting statistics from a database, but you have to be modest and honest about the conclusions you attach to them. It may sound more interesting to claim that you have found a definitive conclusion about which degree leads to start-up success – and it will certainly be more eagerly repeated by journalists and in subsequent tweets (as happened in this case) – but I am afraid that does not make it so.
It appears that selling “competitive” foods – often called junk foods – in schools has little market-expanding effect, at least if we use childhood obesity as a measure. The authors of this study appear to have used pretty robust methods and found no link between attending a middle school where such foods are sold and obesity. So firms’ efforts to penetrate these schools probably represent zero-sum market-share battles among brands, not a means of stimulating overall long-term consumption of these products.
Bonus question: If food firms make competitive bids to schools in order to get exclusive access (and I have no idea whether that is true–I’m analogizing from the many college campus exclusive soft-drink deals), then how would they feel about regulations banning them from school premises? Hint: Think about the impact of taking cigarette advertising off of TV on cigarette firm profits.
Over the weekend, an (anonymized) interview was published in a Dutch national newspaper with the three “whistle blowers” who exposed the enormous fraud of Professor Diederik Stapel. Stapel had gained stardom status in the field of social psychology but, simply speaking, had been making up all his data all the time. There are two things that struck me:
First, in a previous post about the fraud – based on a flurry of newspaper articles and the interim report put together by a committee examining the fraud – I wrote that it was eventually his clumsiness in faking the data that got him caught. That general picture certainly remains – he wasn’t very good at faking data; I think I could easily have done a better job (although I have never even tried anything like that, honest!) – but it wasn’t as clumsy as the newspapers sometimes made it out to be.
Specifically, I wrote: “eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments, gave those to his various PhD students to analyze, who then in disbelief slaving away in their adjacent cubicles discovered that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap”. Now it seems the “substantial overlap” was merely part of one column of data. Plus, there were various other things that got him caught.
I don’t beat myself too hard over the head with my keyboard about repeating this misrepresentation by the newspapers (although I have given myself a small slap on the wrist – after having received a verbal one from one of the whistlers) because my piece focused on the “why did he do it?” rather than the “how did he get caught”, but it does show that we have to give the three whistle blowers (quite) a bit more credit than I – and others – originally thought.
The second point that caught my attention is that, since the fraud was exposed, various people have come out admitting that they had “had suspicions all the time”. You could say “yeah right”, but there do appear to be quite a few signs that various people had indeed been harbouring doubts for a longer time. For instance, I have read an interview with a former colleague of Stapel’s at Tilburg University credibly admitting to this, I have spoken directly to people who said there had been rumors for longer, and the article with the whistle blowers suggests that even Stapel’s faculty dean might not have been entirely dumbfounded that it had all been too good to be true after all… All the people who admit to having had doubts in private state that they did not feel comfortable raising the issue while everyone else just seemed to applaud Stapel and his Science publications.
This reminded me of the Abilene Paradox, first described by Professor Jerry Harvey of George Washington University. He described a leisure trip that he, his wife, and his parents made in Texas in July, in his parents’ un-airconditioned old Buick, to a town called Abilene. It was a trip they had all agreed to – or at least not disagreed with – but, as it later turned out, none of them had wanted to go on. “Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnace-like temperature through a cloud-like dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go.”
The Abilene Paradox describes the situation where everyone goes along with something, mistakenly assuming that other people’s silence implies that they agree. And the (erroneous) feeling of being the only one who disagrees makes each person shut up as well – all the way to Abilene.
People had suspicions about Stapel’s “too good to be true” research record and findings but did not dare to speak up while no-one else did.
It seems there are two things that eventually made the three whistle blowers speak up and expose Stapel: Friendship and alcohol.
They had struck up a friendship and one night, fuelled by alcohol, raised their suspicions to one another. And, crucially, they decided to do something about it. Perhaps there are some lessons in this for the world of business. For example, Jim Westphal, who has done extensive, thorough research on boards of directors, showed that boards often suffer from the Abilene Paradox, for instance when confronted with their company’s new strategy. Yet Jim and colleagues also showed that friendship ties within top management teams might not be such a bad thing. We are often suspicious of social ties between boards and top managers, fearful that they might cloud judgment and make directors reluctant to discipline a CEO. But such friendship ties – whether fuelled by alcohol or not – might also help lower the barriers to resolving the Abilene Paradox. So perhaps we should make friendship and alcohol mandatory – religion permitting – both during board meetings and academic gatherings. It would undoubtedly help make them more tolerable as well.
10. Strategies in the new European barter economy.
9. Tom Friedman: Why bubbles are far-sighted industrial policy when undertaken by bureaucrats.
8. Radical-disruptive-agile-entrepreneurial strategy implications of thought-controlled smartphones.
7. The Rose Bowl as case-discussion classroom: UCLA’s innovative response to online MBA competition.
6. Sorry we got WordPress shut down with that link to one of Russ’s videos—#!%& SOPA.
5. Harvard Business School replaces Ohio as the Cradle of Presidents.
4. Cuneiform Case Studies–archaeologists discover Babylonian analysis of the five forces. (“Gilgamesh had a decision to make…”)
3. “Sustainability” voted official cant word of the decade by the Academy of BS.
2. Facebook’s decision to display users’ Social Security numbers–bid for ad revenue or is Zuckerberg now just screwing with us for fun?
1. New SEC and FASB regulations on precise use of strategy and business buzzwords create “analyst apocalypse” and “consulting catastrophe.”
A shot across the bow in the New Criterion by James Panero:
For those of us who watch from the sidelines, the Occupy Wall Street movement may appear sympathetic to our own concerns. At the very least, it seems to offer a safety valve for others to vent their frustrations. Yet the history of idealistic occupations suggests this will also end poorly, with a polarized public and the movement collapsing in ruin.
Like the Commune, Occupy Wall Street is about the perfection of itself rather than the reform of others. This is a reason that the Occupationists differ from other protesters who go home at the end of a long march. For the Occupation, the tents do not come down until perfection is attained or destroyed.
The heart of OWS is therefore in its internal mechanics, especially its strictly “non-hierarchical” code of conduct. The manifestations of this code might appear foolish, but they emerge from a formula meant to challenge if not supplant our current system of government with the Occupation’s own forms of egalitarian command and control, a formula that grOWS ever more doctrinaire and insular for those who practice it. Many of these devices are still being developed in the “General Assemblies” of Occupationist cells. OWS already employs several to limit open speech, especially when the purity of the Occupation is confronted by the impurities of our existing laws and precedent.
From my perspective, the reason the “free/open source” movement succeeded is that they stopped protesting and started coding – i.e. they focused on developing solutions. Richard Stallman created two brilliant hacks: the GPL, an IP license that allowed sharing, and GCC, the compiler. Solutions, not protest!