I recently discovered I am an academic fraud. Now, I am sure there must be people out there whose immediate response is “of course you are”, “knew it” or “I am not surprised”, but I was.
Admittedly, what amounts to fraud when publishing as an academic isn’t always entirely clear to me – which, to some, will probably suffice to consider me suspect already (if not guilty-till-proven-innocent). I do get the extremes: if one writes up a truly new academic study giving the full account of the research underlying it, it ain’t fraud. If you make up the data – emulating the now infamous Diederik Stapel – it is. But for the cases in between, I am not always sure… Let me give you a few potential examples.
- Earlier this month, at the Editorial Board Meeting of the Academy of Management Journal, the editor reported that the journal would now start screening every submitted article for plagiarism. The software detects whether parts of the text have been copied from earlier publications, including articles by the same author (in cases of self-plagiarism). After this, a fellow board member asked, “can we access the same software to pre-screen our own articles before submitting them?” There wasn’t a murmur or hint of discontent in the room following this question, but I found it strange and unsettling. If you copy a piece of text, then pre-screen it and the software tells you you would be found out, you rewrite it a bit, plugging in a few synonyms here and there, and then it is ok and no longer considered fraud and plagiarism?!
- Geert Hofstede, one of the most highly cited social scientists ever (citations are considered a signal of “impact” in our academic world, and I seem to remember him once telling me that he had more citations than Karl Marx…), became famous for developing dimensions of national cultural differences. He published these dimensions left-right-and-centre – in academic journals, magazines and books – which greatly contributed to their and his exposure. Would he nowadays be covered in tar and feathers and chased out of the ivory tower for self-plagiarism?
- Situation A: PhD student A copies a paragraph leading up to one of his hypotheses from a working paper by someone else he found on the web, without citation. Situation B: PhD student B copies a summary of a previously published academic article from a third, published paper that summarised the same article. Situation C: likewise, but with a citation to that third article, but no quotation marks. Situation D: likewise, but with citation and quotation marks. Who should get kicked out of the programme? At London Business School we have already dealt with situations A and B (the students were chased out), and D of course, but I am left wondering what we’d do in situation C.
- An academic – and an obvious fan of the Matthew Effect – buys 20,000 followers on Twitter. Yes, if you didn’t know, buying (fake) Twitter followers is possible and easy. In fact, yesterday, I learned it is as cheap as chips. Yesterday, the Sunday Times covered the tale of an aspiring English celebrity who bought about 20,000 followers on Twitter to boost her profile. It just cost her a few hundred pounds/dollars. And, in fact, it sort of worked; she did raise her profile. But when she was found out – which isn’t actually that easy – she was ridiculed and quickly chased back to the dubious and crowded ranks of the British B-celebrities. But what would we do? How would we react to an academic buying 20,000 “followers”? Tar and feathers, or applause for bringing the Matthew Effect into practice?
I am – apparently – a shameless self-plagiarising fraud because I sometimes get approached by business magazines who say “we read your blog post X and would like to republish it in our magazine”. And if they’re half decent (even by business magazine standards), I tend to say “yes”… In fact, I sometimes make the suggestion myself; when some magazine asks me “would you like to write an article on X for our wonderful magazine?” I usually say “no (way), but chapter X from my book would suit you well. Feel free to republish that”. Some acknowledge it was previously published; some don’t.
And, frankly, I don’t really care, and I will probably do it again. If it is my work and my copyright, the magazine is fully aware of it, and it doesn’t harm the reader (they will know if they’ve seen it before, and otherwise they probably didn’t, or they might suffer from an enviable dose of business magazine amnesia), I won’t fear or dodge the tar and feathers. In fact, who knows, you may have read this very same post before!
When Laura d’Andrea Tyson was the Dean of London Business School – some years ago – she put together a committee to examine and reformulate the School’s strategy. Several professors sat on that committee. When I once asked her, having a drink at her home, why none of them were Strategy professors, she looked at me, baffled, for about five seconds. Eventually, she stammered, “yes, perhaps we could consider that in the future….”.
It was clear to me, from her stunned silence (and she wasn’t easily lost for words), that she had never even considered the thought before.
I, in contrast, thought it wasn’t such an alien idea; putting some strategy professors on the School’s strategy-making committee. We had – and still have – people in our Strategy department (e.g. Costas Markides, Sumantra Ghoshal) who not only had dozens of top academic publications behind their names but who also had an eager ear amongst strategy practitioners, through their Harvard Business Review publications and hundreds of thousands of business books sold (not to mention their fairly astronomical consulting fees).
Today, our current Dean – Sir Andrew Likierman – is working with a group of people on a huge strategic growth decision for the School, namely the acquisition of a nearby building from the local government that would increase our capacity overnight by about 70 percent. Once more, strategy professors have no closer role in the process than others; their voice is as lost in the quagmire as anyone else’s.
If Sir Andrew had been an executive MBA student in my elective (“Strategies for Growth”) writing an essay about the situation, I would ask him for a justification of the need for growth given the characteristics of the market; I’d ask him about the various options for growth (geographic expansion, e.g. a campus abroad; related diversification, e.g. on-line space, etc.), and how an analysis of the organisation’s resources and capabilities is linked to these various options, and so on. But a systematic analysis based on what we teach in our own classrooms and publish in our books and journals has, it seems, not even been considered.
And I genuinely wonder why that is. Because it is not only strategy professors and it is not only deans. Whenever the topic of the School’s brand name comes up, no-one seems inclined to pay more attention to our Marketing professors (some of whom are heavyweights in the field of branding) than to the layman’s remarks of Economics or Strategy folk. When the School’s culture and values are being assessed, Organizational Behaviour professors are conspicuously absent from the organising committee (ironically it was run by a Marketing guy); likewise for Economics and OB professors when we are discussing incentives and remuneration. So why is that?
Is it that deep down we don’t actually believe what we teach? Or is it that we just don’t believe what any of our colleagues in other departments teach…? And that it could be somehow relevant to practice – including our own? Why do we charge companies and students small – and not so small – fortunes to take our guidance on how to make strategy, brands, and remuneration systems only to see that when our own organisation is dealing with them it all goes out the door?
I guess I simply don’t understand the psychology behind this. Wait… perhaps I should go ask my Organizational Behaviour professors down the corridor!
Since writing the piece below – perhaps not surprisingly; although it took me a bit by surprise (I didn’t think anyone actually read that stuff) – Sir Andrew contacted me. One could say that he took the oral exam following his essay on the School’s growth plans and passed it (with a distinction!).
In all seriousness, in hindsight, I think I was unfair to him – perhaps even presumptuous. I wrote “a systematic analysis based on what we teach in our own classrooms and publish in our books and journals has, it seems, not even been considered” and, now, I think I should not have written this. That I haven’t been involved in the process much and therefore have not seen the analysis of course does not mean it was never conducted. And it is a bit unfair, from the sidelines, to throw in a comment like that when someone has put in so much careful work. I apologise!
In fact, although Sir Andrew never lost his British cool, charm and good sense of humour, I realise it must actually have been “ever so slightly annoying” for him to read that comment, especially from a colleague, and he doesn’t deserve that. So, regarding the specifics of this example: forget it! Ban it from your minds, memory, bookmarks and favourites (how would this Vermeulen guy know?! he wasn’t even there!)!
That you should pay more attention to Marketing professors when considering your school’s brand name, more attention to your OB professors when considering your incentive systems and values, more attention to Finance professors when managing your endowment and, God forbid, sometimes even to some strategy professors when considering your school’s strategy, I feel, does stand – so don’t throw out the baby with the bathwater just yet. But, yes, do get rid of that stinky bathwater.
Stevie Spring, who recently stepped down after a successful stint as CEO of Future plc, the specialty magazine publisher, once told me, “I am not really the company’s CEO; what I really am is its Chief Story Teller.”
What she meant is that she believed that telling a story was her most important task as a CEO. Actually, she insisted, her job was to tell the same story over and over again. And when she said ‘a story’, she meant that her job was to tell her representation of the company’s strategy: the direction she wanted to take the business and how that was going to make it prosper and survive. She felt that a good CEO should tell that kind of story repeatedly, to all employees, shareholders, fund managers and analysts. For, indeed, a good strategy does tell a story.
All successful CEOs whom I have seen were great storytellers. Not necessarily because of their oratorical skills, but because the characteristics of the strategy they had put together lent themselves to being told like a story — and a good one too! The most important thing for a CEO to do is to provide a coherent, compelling strategic direction for the company, one that is understood by everyone who has to contribute to its achievement. For that, a story must be told.
When I say this, I am not implying that CEOs need to engage in fiction, nor do they need to be overly dramatic. In my view, a good business strategy story has three characteristics.
First, the story must provide clear choices.
Stevie Spring’s choices were as clear as her forthright language: “We provide specialty magazines, for young males, in Britain.” Hence, it was clear what was out; there were to be no magazines on, say, ‘music’ (that is too broad), no magazines in Germany (although that could be a perfectly profitable business for someone else) and no magazines on pottery or vegetable gardens (unless those have recently seen a surge in popularity among young males in the UK without my knowing it). A good strategy story has to contain such a set of genuine choices.
Moreover, it has to be clear how the choices made by the company’s leaders hang together. For example, Frank Martin, who as CEO orchestrated the revival of the British model-train maker, Hornby, by turning it from a toy company into a hobby company, put his strategy story in just 15 words: “We make perfect scale models for adult collectors, which appeal to some sense of nostalgia.” He decided to focus on making perfect scale models because that is what collectors look for. Moreover, people would usually specifically collect the Hornby brand because it reminded them of their childhood, and with it a nostalgic, bygone era. Frank Martin’s choices were not just a bunch of disconnected strategic decisions; they hung together, and, combined, made for a logical story.
Second, the story must tie to the company’s resources.
Importantly, the set of choices has to be clearly linked to the company’s unique resources, those that can give them a competitive advantage in an attractive segment of the market. Although Hornby had been hovering on the brink of bankruptcy for a decade, it still had some valuable resources. First of all, it possessed a valuable brand that was very well-known and appreciated by people who had owned a Hornby train as children.
Additionally, the company had a great design capability in its hometown of Margate. However, these resources weren’t worth much when competing with the cheaper Chinese toy makers. The children who wanted a toy train for their birthday didn’t know (and couldn’t care less) about the Hornby brand. The precision modelling skills of the engineers in Margate weren’t of much value in the toy segment, where things mostly had to be robust and durable. However, these two resources — an iconic brand and a design capability — were of considerable value when making ‘perfect scale models for adult collectors’. It was a perfect match of existing resources to strategy.
I observed a similar thing at the Sadler’s Wells theatre. Ten years ago, before the current CEO Alistair Spalding took over, the theatre put on all sorts of grand shows in various performing arts. Yet, the company was in dire straits, losing money night after night, and by the bucketload. Then, Spalding took over and marked his leadership with a clear story. He started telling everyone that the theatre was destined ‘to be the centre of innovation in dance’.
He did this because the company was blessed with two valuable resources: (1) an historic reputation for dance (although it had diversified outside dance in the preceding years) and (2) a theatre once designed specifically with dance in mind. Spalding understood that, with these unique resources, he needed to focus the theatre on dance again. Beyond that, he made it the spider in the web, a place where various innovative people and dance forms came together to create new art, a place where stars were formed.
Third, the story must explain a competitive advantage.
The story must not only provide choices that are linked to resources, it must also explain how these choices and resources are going to give the company a competitive advantage in an attractive market, one that others can’t easily emulate. For example, Hornby’s resources enabled it to make perfect scale models for adult collectors better than anyone else, but those adult collectors also happened to form a very affluent and growing segment, one in which margins were much better than in the super-competitive toy market. It isn’t much good to have a competitive advantage in a dying market; you want to be able to do something better than anyone else in a market that will make you grow and prosper.
Thus, it has to be clear from your strategy story why the market is attractive and how the resources are going to enable you to capture the value in that market better than anyone else. The story of the CEO of Fremantle Media, Tony Cohen, for example, was that his company was going to make television productions that were replicable in other countries, with spillovers into other media. Because of their worldwide presence, Fremantle Media were better than their national competitors at rolling out productions such as The X Factor, Pop Idol, game shows and sitcoms. While their local competitors could also develop attractive and innovative shows, Fremantle’s multinational presence enabled it to reap more value from them. Therefore, that’s what they focused upon: shows that they could replicate across the globe. It was their competitive advantage, and they built their story around it.
Of course, a good story alone is not enough. A leader still needs good products, people, marketing, finance and so on. But, without a good story, a leader will find it impossible to combine people and resources into a forceful strategic thrust. A good story is a necessary — although, alone, not sufficient — condition for success.
My message for leaders: if you get your story right, it can be a very powerful management tool indeed. It works to convince analysts, shareholders and the public that where you are taking the company is worth everyone’s time, energy and investment.
Perhaps even more importantly, it can provide inspiration to the people who will have to work with and implement the strategy. If employees understand the logic behind a company’s strategic choices and see how it might give the company a sustainable advantage over its competitors, they will soon believe in it. They will soon embrace it. And they will soon execute it. Collective belief is a strong precursor of success. Thus, a good story can spur a company forward and eventually make the story come true.
If you have ever been unlucky enough to attend a large gathering of strategy academics – as I have, many times – it may have struck you that at some point during such a feast (euphemistically called “conference”), the subject matter would turn to talks of “relevance”. It is likely that the speakers were a variety of senior and grey – in multiple ways – intermingled with aspiring Young Turks. A peculiar meeting of minds, where the feeling might have dawned on you that the senior professors were displaying a growing fear of bowing out of the profession (or life in general) without ever having had any impact on the world they spent a lifetime studying, while the young assistant professors showed an endearing naivety believing they were not going to grow up like their academic parents.
And the conclusion of this uncomfortable alliance – under the glazed eyes of some mid-career, associate professors, who could no longer and not yet care about relevance – will likely have been that “we need to be better at translating our research for managers”; that is, if we’d just write up our research findings in more accessible language, without elaborating on the research methodology and theoretical terminology, managers would immediately spot the relevance in our research and eagerly suck up its wisdom.
And I think that’s bollocks.
I don’t think it is bollocks that we – academics – should try to write something that practicing managers are eager to read and learn about; I think it is bollocks that all it needs is a bit of translation in layman’s terms and the job is done.
Don’t kid yourself – I am inclined to say – it ain’t that easy. In fact, I think there are three reasons why I never see such a translation exercise work.
1. Underestimation of the craft
I believe it is an underestimation of the intricacies of the underlying structure of a good managerial article, and the subtleties of how to convincingly write for practicing managers. If you’re an academic, you might remember that in your first year as a PhD student you had the feeling it wasn’t too difficult to write an academic article such as the ones you had been reading for your first course, only to figure out, after a year or two of training, that you had been a bit naïve: you had been (blissfully) unaware of the subtleties of writing for an academic journal; how to structure the arguments; which prior studies to cite and where; which terminology to use and what to avoid; and so on. Well, good managerial articles are no different; if you haven’t developed the skill yet to write one, you likely don’t quite realise what it takes.
2. False assumptions
It also seems that academics, wanting to write their first managerial piece, immediately assume they have to be explicitly prescriptive, and tell managers what to do. And the draft article – invariably based on “the five lessons coming out of my research” – would indeed be fiercely normative. Yet, those messages often seem either impractically precise (“take up a central position in a network with structural holes”) or too simple to be of any real use (“choose the right location”). You need to capture a busy executive’s attention and interest, giving them the feeling that they have gained a new insight into their own world by reading your work. If that is prescriptive: fine. But often precise advice is precisely wrong.
3. Lack of content
And, of course, more often than not, there is not much worth translating… Because people have been doing their research with solely an academic audience in mind – and the desire to also tell the real world about it only came later – it has produced no insight relevant for practice. I believe that publishing your research in a good academic journal is a necessary condition for it to be relevant; crappy research – no matter how intriguing its conclusions – can never be considered useful. But rigour alone, unfortunately, is not a sufficient condition for it to be relevant and important in terms of its implications for the world of business.
My earlier post – “can’t believe it” – triggered some bipolar comments (and further denials), including the question of the extent to which this behaviour can be observed among academics studying strategy. And, regarding the latter, I think: yes.
The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we – largely without realising it – are much more prone to notice things that confirm our prior beliefs. Things that go counter to them often escape our attention.
Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but we still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them, and therefore see people start to question the research itself vehemently (if not aggressively and emotionally).
It becomes yet more worrying to me – on a personal level – if even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a research project showing no pay differences or a study showing that women are underpaid relative to men?
It’s only human and – slightly unfortunately – researchers are also human. And researchers are also reviewers and gate-keepers of the papers of other academics that are submitted for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.
And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal as compared to a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about), and perhaps vice versa.
No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving when it comes to the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, research may end up being biased and misleading.
So, I have been running a little experiment on twitter. Oh well, it doesn’t really deserve the term “experiment” – at least in an academic vocabulary – because there certainly are no treatment effects or control groups. It does deserve the term “little” though, because there are only four observations.
My experiment was to post a few recent findings from academic research that some might find mildly controversial or – as it turns out – offending. These four hair-raising findings were 1) selling junk food in schools does not lead to increased obesity, 2) family-friendly workplace practices do not improve firm performance (although they do not decrease it either), 3) girls take longer to heal from concussions, 4) firms headed up by CEOs with broader faces show higher profitability.
Only mildly controversial I’d say, and only to some. I was just curious to see what reactions it would trigger. Because I have noticed in the past that people seem inclined to dismiss academic evidence if they don’t like the results. If the results are in line with their own beliefs and preconceptions, the methods and validity of the research are much less likely to be called stupid.
Selling junk food in schools does not lead to increased obesity is the finding of a very careful study by professors Jennifer Van Hook and Claire Altman. It provides strong evidence that selling junk food in schools does not lead to more fat kids. One can then speculate why this is – and their explanation that children’s food patterns and dietary preferences get established well before adolescence may be a plausible one – but you can’t deny their facts. Yet, it did lead to “clever” reactions such as “says more about academic research than junk food, I fear…”, by people who clearly hadn’t actually read the study.
Family-friendly workplace practices do not improve firm performance is another finding that is not welcomed by all. This large and competent study, by professors Nick Bloom, Toby Kretschmer and John van Reenen, was actually read by some, be it clearly without a proper understanding of its methodology (which, indeed, it being an academic paper, is hard to fully appreciate without proper research methodology training). It led to reactions that the study was “in fact, wrong”, made “no sense”, or even that it really showed the opposite; these silly professors just didn’t realise it.
Girls take longer to heal from concussions is the empirical fact established by Professor Tracey Covassin and colleagues. Of course there is no denying that girls and boys are physiologically different (one cursory look at my sister in the bathtub already taught me that at an early age), but the aforementioned finding still led to swift denials such as “speculation”!
That firms headed up by CEOs with broader faces achieve higher profitability – a careful (and, in my view, quite intriguing) empirical find by my colleague Margaret Ormiston and colleagues – triggered reactions such as “sometimes a study tells you more about the interests of the researcher, than about the object of the study” and “total nonsense”.
So I have to conclude from my little (academically invalid) mini-experiment that some people are inclined to dismiss results from research if they do not like them – and even without reading the research or without the skills to properly understand it. In contrast, other, nicer findings that I had posted in the past, which people did want to believe, never led to outcries of bad methodology and mentally retarded academics and, in fact, were often eagerly retweeted.
We all look for confirmation of our pre-existing beliefs and don’t like it much if these comfortable convictions are challenged. I have little doubt that this also heavily influences the type of research that companies conduct, condone, publish and pay attention to. Even if the findings are nicer than we preconceived (e.g. the availability of junk food does not make kids consume more of it), we prefer to stick to our old beliefs. And I guess that’s simply human; people’s convictions don’t change easily.
In the field of strategy, we always make a big thing out of differentiation: we tell firms that they have to do something different in the market place, and offer customers a unique value proposition. Ideas around product differentiation, value innovation, and whole Blue Oceans are devoted to it. But we also can’t deny that in many industries – if not most industries – firms more or less do the same thing.
Whether you take supermarkets, investment banks, airlines, or auditors, what you get as a customer is highly similar across firms.
- Ability to execute: What may be the case, is that despite doing pretty much the same thing, following the same strategy, there can be substantial differences between the firms in terms of their profitability. The reason can lie in execution: some firms have obtained capabilities that enable them to implement and hence profit from the strategy better than others. For example, Sainsbury’s supermarkets really aren’t all that different from Tesco’s, offering the same products at pretty much the same price in pretty much the same shape and fashion in nearly identical shops with similarly tempting routes and a till at the end. But for many years, Tesco had a superior ability to organise the logistics and processes behind their supermarkets, raking in substantially higher profits in the process.
- Shake-out: As a consequence of such capability differences – although it can be a surprisingly slow process – we may see firms with homogeneous goods start to compete on price, margins decline to zero, and the least efficient firms get pushed out of the market. And one can hear a sigh of relief amongst economists: “our theory works” (not that we particularly care about the world of practice, let alone feel inclined to adapt our theory to it, but it is more comforting this way).
- A surprisingly common anomaly? But it also can’t be denied that there are industries in which firms offer pretty much the same thing, have highly similar capabilities, are not any different in their execution, and still maintain ridiculously high margins for a sustained period of time. And why is that? For example, as a customer, when you hire one of the Big Four accounting firms (PwC, Ernst & Young, KPMG, Deloitte), you really get the same stuff. They are organised pretty much the same way, they have the same type of people and cultures, and have highly similar processes in place. Yet, they also (still) make buckets of money, repeatedly turning and churning their partners into millionaires.
“But such markets shouldn’t exist!” we might cry out in despair. But they do. Even the Big Four themselves will admit – be it only in covert private conversations carefully shielding their mouths with their hands – that they are really not that different. And quite a few industries are like that. Is it a conspiracy, illegal collusion, or a business X file?
None of the above I am sure, or perhaps a bit of all of them… For one, industry norms seem to play a big role in much of it: unwritten (sometimes even unconscious), collective moral codes, sometimes even crossing the globe, in terms of how to behave and what to do when you want to be in this profession. Which includes the minimum margin to make on a surprisingly undifferentiated service.
I always enjoy witnessing a good debate. And I mean the type of debate where one person is given a thesis to defend, while the other person speaks in favour of the antithesis. Sometimes – when smart people really get into it – seeing two debaters line up the arguments and create the strongest possible defence can really clarify the pros and cons in my mind and hence make me understand the issue better.
For example – be it one in a written format – recently my good friend and colleague at the London Business School, Costas Markides, was asked by Business Week to debate the thesis that “happy workers will produce more and do their jobs better”. Harvard’s Teresa Amabile and Steven Kramer had the (relatively easy) task of defending the “pro”. I say relatively easy, because the thesis seems intuitively appealing, it is what we’d all like to believe, and they have actually done ample research on the topic.
My poor London Business School colleague was given the hapless task to defend the “con”: “no, happy workers don’t do any better”. Hapless indeed.
In fact, in spite of receiving some hate mail in the process, I think he did a rather good job. I am giving him the assessment “good” because he did indeed make me think. He argues that having happy, smiley employees all around might not necessarily be a good sign, because it might be a signal that something is wrong in your organisation, and that you’re perhaps not making the tough but necessary choices.
As I said, it made me think, and that can’t be bad. Might we not be dealing with a reversal of cause and effect here? That is: well-managed companies end up with happy employees, but that does not mean that making your employees happy, as a goal in and of itself, will get you a better organisation. At least, it is worth thinking about.
Although a good debate might seem a natural thing to have in an academic institution, it is actually not easy to organise one in business academia. Most people are simply reluctant to do it – as I found out organising our yearly Ghoshal Conference at the London Business School – and perhaps they are right, because even fewer people are any good at it.
I guess that is because, to a professor, it feels unnatural to adopt and defend just one side of the coin, because we are trained to be nuanced about stuff and examine and see all sides of the argument. It is also true that (the more naïve part of) the audience will start to associate you with that side of the argument, “as if you really meant it”. Many of the comments Costas received from the public were of that nature, i.e. “he is that moronic guy who thinks you should make your employees unhappy”. Which of course is not what he meant at all. Nor was it the purpose of the debate.
Yet, I also think it is difficult to find people willing to debate a business issue because academics are simply afraid to have an opinion. We are not only trained to examine and see all sides of an argument, we are also trained to not believe in something – let alone argue in favour of it – until there is research that produced supportive evidence for it. In fact, if in an academic article you would ever suggest the existence of a certain relationship without presenting evidence, you’d be in for a good bellowing and a firm rejection letter. And perhaps rightly so, because providing evidence and thus real understanding is what research is about.
But, at some point, you also have to take a stand. As a paediatric neurologist once told me, “what I do is part art, part science”. What he meant is that he knew all the research on all medications and treatments, but at the end of the day every patient is unique and he would have to make a judgement call on what exact treatment to prescribe. And doing that requires an opinion.
You don’t hear much opinion coming from the ivory tower in business academia. Which means that the average business school professor does not receive much hate mail. It also means he doesn’t have much of an audience outside of the ivory tower.
I am a long-standing fan of the Ig Nobel awards. The Ig Nobel awards are an initiative by the magazine AIR (Annals of Improbable Research) and are handed out on a yearly basis – often by real Nobel Prize winners – to people whose research “makes people laugh and then think” (although its motto used to be to “honor people whose achievements cannot or should not be reproduced” – but I guess the organisers had to first experience the “then think” bit themselves).
With a few exceptions they are handed out for real research, done by academics and published in scientific journals. Here are some of my all-time favourites:
- BIOLOGY 2002. Bubier, Pexton, Bowers, and Deeming. “Courtship behaviour of ostriches towards humans under farming conditions in Britain.” British Poultry Science 39(4).
- INTERDISCIPLINARY RESEARCH 2002. Karl Kruszelnicki (University of Sydney), for performing a comprehensive survey of human belly button lint – who gets it, when, what color, and how much.
- MATHEMATICS 2002. Sreekumar and Nirmalan (Kerala Agricultural University). “Estimation of the total surface area in Indian elephants.” Veterinary Research Communications 14(1).
- TECHNOLOGY 2001. Jointly to Keogh (Hawthorn), for patenting the wheel (in 2001), and to the Australian Patent Office for granting him the patent.
- PEACE 2000. The British Royal Navy, for ordering its sailors to stop using live cannon shells and instead just shout “Bang!”.
- LITERATURE 1998. Dr. Mara Sidoli (Washington), for the report “Farting as a defence against unspeakable dread.” Journal of Analytical Psychology 41(2).
To the best of my knowledge, there is (only) one individual who has won not only an Ig Nobel Award but also a Nobel Prize. That person is Andre Geim. Geim – who is now at the University of Manchester – long held the habit of dedicating a fairly substantial proportion of his time to simply mucking about in his lab, trying to do “cool stuff”. In one such session, together with his doctoral student Konstantin Novoselov, he used a piece of ordinary sticky tape (which they allegedly found in a bin) to peel off a very thin layer of graphite taken from a pencil. They managed to make the layer of carbon one atom thick, creating the material “graphene”.
In another session, together with Michael Berry from the University of Bristol, he experimented with the force of magnetism. Using a magnetised metal slab and a current-carrying coil of wire as an electromagnet, they tried to create a magnetic force that exactly balanced gravity, in order to make various objects “float”. Eventually, they settled on a frog – which, like humans, consists mostly of water – and indeed managed to make it levitate.
The one project got Geim the Ig Nobel; the other one got him the Nobel Prize.
“Mucking about” was the foundation of these achievements. The vast majority of such experiments don’t go anywhere; some lead to an Ig Nobel and make people laugh; others result in a Nobel Prize. Many of man’s great discoveries – in technology, medicine or art – have been achieved by mucking about. And many great companies were founded by mucking about, in a garage (Apple), a dorm room (Facebook), or a kitchen and a room above a bar (Xerox).
Unfortunately, in strategy research we don’t muck about much. In fact, people are actively discouraged from doing so. During pretty much any doctoral consortium, junior faculty meeting, or annual faculty review, a young academic in the field of Strategic Management is told – with ample insistence – to focus, figure out in what subfield he or she wants to be known, “who the five people are that are going to read your paper” (heard this one in a doctoral consortium myself), and “who your letter writers are going to be for tenure” (heard this one in countless meetings). The field of Strategy – or any other field within a business school for that matter – has no time and tolerance for mucking about. Disdain and a weary shaking of the head are the fates of those who try, and step off the proven path in an attempt to do something original with uncertain outcome: “he is never going to make tenure, that’s for sure”.
And perhaps that is also why we don’t have any Nobel Prizes.
“So you want to start a company. You’ve finished your undergraduate degree and you’re peering into the haze of your future. Would it be better to continue on to an MBA or do an advanced degree in a nerdy pursuit like engineering or mathematics? Sure, tech skills are hugely in demand and there are a few high-profile nerd success stories, but how often do pencil-necked geeks really succeed in business? Aren’t polished, suited and suave MBA-types more common at the top? Not according to a recent white paper from Identified, tellingly entitled “Revenge of the Nerds.”
Interested? Yes, it does sound intriguing, doesn’t it? It is the start of an article, written by a journalist, based on a report by a company called “Identified”. In the report, you can find that “Identified is the largest database of professional information on Facebook. Our database includes over 50 million Facebook users and over 1.2 billion data points on professionals’ work history, education and demographic data”.
In the report, based on the analysis of data obtained from Facebook, under the header “the best degree for start-up success”, Identified claims to present some “definitive conclusions” about “whether an MBA is worth the investment and if it really gets you to the top of the corporate food chain”. Let me no longer hold you in suspense (although I think by now you see this one coming from a mile or two away, like a Harry and Sally romance): their definitive conclusion is “that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day”.
So I have read the report…
[insert deep sigh]
and – how shall I put it – I have a few doubts… ( = polite English euphemism). I think there is no way (on earth) that the authors can reach this conclusion based on the data that they’ve got. Allow me to explain:
Although Identified has “assembled a world class team of 15 engineers and data scientists to analyse this vast database and identify interesting trends, patterns and correlations” I am not entirely sure that they are not jumping to a few unwarranted conclusions. ( = polite English euphemism)
So, when they dig up from Facebook all the profiles of anyone listed as “CEO” or “founder”, they find that about ¾ are engineers and a mere ¼ are MBAs. (Actually, they don’t even find that, but let me not get distracted here). I have no quibbles with that; I am sure they do find what they find; after all, they do have “a world class team of 15 engineers and data scientists”, and a fact is a fact. What I have more quibbles with is how you get from that to the conclusion that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day.
Perhaps it may seem obvious and a legitimate conclusion to you: more CEOs have an engineering degree than an MBA, so surely getting an engineering degree is more likely to enable you to become a CEO? But, no, that is where it goes wrong; you cannot draw this conclusion from those data. Perhaps “a world class team of 15 engineers and data scientists [able] to analyse this vast database and identify interesting trends, patterns and correlations” are superbly able at digging up the data for you but, apparently, they are less skilled in drawing justifiable conclusions. (I am tempted to suggest that, for this, they would have been better off hiring an MBA, but will fiercely resist that temptation!)
The problem is what we call “unobserved heterogeneity”, coupled with some “selection bias”, finished off with some “bollocks” (one of which is not a generally accepted statistical term) – and in this case there is lots of it. For example – to start with a simple one – perhaps there are simply a lot more engineers trying to start a company than MBAs. If there are 20 engineers trying to start a company and 9 of them succeed, while there are 5 MBAs trying it and 3 of them succeed, can you really conclude that an engineering degree is better for start-up success than an MBA?
But, you may object, why would there be more engineers trying to start a business? Alright then, since you insist: suppose that out of 10 engineers 9 succeed and out of 10 MBAs only 3 do, but the nine head $100,000 businesses and the three head $100 million ones. Still so sure that an engineering degree is more useful to “get you to the top of the corporate food chain”? And what if the MBA companies have all been in existence for 15 years while none of the engineering start-ups make it past year 2?
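The arithmetic behind these hypothetical scenarios is easy to verify. Below is a minimal sketch using only the made-up numbers from the text (none of this is real data):

```python
# Scenario 1: far more engineers than MBAs *attempt* to start a company.
# All numbers are the hypothetical ones from the text, not real data.
eng_tried, eng_won = 20, 9
mba_tried, mba_won = 5, 3

founders = eng_won + mba_won          # 12 successful founders in total
eng_share = eng_won / founders        # 75% of successful founders are engineers...
eng_rate = eng_won / eng_tried        # ...but only 45% of engineers succeeded,
mba_rate = mba_won / mba_tried        # while 60% of MBAs did.

print(f"Engineers among successful founders: {eng_share:.0%}")
print(f"Success rate - engineers: {eng_rate:.0%}, MBAs: {mba_rate:.0%}")

# Scenario 2: counting successes ignores the *size* of the success.
# Nine engineer-led firms at $100,000 each vs three MBA-led firms
# at $100 million each.
eng_value = 9 * 100_000
mba_value = 3 * 100_000_000
print(f"Total value - engineer firms: ${eng_value:,}, MBA firms: ${mba_value:,}")
```

The point of the sketch: the share of engineers among successful founders (75%) and the per-degree success rate (45% versus 60%) answer entirely different questions, and the headcount of successes says nothing about their value.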
And these are of course only very crude examples. There are likely more subtle processes going on as well. For instance, the same qualities that make someone choose to do an engineering degree could also prompt him or her to start a company, yet this same person might have been better off (in terms of being able to make the start-up a success) had s/he done an MBA. And if you buy none of the above (because you are an engineer, or about to be engaged to one), what about the following: people who choose to do an engineering degree are inherently smarter and more able than MBAs, and hence they start more, and more successful, companies. However, that still leaves wide open the possibility that such a very smart and able person would have been even more successful had s/he chosen to do an MBA before venturing.
What can you conclude from their findings?
I could go on for a while (and frankly I will) but I realise that none of my aforementioned scenarios will be exactly the right one; the point is that there might very well be a bit of several of them going on. You cannot compare the ventures started by engineers with the ventures headed by MBAs, you can’t compare the two sets of people, you can’t conclude that engineers are more successful at founding companies, and you certainly cannot conclude that getting an engineering degree makes you more likely to succeed in starting a business. So, what can you conclude from the finding that more CEOs/founders have a degree in engineering than an MBA? Well… precisely that: that more CEOs/founders have a degree in engineering than an MBA. And, I am sorry, not much else.
Real research (into complex questions such as “what degree is most likely to lead to start-up success?”) is more complex. And so, likely, will be the answer. For some types of businesses an MBA might be better, and for others an engineering degree. And some types of people might be helped more by an MBA, where other types are better off with an engineering degree. There is nothing wrong with deriving some interesting statistics from a database, but you have to be modest and honest about the conclusions you attach to them. It may sound more interesting to claim that you have found a definitive conclusion about what degree leads to start-up success – and it certainly will be more eagerly repeated by journalists and in subsequent tweets (as happened in this case) – but I am afraid that does not make it so.
Over the weekend, an (anonymized) interview was published in a Dutch national newspaper with the three “whistle blowers” who exposed the enormous fraud of Professor Diederik Stapel. Stapel had gained stardom status in the field of social psychology but, simply speaking, had been making up all his data all the time. There are two things that struck me:
First, in a previous post about the fraud – based on a flurry of newspaper articles and the interim report put together by a committee examining the fraud – I wrote that it was eventually his clumsiness in faking the data that got him caught. That general picture certainly remains (he wasn’t very good at faking data; I think I could easily have done a better job, although I have never even tried anything like that, honest!), but it wasn’t as clumsy as the newspapers sometimes made it out to be.
Specifically, I wrote: “eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments, gave those to his various PhD students to analyze, who then in disbelief slaving away in their adjacent cubicles discovered that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap”. It now seems the “substantial overlap” was merely part of one column of data. Plus, there were various other things that got him caught.
I don’t beat myself too hard over the head with my keyboard about repeating this misrepresentation by the newspapers (although I have given myself a small slap on the wrist – after having received a verbal one from one of the whistlers) because my piece focused on the “why did he do it?” rather than the “how did he get caught”, but it does show that we have to give the three whistle blowers (quite) a bit more credit than I – and others – originally thought.
The second point that caught my attention is that, since the fraud was exposed, various people have come out admitting that they had “had suspicions all the time”. You could say “yeah right”, but there do appear to be quite a few signs that various people had indeed been having their doubts for a longer time. For instance, I have read an interview in which a former colleague of Stapel at Tilburg University credibly admits to this, I have spoken directly to people who said there had been rumours for a while, and the article with the whistle-blowers suggests even Stapel’s faculty dean might not have been entirely dumbfounded that it had all been too good to be true after all… All the people who admit to having had doubts in private state that they did not feel comfortable raising the issue while everyone just seemed to applaud Stapel and his Science publications.
This reminded me of the Abilene Paradox, first described by Professor Jerry Harvey of George Washington University. He described a leisure trip which he, his wife, and his parents made in Texas in July, in his parents’ un-airconditioned old Buick, to a town called Abilene. It was a trip they had all agreed to – or at least not disagreed with – but, as it later turned out, none of them had wanted to go on. “Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnace-like temperature through a cloud-like dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go.”
The Abilene Paradox describes the situation where everyone goes along with something, mistakenly assuming that other people’s silence implies that they agree. And the (erroneous) feeling of being the only one who disagrees makes each person shut up as well, all the way to Abilene.
People had suspicions about Stapel’s “too good to be true” research record and findings but did not dare to speak up while no-one else did.
It seems there are two things that eventually made the three whistle blowers speak up and expose Stapel: Friendship and alcohol.
They had struck up a friendship and one night, fuelled by alcohol, raised their suspicions with one another. And, crucially, they decided to do something about it. Perhaps there are some lessons in this for the world of business. For example, Jim Westphal, who has done extensive, thorough research on boards of directors, showed that boards often suffer from the Abilene Paradox, for instance when confronted with their company’s new strategy. Yet Jim and colleagues also showed that friendship ties within top management teams might not be such a bad thing. We are often suspicious of social ties between boards and top managers, fearful that they might cloud judgment and make boards reluctant to discipline a CEO. But such friendship ties – whether fuelled by alcohol or not – might also help lower the barriers to resolving the Abilene Paradox. So perhaps we should make friendship and alcohol mandatory – religion permitting – both during board meetings and academic gatherings. It would undoubtedly help make them more tolerable as well.
Researchers in Management and Strategy worry a lot about bias – statistical bias. In case you’re not such an academic researcher, let me briefly explain.
Suppose you want to find out how many members of a rugby club have their nipples pierced (to pick a random example). The problem is, the club has 200 members and you don’t want to ask them all to take their shirts off. Therefore, you select a sample of 20 of the guys and ask them to bare their chests. After some friendly bantering they agree, and it then appears that no fewer than 15 of them have their nipples pierced, so you conclude that the majority of players in the club have likely undergone the slightly painful (or so I am told) aesthetic enhancement.
The problem is, there is a chance that you’re wrong. There is a chance that due to sheer coincidence you happened to select 15 pierced pairs of nipples where among the full set of 200 members they are very much the minority. For example, if in reality out of the 200 rugby blokes only 30 have their nipples pierced, due to sheer chance you could happen to pick 15 of them in your sample of 20, and your conclusion that “the majority of players in this club has them” is wrong.
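That chance can in fact be computed exactly, using the hypergeometric distribution. A quick sketch with the made-up club numbers above (200 members, 30 of whom are actually pierced, and a sample of 20):

```python
from math import comb  # binomial coefficients, Python 3.8+

members, pierced, sample = 200, 30, 20

def p_at_least(k_min):
    """P(the sample contains at least k_min pierced members) - hypergeometric."""
    total = comb(members, sample)
    return sum(comb(pierced, k) * comb(members - pierced, sample - k)
               for k in range(k_min, sample + 1)) / total

# The chance of an "unlucky" sample with 15+ pierced pairs is vanishingly small...
print(f"P(15 or more pierced in the sample): {p_at_least(15):.2e}")
# ...but it is not zero, which is exactly the residual risk described above.
```

So with these particular numbers the wrong conclusion is astronomically unlikely; the 5% convention discussed next exists because in real research the gap between sample and truth is usually far less dramatic.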
Now, in our research, there is no real way around this. Therefore, the convention among academic researchers is that it is ok, and you can claim your conclusion based on only a sample of observations, as long as the probability that you are wrong is no bigger than 5%. If it ain’t – and one can relatively easily compute that probability – we say the result is “statistically significant”. Out of sheer joy, we then mark that number with a cheerful asterisk * and say amen.
Now, I just said that “one can relatively easily compute that probability” but that is not always entirely true. In fact, over the years statisticians have come up with increasingly complex procedures to correct for all sorts of potential statistical biases that can occur in research projects of various natures. They treat horrifying statistical conditions such as unobserved heterogeneity, selection bias, heteroscedasticity, and autocorrelation. Let me not try to explain to you what they are, but believe me they’re nasty. You don’t want to be caught with one of those.
Fortunately, the life of the researcher is made easy by standard statistical software packages. They offer nice user-friendly menus where one can press buttons to solve problems. For example, if you have identified a heteroscedasticity problem in your data, there are various buttons to press that can cure it for you. Now, it is my personal estimate (but note, no claims of an asterisk!) that about 95 out of 100 researchers have no clue what happens inside their computers when they press one of those magical buttons, but that does not mean it does not solve the problem. Professional statisticians will frown and smirk at the thought alone, but if you have correctly identified the condition and the way to treat it, you don’t necessarily have to fully understand how the cure works (although I think it often would help in selecting the correct treatment). So far, so good.
Here comes the trick: All of those statistical biases are pretty much irrelevant. They are irrelevant because they are all dwarfed by another bias (for which there is no life-saving cure available in any of the statistical packages): publication bias.
The problem is that if you have collected a whole bunch of data and you don’t find anything or at least nothing really interesting and new, no journal is going to publish it. For example, the prestigious journal Administrative Science Quarterly proclaims in its “Invitation to Contributors” that it seeks to publish “counterintuitive work that disconfirms prevailing assumptions”. And perhaps rightly so; we’re all interested in learning something new. So if you, as a researcher, don’t find anything counterintuitive that disconfirms prevailing assumptions, you are usually not even going to bother writing it up. And in case you’re dumb enough to write it up and send it to a journal requesting them to publish it, you will swiftly (or less swiftly, dependent on what journal you sent it to) receive a reply that has the word “reject” firmly embedded in it.
Yet, unintentionally, this publication reality completely messes up the “5% convention”, i.e. the rule that you can only claim a finding as real if there is no more than a 5% chance that what you found is sheer coincidence (rather than a counterintuitive insight that disconfirms prevailing assumptions). In fact, the chance that what you are reporting is bogus is much higher than the 5% you so cheerfully claimed with your poignant asterisk. Because journals will only publish novel, interesting findings – and therefore researchers only bother to write up seemingly intriguing, counterintuitive findings – the chance that what eventually gets published is unwittingly BS is vast.
A recent article by Simmons, Nelson, and Simonsohn in Psychological Science (cheerfully entitled “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”) summed it up painfully clearly. If a researcher running a particular experiment does not find the result he was expecting, he may initially think “that’s because I did not collect enough data” and collect some more. He can also think “I used the wrong measure; let me use the other measure I also collected”, or “I need to correct my models for whether the respondent was male or female”, or “let me examine a slightly different set of conditions”. Yet taking these (extremely common) measures raises the probability that what the researcher finds in his data is due to sheer chance from the conventional 5% to a whopping 60.7%, without the researcher realising it. He will still cheerfully put the all-important asterisk in his table and declare that he has found a counterintuitive insight that disconfirms some important prevailing assumption.
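The mechanics behind that inflation are easy to demonstrate. Below is a toy simulation in the same spirit (it is not a replication of the Simmons, Nelson, and Simonsohn setup, and the exact rate it produces depends on my arbitrary simulation choices): the null hypothesis is true in every simulated study, yet allowing just two common escape routes – collecting more data after peeking, and switching to a second, correlated measure – pushes the false-positive rate well above the nominal 5%.

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the run is reproducible

def significant(a, b):
    """Crude two-sample test at the 5% level (normal approximation)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return abs(mean(a) - mean(b)) / se > 1.96

trials, false_positives = 2000, 0
for _ in range(trials):
    # The null is true: both groups come from the same distribution.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    hit = significant(a, b)
    if not hit:
        # "I did not collect enough data" - add observations and test again.
        a += [random.gauss(0, 1) for _ in range(10)]
        b += [random.gauss(0, 1) for _ in range(10)]
        hit = significant(a, b)
    if not hit:
        # "Let me use the other measure I also collected" - correlated outcome.
        a2 = [x * 0.5 + random.gauss(0, 1) * 0.87 for x in a]
        b2 = [x * 0.5 + random.gauss(0, 1) * 0.87 for x in b]
        hit = significant(a2, b2)
    false_positives += hit

print(f"False-positive rate with flexible analysis: {false_positives / trials:.1%}")
```

Even this mild version of “researcher degrees of freedom” lands comfortably above the advertised 5%; the full menu of flexibilities in the original article is what drives the figure towards 60.7%.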
In management and strategy research we do highly similar things. We for instance collect data with two or three ideas in mind in terms of what we want to examine and test with them. If the first idea does not lead to a desired result, the researcher moves on to his second idea and then one can hear a sigh of relief behind a computer screen that “at least this idea was a good one”. In fact, you might only be moving on to “the next good idea” till you have hit on a purely coincidental result: 15 bulky guys with pierced nipples.
Things get really “funny” when one realises that what is considered interesting and publishable is different in different fields in Business Studies. For example, in fields like Finance and Economics, academics are likely to be fairly skeptical whether Corporate Social Responsibility is good for a firm’s financial performance. In the subfield of Management people are much more receptive to the idea that Corporate Social Responsibility should also benefit a firm in terms of its profitability. Indeed, as shown by a simple yet nifty study by Marc Orlitzky, recently published in Business Ethics Quarterly, articles published on this topic in Management journals report a statistical relationship between the two variables which is about twice as big as the ones reported in Economics, Finance, or Accounting journals. Of course, who does the research and where it gets printed should not have any bearing on what the actual relationship is but, apparently, preferences and publication bias do come into the picture with quite some force.
Hence, publication bias vastly dominates any of the statistical biases we get so worked up about, making them pretty much irrelevant. Is this a sad state of affairs? Ehm…. I think yes. Is there an easy solution for it? Ehm… I think no. And that is why we will likely all be suffering from publication bias for quite some time to come.
The fraud of Diederik Stapel – professor of social psychology at Tilburg University in the Netherlands – was enormous. His list of publications was truly impressive, both in terms of the content of the articles and in terms of their sheer number and the prestige of the journals in which they were published: dozens of articles in all the top psychology journals in academia, with a number of them in famous general science outlets such as Science. His seemingly careful research was very thorough in its design, and was thought to reveal many intriguing insights about fundamental human nature. The problem was, he had made it all up…
For years – so we know now – Diederik Stapel made up all his data. He would carefully review the literature, design all the studies (with his various co-authors), set up the experiments, print out all the questionnaires, and then, instead of actually running the experiments and distributing the questionnaires, make it all up. Just like that.
He finally got caught because, eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments and gave those to his various PhD students to analyze, who then, slaving away in their adjacent cubicles, discovered in disbelief that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap. There was no denying it any longer: Diederik Stapel was making it up. He was immediately fired by the university, admitted to his lengthy fraud, and handed back his PhD degree.
In an open letter, sent to Dutch newspapers to try to explain his actions, he cited the huge pressure to come up with interesting findings that he had been under, in the publish-or-perish culture that exists in the academic world, which he had been unable to resist, and which led him to his extreme actions.
There are various things I find truly remarkable and puzzling about the case of Diederik Stapel.
- The first one is the sheer scale and (eventually) outright clumsiness of his fraud. It also makes me realize that there must be dozens, maybe hundreds of others just like him. They just do it a little bit less, a little less extremely, and are probably a bit more sophisticated about it, but they are subject to the exact same pressures and temptations as Diederik Stapel. Surely others give in to them as well. He got caught because he was flying so high, did it so much, and so clumsily. But I am guessing that for every fraud that gets caught through hubris, there are at least ten others that are not.
- The second one is that he did it at all. Of course because it is fraud, unethical, and unacceptable, but also because it seems he did not really need it. You have to realize that “getting the data” is just a very small proportion of all the skills and capabilities one needs to get published. You have to really know and understand the literature; you have to be able to carefully design an experiment, ruling out any potential statistical biases, alternative explanations, and other pitfalls; you have to be able to write it up so that it catches people’s interest and imagination; and you have to be able to see the article through the various reviewers and steps in the publication process that every prestigious academic journal operates. Those are substantial and difficult skills, all of which Diederik Stapel possessed. All he did was make up the data – just a small proportion of the total set of skills required, and something he could easily have outsourced to one of his many PhD students. Sure, you then would not have had the guarantee that the experiments would come out the way you wanted them to, but who knows, they might have.
- That’s what I find puzzling as well: at no point does he seem to have become curious whether his experiments might actually work without him making it all up. They were interesting experiments; wouldn’t you at some point be tempted to see whether they might work…?
- I also find it truly amazing that he never stopped. It seems he has much in common with Bernard Madoff and his Ponzi scheme, or the notorious traders in investment banks, such as £827 million Nick Leeson, who brought down Barings Bank with his massive fraudulent trades, Societe Generale’s €4.9 billion Jerome Kerviel, and UBS’s $2.3 billion Kweku Adoboli. The difference: Stapel could have stopped. For people like Madoff or the rogue traders there was no way back; once they had started the fraud there was no stopping it. But Stapel could have stopped at any point. Surely at some point he must have at least considered this? I guess he was addicted; addicted to the status and aura of continued success.
- Finally, what I find truly amazing is that he was teaching the Ethics course at Tilburg University. You just can’t make that one up; that’s Dutch irony at its best.
“Entrepreneurship can only be self-taught. There are many ways to do it right and even more wrong, but it cannot be processed, bottled, packaged, and delivered from a lectern”, one of our readers – Michael Marotta – commented on an earlier post.
I am not sure I agree with the suggestion in that statement, namely that entrepreneurship “can only be self-taught”. Of course we hear it quite often – “you cannot teach entrepreneurship” – but I have yet to see any evidence for it. Granted, that is a weak rebuttal, since the evidence that business education helps with anything is rather scarce (although there is some)!
However, the fact that the majority of entrepreneurs did not have formal business education tells me nothing. Suppose that, out of 1,000 attempted entrepreneurs, only 100 had formal business education. It might still be the case that 50 of those 100 became successful, while out of the 900 others only 300 became successful. That would mean that of the 350 successful entrepreneurs, a mere 50 had formal business education. Yet 50% of the business-educated entrepreneurs became successful, while only a third of the entrepreneurs without business education did.
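The base-rate arithmetic in this hypothetical can be checked in a few lines (the numbers are of course invented purely for illustration):

```python
# Hypothetical numbers from the example above (invented for illustration).
educated_total, educated_successful = 100, 50
other_total, other_successful = 900, 300

# Share of all successful entrepreneurs who had formal business education
share_educated = educated_successful / (educated_successful + other_successful)

# Success rate within each group
rate_educated = educated_successful / educated_total  # 50/100 = 0.5
rate_other = other_successful / other_total           # 300/900 ≈ 0.33

print(f"{share_educated:.0%} of successful entrepreneurs had business education,")
print(f"yet their success rate was {rate_educated:.0%} vs {rate_other:.0%}.")
```

The point: even if only a small minority of successful entrepreneurs went to business school, the education can still have substantially improved each graduate’s odds.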
My feeling about the potential influence of business education on the odds of becoming a successful entrepreneur is quite the opposite of Marotta’s. I see quite a few attempted entrepreneurs with good business ideas and plenty of energy who nevertheless make basic mistakes when trying to build their idea into a business. The sheer logic of how to set up a viable business – once you have had a good idea – is something that is open to being “processed, bottled, packaged, and delivered from a lectern” (although that is hardly what we do in B-school).
Having a great idea and ample vision and energy is perhaps a necessary condition for becoming a successful entrepreneur, but it is not sufficient; success requires many other skills, and for some of them education helps. If, say, 10 different skills are needed to become a successful entrepreneur and only 5 of them can be taught or enhanced through business education, those 5 will still clearly improve your odds of making it.
Perhaps the majority of successful entrepreneurs do not have formal business education, but I have yet to meet a successful entrepreneur who did go to business school and who proclaims that his or her education was not a great help in becoming a success. Invariably, those people claim their education helped them a lot. In fact, many such business school alumni donate generously to their alma mater. For example, one of London Business School’s successful alumni entrepreneurs, Tony Wheeler (founder of the Lonely Planet travel guides), regularly donates very substantial amounts of money to the School, because he believes his education there helped him greatly in making his business a success, and he wants others to have the same experience and opportunity.
In the absence of any formal evidence on whether business school education helps or hinders becoming a successful entrepreneur, I am inclined to rely on their judgement: business school education helps if you want to become a successful entrepreneur.
“I am not that surprised that an academic of entrepreneurship (are you kidding me?) would lead a story about one of the world’s best innovators and CEO’s about that he actually and in fact ! OMG had body odour as a teenager because of his diet, not to mention the rest of your embarrassing piece. Forbes would be best sticking with writers that are inspired by such great entrepreneurs as Steve Jobs, and not with writers such as this, who are unhappy they have not had the courage to ‘live the life they love and not settle’ and so sit in front of their computer with not much else to do but trying to bring others down. Shame on you Mr Vermeulen”.
This is just one of the comments I received on my earlier piece “Steve Jobs – the man was fallible” (also published on my Forbes blog). Of course, this was not unanticipated; having the audacity to suggest that, in fact, the great man did not possess the ability to walk on water was the closest thing to business blasphemy. And indeed a written stoning duly followed.
But why is suggesting that a human being like Steve Jobs – whom, in the same piece, I also called “a management phenomenon”, “fantastically able”, “a legend”, and “a great leader” – was in fact fallible considered by some to be such an act of blasphemy? All I did was claim that he was “fallible”, “not omnipotent”, and “not always right”, which, as far as I can see, comes with the definition of being human.
And I guess that’s exactly it: in life, and certainly in death, Steve Jobs transcended the status of being human and reached that of deity. A journalist at the Guardian compared the reaction (especially in the US) to the death of Steve Jobs with the reaction in England to the death of Princess Diana: a collective outpouring of almost aggressive emotion by people who only ever saw the person they were grieving for briefly on television or at best from a distance. Suggesting Princess Diana was fallible was not a healthy idea immediately following her death (and still isn’t); nor was suggesting Steve Jobs was human.
We are inclined to deify successful people in the public eye, and in our time that certainly includes CEOs. In the past, in various cultures, it may have been ancient warriors, Olympians, or saints. They became mythical and transcended humanity, quite literally reaching God-like status.
Historians and geneticists argue that this inclination for deification is actually deeply embedded in the human psyche, and that we have evolved to be prone to worship. There is increasing consensus that man came to dominate the earth – and, for instance, drove out the Neanderthals, who were in fact stronger, likely more intelligent, and had more sophisticated tools – because of our superior ability to organize into larger social systems. A crucial role in this, fostering social cohesion, was played by religion, which centers on myths and deities. This inclination for worship very likely became embedded in our genetic make-up, and it yearns to come out and be satisfied; great people such as Jack Welch, Steve Jobs, and Lady Di serve to fulfill this need.
But that of course does not mean that they were infallible and could in fact walk on water. We just don’t want to hear it. Great CEOs realize that their near deification is a gross exaggeration, and sometimes even get annoyed by the suggestion – Amex’s Ken Chenault told me that he did not like it at all, and I have seen the same reaction in Southwest’s Herb Kelleher. Slightly less great CEOs do start to believe in their own status – people like Enron’s Jeff Skilling or Ahold’s Cees van der Hoeven come to mind – and not coincidentally they are often associated with spectacular business downfalls. I have never spoken to Steve Jobs, but I am guessing he might not have disagreed with the qualifications “not omnipotent”, “not always right” and, most of all, “human”.
As a student at Reed College, Steve Jobs came to believe that if he ate only fruit he would eliminate all mucus and no longer need to shower. It didn’t work. He didn’t smell good. When he got a job at Atari, he was, given his odor, swiftly moved onto the night shift, where he would be less disruptive to his colleagues’ nostrils.
The job at Atari exposed him to the earliest generation of video games. It also exposed him to the world of business and what it meant to build up and run a company. Some years later, together with Steve Wozniak, he founded Apple in Silicon Valley (in a garage, of course) and quite quickly – although just in his late twenties – grew to be a management phenomenon, featured in Tom Peters and Bob Waterman’s legendary business book “In Search of Excellence”.
But, in fact, shortly after the book became a bestseller, by the mid-1980s, Apple was in trouble. Although its computers were far ahead of their time in terms of usability – mostly thanks to the Graphical User Interface (based on an idea Jobs had cunningly copied from Xerox) – they were just bloody expensive. Too expensive for most people. The so-called Lisa, for example, retailed for no less than $10,000 (and that is 1982 dollars!). John Sculley – then CEO – recalled: “We were so insular that we could not manufacture a product to sell for under $3,000.” Steve Jobs was fantastically able to assemble and motivate a team of people that managed to invent a truly revolutionary product, but he was also unable to turn it into profit.