Motivation Trumps IQ. What now?

An article recently posted in Slate reviews research showing that a significant portion of the variation in IQ test scores is attributable to motivation rather than ability. In one striking study, researchers measured children’s IQs and split them into High, Average, and Low groups. They then reran the test, offering the low group an M&M for every correct answer. As a result of this simple incentive, the low group’s score rose from 79 to 97 – on par with the average group.

Ok, so incentives work. Perhaps not a big surprise on many levels.

On the other hand, there is a large OB/HRM literature invested in the conclusion that performance increases are associated with hiring employees with a higher IQ. The assumption there is that IQ measures ability as opposed to motivation.

This raises a critical question for strategy scholars. Is motivation an immutable attribute of human capital? Read the rest of this entry »


Healthcare Price Control Shuffle…

Romney and Ryan have incorrectly characterized Obamacare as a “Raid on Medicare,” and news organizations and the Obama campaign have fired back that it is actually a program to reduce healthcare costs — an important achievement of the administration. This whole discussion misses the fundamental point that the $716 billion in savings would be the result of mandated price controls. Given that this is a major intervention, it is important to understand how these altered incentives will affect the U.S. healthcare system.

Medicare currently pays providers 30% less than private insurers and Obamacare will further reduce that to save $716 billion in payments to providers (hospitals, doctors, etc.). At the same time, broader coverage (another goal of the new law) will undoubtedly increase demand for services. How will these effects play out?

We already know that some providers are less willing to accept Medicare Read the rest of this entry »


Individual Bias and Collective Truth?

Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)

Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.

But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.

Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.

The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.


Fungibility v. Fetishes

For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don’t seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent “optical illusion” causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the “value-based” approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.

Some examples:

  • There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles address the question of whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus that they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly. There is a pervasive confusion between the form of payments and the total size of payments.
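A toy calculation makes the point concrete. All numbers below are hypothetical, invented purely for illustration: if competition pins down the total a subscriber will pay over a contract, a cut in the handset subsidy must be handed back through the service rate, leaving each party’s share of the surplus unchanged.

```python
# Toy illustration (hypothetical numbers): if competition fixes the total
# a carrier can extract from a subscriber, the split between handset
# subsidy and monthly service fees is irrelevant to either side.

def total_customer_outlay(handset_price, subsidy, monthly_fee, months=24):
    """Customer's all-in cost over a contract: phone net of subsidy, plus service."""
    return (handset_price - subsidy) + monthly_fee * months

# Scenario A: generous subsidy, higher monthly rate.
a = total_customer_outlay(handset_price=650, subsidy=450, monthly_fee=80)

# Scenario B: subsidy cut by $200; competition forces the carrier to give
# the $200 back through a lower monthly rate (200 / 24 per month).
b = total_customer_outlay(handset_price=650, subsidy=250, monthly_fee=80 - 200 / 24)

print(a, b)  # both total 2120 -- the mix changed, the total did not
```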

Read the rest of this entry »


Crowd-sourcing Strategy Formulation

The current issue of McKinsey Quarterly features an interesting article on firms crowd-sourcing strategy formulation. This is another way that technology may shake up the strategy field (see also Mike’s discussion of the MBA bubble). The article describes examples in a variety of companies. Some, like Wikimedia and Red Hat, aren’t much of a surprise given their open-innovation focus. However, we should probably take notice when more traditional companies (like 3M, HCL Technologies, and Rite-Solutions) use social media in this way.  For example, Rite-Solutions, a software provider for the US Navy, defense contractors, and fire departments, created an internal market for strategic initiatives:

Would-be entrepreneurs at Rite-Solutions can launch “IPOs” by preparing an Expect-Us (rather than a prospectus)—a document that outlines the value creation potential of the new idea … Each new stock debuts at $10, and every employee gets $10,000 in play money to invest in the virtual idea market and thereby establish a personal intellectual portfolio Read the rest of this entry »
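As a rough sketch of how such an internal market could work — note that the even-split allocation rule and the idea names below are my own invented simplification, not Rite-Solutions’ actual mechanism:

```python
# Toy sketch of an internal idea market: each idea "IPOs" at $10 and every
# employee gets $10,000 of play money (the two parameters quoted above).
# The allocation rule (spend your budget evenly across the ideas you back)
# is a hypothetical simplification for illustration only.

from collections import defaultdict

IPO_PRICE = 10
BUDGET = 10_000

def run_market(portfolios):
    """portfolios: {employee: [ideas they back]} -> {idea: (dollars, shares)}."""
    invested = defaultdict(int)
    for employee, ideas in portfolios.items():
        if not ideas:
            continue
        stake = BUDGET // len(ideas)        # split the budget evenly
        for idea in ideas:
            invested[idea] += stake
    # Dollars committed, and a crude "share count" at the $10 IPO price.
    return {idea: (amount, amount // IPO_PRICE) for idea, amount in invested.items()}

portfolios = {
    "alice": ["fire-dept-app", "navy-sim"],
    "bob":   ["fire-dept-app"],
    "carol": ["navy-sim", "fire-dept-app", "training-tool"],
}
print(run_market(portfolios))  # dollars committed signal which ideas the crowd backs
```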


Sleeping With the Enemy Part II

In an earlier post, I noted Target’s costly decision to end its on-line outsourcing arrangement with Amazon’s cloud service and take all its work in-house. The short-term costs were considerable, both in direct outlays and in performance degradation, and the long-term benefits were hard to pin down. Vague paranoia rather than careful analysis seemed to have driven the decision. I pointed out that firms often seemed unwilling to “sleep with the enemy,” i.e. purchase critical inputs from a direct rival, but the case for such reluctance was weak.

A few months ago, an apparent counterexample popped up. Swatch, the Swiss wristwatch giant, decided unilaterally to cease supplying mechanical watch assemblies to a host of competing domestic brands that are completely dependent on Swatch for these key components. These competitors (including Constant, LVMH, and Chanel) sued, fruitlessly, to force Swatch to continue to sell to them. The Swiss Federal Administrative Court backed up a deal Swatch cut with the Swiss competition authorities that allows Swatch to begin reducing its shipments to rivals. The competition authority will report later this year on how much grace time Swatch’s customers must be given to find new sources of supply, and these customers may appeal to the highest Swiss court. For now, Swatch’s customers are scrambling for alternative sources of supply in order to stay in business. The stakes are especially high because overall business is booming, with lots of demand in Asia.

Read the rest of this entry »


How do you grow a capability?

The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis) and little agreement even on a basic definition.  I don’t really like to get involved in definitional debates, though I think the idea of a capability, the ability to do/accomplish something (whether individual or collective), is fundamental for strategy scholars.

Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt).  One of the questions that I raised, and find quite intriguing, is how we might “grow” a capability.  The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling.  For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference).  I like that intuition.  As I work with colleagues in engineering and computer science, this “growth” mentality seems to be implicitly there.  Things are not taken for granted, but explained by “growing” them.  Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically.  What were the choices that led to this history?  Who are the central actors?  What are the incentives and forms of governance?  Etc.

So, if we were to “grow” a capability, I think there are some very basic ingredients.  First, I think understanding the nature, capability and choices of the individuals involved is important.  Second, the nature of the interactions and aggregation matters.  The interaction of  individuals and actors can lead to emergent, non-linear and collective outcomes.  Third, I think the structural and design-related choices (e.g., markets versus hierarchy) and factors are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”
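In that spirit, here is a deliberately minimal agent-based sketch — my own toy construction, not a model from the literature — of how a collective capability might be “grown” from the three ingredients above: heterogeneous individuals, an interaction rule, and a design choice (hierarchy versus market-like pairing) about how interactions are structured.

```python
import random

# Minimal agent-based sketch (a toy illustration, not a published model):
# a collective "capability" emerges from individual skills plus an
# interaction rule, rather than being attributed to unexamined "history."

random.seed(42)

class Agent:
    def __init__(self):
        self.skill = random.random()  # heterogeneous starting ability

def step(agents, pairing="hierarchy"):
    """One round of interaction. Under 'hierarchy' everyone partially
    imitates the best agent; under 'market' random pairs trade know-how."""
    if pairing == "hierarchy":
        best = max(a.skill for a in agents)
        for a in agents:
            a.skill += 0.1 * (best - a.skill)   # partial imitation of the leader
    else:
        random.shuffle(agents)
        for a, b in zip(agents[::2], agents[1::2]):
            top = max(a.skill, b.skill)
            a.skill += 0.1 * (top - a.skill)
            b.skill += 0.1 * (top - b.skill)

def collective_capability(agents):
    return sum(a.skill for a in agents) / len(agents)

agents = [Agent() for _ in range(50)]
before = collective_capability(agents)
for _ in range(20):
    step(agents, pairing="hierarchy")
after = collective_capability(agents)
print(round(before, 2), round(after, 2))  # capability rises through interaction
```

Swapping `pairing="hierarchy"` for `pairing="market"` changes how fast and how evenly the capability emerges — which is exactly the kind of structural choice the third “ingredient” points at.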

I’m not sure that the “how do you grow a capability”-intuition is helpful in all situations.  However, I do find that there is a tendency to use short-hand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and to more carefully investigate the constituent parts, mechanisms and interactions that lead to the development or “growth” of capability.


Manipulating journal impact factors

Via several folks on Facebook (e.g., Marcel Bogers, Der Chao Chen) – here’s a short blog post on how journals are manipulating their impact factors: coerced citations and manipulated impact factors – dirty tricks of academic journals.

Here’s the Science piece that the above blog post refers to: Coercive citation in academic publishing.  Here’s the data (pdf).


Duly Noted: Solving the Principal-Agent Problem in Firms: The dumbest idea in the world?

This article in Forbes argues that a new book by the Dean of the Rotman School provides an antidote to the rampant excesses of modern-day capitalism.  The principal swipe is against the landmark paper (over 29,000 Google Scholar citations) by Jensen and Meckling on both the prevalence of the principal-agent problem in the governance of firms and the various solutions to overcome it – including creating incentives that maximize shareholder value.  Quoting Jack Welch, former CEO of GE, the article says that maximizing shareholder value is the dumbest idea in the world.  I myself am not sure this is THE dumbest idea in the world – in fact, there are many more that would easily surpass P-A problem resolution – but I am sure this will ignite a debate about why firms exist, what the best governance mechanism for them is, and the role of economic theory and action in our lives.  I for one need to go back and read the article and then read the book.


Actually, small companies are better at innovation than large companies

Grant McCracken summarizes an Economist post that argues that big companies are better at innovation than small ones (well, he discusses both sides).

But theory says that small companies are actually the winners.

Economists have long wrestled with this “diseconomies” problem: why do smaller organizations outperform large ones?  (Todd Zenger’s 1992 Management Science piece summarizes this work nicely.)  Schumpeter indeed went both ways on this (Dick Langlois discusses the “two Schumpeters” thesis a bit here).  But yes, large organizations seemingly have the resources, complementary assets, access to talent, etc. to outperform small organizations.  But small organizations still outperform large ones.

Large organizations have lots of problems (I’ll spare the references, for now).  They

  • mis-specify incentives,
  • suffer from problems of social loafing (free-rider problem),
  • engage in unnecessary intervention, etc.

And, if large organizations had such an advantage, why not take this argument to the extreme and simply organize everything under one large firm?  That, of course, was one of Coase’s central questions.  Obviously the organization-market boundary matters and there are costs associated with hierarchy.

Sure – there are lots of contingencies, caveats and exceptions [insert example from Apple or 3M].  And, definitions matter [what exactly is “small” versus “large”].  But on the whole, the theory says small companies win in the innovation game.


Innocentive challenges

Innocentive is listing some new, cool challenges that relate to the social sciences and strategy:

Here are all their challenges.  And the external ones are also listed (DARPA, P&G, NASA, etc).

The Darpa Shredder Challenge was solved a few weeks ago by a small team in San Francisco, “All Your Shreds Are Belong To U.S.”


Reinventing discovery: the promise of open science

I’ve been skimming/reading through Michael Nielsen’s (pioneer in quantum computing) new (2012) book Reinventing discovery: the new era of networked science, Princeton University Press.  The book chronicles the various open science and open innovation initiatives from the past and present: Torvalds and Linux, Tim Gowers’ polymath project (see his post: is massively collaborative mathematics possible), the failed quantum wiki (qwiki) effort, Galaxy Zoo, collaborative fiction, the Sloan Digital Sky Survey (SDSS), the Open Architecture Network, Foldit, SPIRES, Paul Ginsparg’s arXiv, the Public Library of Science (PLoS), of course Innocentive, etc., etc.

My quick take on the book: it is a nice review of the existing forms that open innovation and open science are taking.  I’ve read or followed most of the above projects over the years, so the book doesn’t cover too much new territory from that perspective.  The language in the book isn’t too precise – e.g., “network” isn’t very specific (I suppose in this case it simply means the internet, broadly, and more general openness). But then again, this isn’t really an academic book (lots of great footnotes though).  Still, it is a great review of some of the existing efforts in open innovation and open science.

But beyond detailing the many instances of increased openness in science, the book touches more generally on the possibilities of “citizen science” (David Kirsch posted about citizen science on orgtheory.net, see here).  I think there are lots of interesting possibilities: funding, tapping into cognitive ‘surplus,’ perhaps gamification, and many other forms of collaboration.  And the book leaves off with some important problems for and questions about open science.  How do you get the incentives right for openness?  Who should be the gatekeepers?  What institutions are needed to support openness?  Etc.

Here’s the author speaking at Google a few weeks ago:


Fraud in the ivory tower (and a big one too)

The fraud of Diederik Stapel – professor of social psychology at Tilburg University in the Netherlands – was enormous. His list of publications was truly impressive, both in the content of the articles and in their sheer number and the prestige of the journals in which they were published: dozens of articles in all the top psychology journals in academia, with a number of them in famous general science outlets such as Science. His seemingly careful research was very thorough in its research design, and was thought to reveal many intriguing insights about fundamental human nature. The problem was, he had made it all up…

For years – so we know now – Diederik Stapel made up all his data. He would carefully review the literature, design all the studies (with his various co-authors), set up the experiments, print out all the questionnaires, and then, instead of actually running the experiments and distributing the questionnaires, make it all up. Just like that.

He finally got caught because, eventually, he did not even bother any more to really make up newly faked data. He used the same (fake) numbers for different experiments and gave those to his various PhD students to analyze, who, slaving away in their adjacent cubicles, discovered in disbelief that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap. There was no denying it any longer: Diederik Stapel was making it up. He was immediately fired by the university, admitted to his lengthy fraud, and handed back his PhD degree.

In an open letter, sent to Dutch newspapers to try to explain his actions, he cited the huge pressure he had been under to come up with interesting findings in the publish-or-perish culture of the academic world – pressure he had been unable to resist, and which led him to his extreme actions.

There are various things I find truly remarkable and puzzling about the case of Diederik Stapel.

  • The first is the sheer scale and (eventually) outright clumsiness of his fraud. It also makes me realize that there must be dozens, maybe hundreds, of others just like him. They just do it a little less and less extremely, and are probably a bit more sophisticated about it, but they’re subject to the exact same pressures and temptations as Diederik Stapel. Surely others give in to them as well. He got caught because he was flying so high, he did it so much, and so clumsily. But I am guessing that for every fraud that gets caught, due to hubris, there are at least ten others that don’t.
  • The second is that he did it at all. Of course because it is fraud, unethical, and unacceptable, but also because it seems he did not really need it. You have to realize that “getting the data” is just a very small proportion of all the skills and capabilities one needs to get published. You have to really know and understand the literature; you have to be able to carefully design an experiment, ruling out any potential statistical biases, alternative explanations, and other pitfalls; you have to be able to write it up so that it catches people’s interest and imagination; and you have to be able to see the article through the various reviewers and steps in the publication process that every prestigious academic journal operates. Those are substantial and difficult skills, all of which Diederik Stapel possessed. All he did was make up the data; something which is just a small proportion of the total set of skills required, and something he could easily have outsourced to one of his many PhD students. Sure, you then would not have had the guarantee that the experiments would come out the way you wanted them to, but who knows, they might have.
  • That’s what I find puzzling as well: at no point does he seem to have become curious whether his experiments might actually work without him making it all up. They were interesting experiments; wouldn’t you at some point be tempted to see whether they might work…?
  • I also find it truly amazing that he never stopped. It seems he has much in common with Bernard Madoff and his Ponzi scheme, or the notorious traders in investment banks, such as Nick Leeson (£827 million), who brought down Barings Bank with his massive fraudulent trades, Société Générale’s Jérôme Kerviel (€4.9 billion), and UBS’s Kweku Adoboli ($2.3 billion). The difference: Stapel could have stopped. For people like Madoff or the rogue traders, there was no way back; once they had started the fraud there was no stopping it. But Stapel could have stopped at any point. Surely at some point he must at least have considered this? I guess he was addicted; addicted to the status and aura of continued success.
  • Finally, I find it truly amazing that he was teaching the Ethics course at Tilburg University. You just can’t make that one up; that’s Dutch irony at its best.

DOE makes $150m loan conditional on matching private funds – firm folds

Here is the WIRED link: EV Startup Aptera Motors Pulls the Plug: “The company that brought us a three-wheeled, sperm-shaped two-seater shuts its doors after four years.” More here: The 190 MPG Aptera electric car that never was.

No kidding. My faith in government bureaucrats to make successful commercialization picks is, as we used to say in Nevada, lower than a snake’s belly in a wagon rut. How many Department of Something-or-Other types do you think have the slightest idea of what Porter’s Five Forces are? And that’s a 30-year-old framework in strategy. Don’t even get me started on the open invitation to political corruption that these policies tend to create.

Those who worry about our dependence on foreign oil – a worry I share, by the way – often cite historic examples of government projects that successfully developed new technologies that would never have seen the light of day (or, at best, would have seen it decades later) had the country relied on the private sector to do it.  The Manhattan Project to develop the nuclear bomb during WWII is a favorite citation.

The problem we are seeing today is that the government is presently throwing money at firms claiming they can commercialize green technologies that, in reality, have not yet passed the basic development stage. You can’t get private investors to ante up when taxpayers are shouldering half the risk? That’s a very strong signal that those who spend their lives evaluating such things believe your technology is not ready for prime time. There is a big difference between government involvement in basic technology and government involvement in its commercialization.

Now, if folks are really serious about a Manhattan Project-style effort to, say, develop an efficient electric car, then let’s do it right! Get the smartest scientists from the top schools, fence them in at a top-secret facility in the middle of some desert, and don’t let them out until they succeed. I think that might work. And, I’m pretty sure the scientists’ home institutions would go for it.

Then, turn the technology over to the VCs to compete in the commercialization stage.



Incentives to blog: the iron blogger

Benjamin Mako Hill has set up the “Iron Blogger,” an effort to precommit oneself to blogging at least once a week.  If not, you owe Mako $5, which goes into a common pool of money for get-togethers.  Here are the rules and participants (which include StrategyProfs’ very own Karim – so we can expect at least a weekly post from him).  Presumably you have to be a Bostonian to participate.

The effort has some links to Ulysses tying himself to the mast to avoid the Sirens.  For some social theory on this, check out Jon Elster’s book Ulysses and the Sirens: Studies in Rationality and Irrationality, Cambridge University Press.


Can There Be Strategy in Distributed Movements? Linux and #OWS

An ongoing research puzzle for me has been how distributed movements (open source, Wikipedia) mobilize collective action and get individual incentives and actions aligned.  Is the apparent lack of “strategy” a virtue or a vice?  For example, Linus Torvalds, founder of Linux, has argued that “brownian motion” drives Linux development:

From: Linus Torvalds
Subject: Re: Coding style – a non-issue
Date: Fri, 30 Nov 2001 16:50:34 -0800 (PST)

On Fri, 30 Nov 2001, Rik van Riel wrote:
>
> I’m very interested too, though I’ll have to agree with Larry
> that Linux really isn’t going anywhere in particular and seems
> to be making progress through sheer luck.

Hey, that’s not a bug, that’s a FEATURE!

You know what the most complex piece of engineering known to man in the
whole solar system is?

Guess what – it’s not Linux, it’s not Solaris, and it’s not your car.

It’s you. And me.

And think about how you and me actually came about – not through any
complex design.

Right. “sheer luck”.

Well, sheer luck, AND:
– free availability and _crosspollination_ through sharing of “source
code”, although biologists call it DNA.
– a rather unforgiving user environment, that happily replaces bad
versions of us with better working versions and thus culls the herd
(biologists often call this “survival of the fittest”)
– massive undirected parallel development (“trial and error”)

I’m deadly serious: we humans have _never_ been able to replicate
something more complicated than what we ourselves are, yet natural
selection did it without even thinking.

<….later in thread…>

A strong vision and a sure hand sound like good things on paper. It’s just
that I have never _ever_ met a technical person (including me) whom I
would trust to know what is really the right thing to do in the long run.

Too strong a strong vision can kill you – you’ll walk right over the edge,
firm in the knowledge of the path in front of you.

I’d much rather have “brownian motion”, where a lot of microscopic
directed improvements end up pushing the system slowly in a direction that
none of the individual developers really had the vision to see on their
own.

And I’m a firm believer that in order for this to work _well_, you have to
have a development group that is fairly strange and random.

To get back to the original claim – where Larry idolizes the Sun
engineering team for their singlemindedness and strict control – and the
claim that Linux seems ot get better “by luck”: I really believe this is
important.

The problem with “singlemindedness and strict control” (or “design”) is
that it sure gets you from point A to point B in a much straighter line,
and with less expenditure of energy, but how the HELL are you going to
consistently know where you actually want to end up? It’s not like we know
that B is our final destination.

In fact, most developers don’t know even what the right _intermediate_
destinations are, much less the final one. And having somebody who shows
you the “one true path” may be very nice for getting a project done, but I
have this strong belief that while the “one true path” sometimes ends up
being the right one (and with an intelligent leader it may _mostly_ be the
right one), every once in a while it’s definitely the wrong thing to do.

And if you only walk in single file, and in the same direction, you only
need to make one mistake to die.

In contrast, if you walk in all directions at once, and kind of feel your
way around, you may not get to the point you _thought_ you wanted, but you
never make really bad mistakes, because you always ended up having to
satisfy a lot of _different_ opinions. You get a more balanced system.

So the question for me has been whether this is just an accidental feature of a distributed movement, or whether we can actually drive collective action this way.
The recent emergence of #OWS provides an interesting case study unfolding in real time. Fast Company has a nice entry about how the movement came about:

And not posting clear demands, while essentially a failing, has unintended virtue. Anyone who is at all frustrated with the economy–perhaps even 99% of Americans–can feel that this protest is their own.

So is this the way to develop strategy?


Who are the top one percent?

Interesting podcast at Econtalk on wealth, income distribution etc – “Kaplan on inequality and the top 1%.”   Much of the Roberts and Kaplan discussion focuses on this paper (pdf) – “Wall Street and Main Street: What Contributes to the Rise in the Highest Incomes.”

Paul Krugman makes a different argument – “Oligarchy, American Style.”

This is obviously a heated debate – as illustrated by Freek’s post (also see the comments).


What’s wrong with senior executive pay? – lots (in my view)

There are three things I do not like about top management pay: 1) they usually get paid too much, 2) way too large a part is flexible, performance-related pay, 3) often, a very sizeable chunk of it is paid through stock options. 

I used to think – naively – that top management pay was high simply due to supply and demand: these smart people with lots of business acumen and experience are hard to come by; therefore you have to pay them lots. These grumpy anti-corporates claiming their pay is too high are just envious and naive. Turns out I was (maybe not envious, but certainly naive).

Pay level

Because, digging into the rigorous research on the topic – and there is quite a bit of it – I learned that there is really not much of a relationship between firm performance and top management pay. These guys (mostly guys) get paid a lot whether or not their company’s performance is any good. Moreover, I learned what sort of factors push up top managers’ remuneration – and it ain’t supply and demand. It has much more to do with selecting the right company directors (to serve on your remuneration committee) and making sure you are well networked and socialized into the business elite.* Now I have to conclude: top management pay is generally too high, and quite a bit too high.

Flexible pay

Secondly: where does this absurd idea come from that 80+ percent of these guys’ remuneration has to be performance-related?! “To reward them for good performance and stimulate them to act in the best interest of the company and its shareholders,” you might say? To which I would reply, “oh, come on!?” If your CEO is the type of guy who needs 90 percent performance-related pay or otherwise he won’t act in the best interest of the company, I would say the perfect time to get rid of him was yesterday. You and I do not need 90 percent performance-related pay to do our best, do we? So why would that hold for top managers? As Henry Mintzberg put it: “Real leaders don’t take bonuses.”

Moreover, you should only pay performance-related remuneration if you can actually measure the person’s performance. And that is – especially for top managers – pretty darn hard to do. The strategic decisions one takes this year will often only be felt 5 or 10 years from now, if not longer. Moreover, the performance of the company – which we always take as a proxy for the CEO’s performance – is influenced by a whole bunch of other things, many not under a CEO’s control. Hence, short-term financial performance figures are a terrible indicator of a top manager’s performance in the job, and long-term performance contracts are all but impossible to specify. If you can’t reliably measure performance, don’t have performance-related pay, and certainly not 80+ percent of it. We know from ample research that humans start manipulating their performance when you tie their remuneration to some strange metric and, guess what, CEOs are pretty human (at least in that respect); they do too.

Options

Finally: stock options… Once again, I have to say “oh, come on…”. We pretty much take for granted that we pay top managers by awarding them options, but we no longer quite realize why. When I ask my students or the executives in my lecture room this question (“why do we actually pay them in options…?”), a stunned silence usually follows, after which someone mumbles “because they are cheap to hand out…?”. I usually try to remain polite after such an answer, but why would they be cheap – cheaper than cash, or shares for that matter? True, it does not cost you anything out of pocket if you give someone an option to buy shares for, say, 100 one year from now while your present share price is 90, but if the share price by that time is 150, it does cost you 50. Moreover, you could have sold that stock option to someone who would have happily paid you good money for it, so in terms of opportunity costs it is real money too. No, stock options are not cheaper than cash, shares, or whatever.

We give them options to stimulate them to take more risk. “Risk?! We want them to take more risk?!” you might think. Yes, that’s what you are doing if you give them options. If the share price is 150 at the time the option expires, the CEO can buy the shares at 100 and thus make 50. However, if the share price is 90, the option is worthless and the CEO does not make anything. The trick is that the CEO then does not care whether the share price is 90 or, say, 50 – in either case he does not make any money; worthless is worthless. As a consequence, when his options (i.e. the right to buy shares at 100) are about to expire and the company’s share price is still 90, he has a great incentive to quickly take a massive amount of risk. Going to a roulette table would already be a rational thing to do.

If you place the company’s capital on red and the ball hits red, the share price may jump from 90 to 130, and suddenly your options are worth a lot of money (130 – 100 = 30 per share, to be precise). But if your bet fails, the ball hits black, and you lose a ton of money – who cares; the share price may fall from 90 to 50, but your options were worthless anyway. Hence, options give a top manager the upside risk, as we say, but not the downside risk. Therefore, we incentivize them to take risk. You might think, “I seldom see herds of CEOs in a casino by the time options expire, so this grumpy Vermeulen guy must be exaggerating,” but we have seen quite a lot of casino-type strategy in various businesses lately (e.g. banks). More importantly, we know from research that CEOs do take excessive risk due to stock options (see for instance Sanders and Hambrick, 2007; Zhang et al., 2008). It would be naive to give CEOs 90 percent performance-related pay, most of it in stock options, and then expect them not to act in the way the remuneration system stimulates them to. Of course it influences their decisions – and if it didn’t, there would be no reason left to make their pay flexible and based on options, now would there?
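The asymmetry is easy to check with the (hypothetical) numbers from the example above: a coin-flip gamble that leaves the *expected share price* completely unchanged still raises the *expected option payoff*, which is exactly why the option-holding CEO is tempted by the roulette table while shareholders are not. A minimal sketch:

```python
def option_payoff(share_price, strike):
    """Value of a call option at expiry: all upside, never negative."""
    return max(share_price - strike, 0)

strike = 100       # the CEO may buy shares at 100
status_quo = 90    # current share price: the option expires worthless

# Casino strategy: a 50/50 bet that sends the share price to 130 or to 50.
# Expected share price is unchanged: 0.5*130 + 0.5*50 = 90 ...
expected_price = 0.5 * 130 + 0.5 * 50

# ... but the expected option payoff rises from 0 to 15, because the
# CEO keeps the upside (130 - 100 = 30) and ignores the downside (0).
payoff_no_gamble = option_payoff(status_quo, strike)
expected_payoff_gamble = 0.5 * option_payoff(130, strike) + 0.5 * option_payoff(50, strike)

print(expected_price)           # 90.0 -- shareholders gain nothing on average
print(payoff_no_gamble)         # 0    -- option value without the gamble
print(expected_payoff_gamble)   # 15.0 -- option value with the gamble
```

In other words, the gamble is a pure transfer of risk from the CEO (who holds only the upside) to the shareholders (who hold both sides).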

Therefore, I would say, out with the performance-related pay for top managers (a good bottle of wine at Christmas and, if you insist, a small cheque like the rest of us would do). And while we’re at it, let’s try to reduce the level as well.

* e.g. O’Reilly, Main, and Crystal, 1988; Porac, Wade, and Pollock, 1999; Westphal and Zajac, 1995.


Psyched Out Strategy: What is a firm?

Glenn Hoetker recently gave me the opportunity to consider what new contributions the field of psychology could offer to the strategy literature (see the description here). The video illustrates how behavior often depends more on perception than on reality — does it matter if the steering wheel is attached or not if the other driver acts as if it is? Often, researchers are interested in organizational outcomes and theorize that the underlying behaviors are driven by objective reality. What research opportunities are highlighted as we take seriously the subjective nature of our most central constructs?

In this installment, we explore the question, “what is a firm?” This is so taken for granted in the field that most of you will probably stop reading here. Read the rest of this entry »


Time-critical social mobilization

The most recent issue of Science has a very practical and interesting piece on time-critical social mobilization (here’s the non-gated arXiv version).

The article recounts the winning team’s strategy in the DARPA Network Challenge, in which 10 red weather balloons were placed in locations throughout the US. The winning MIT team found them all in less than 9 hours: check out their use of the web, tweets, incentives ($40,000), etc.

In terms of incentives, the MIT team used the promised prize money: $4,000 for each of the 10 balloons. $2,000 per balloon was promised to the first person sending in the balloon’s coordinates, $1,000 to the person who recruited the finder onto the team, $500 to whoever invited the inviter, $250 to whoever invited that person, etc.

Here are some of the other strategies:

  • The second team, from Georgia Tech, used an altruism-based approach (the money would be donated to the Red Cross) – they found nine of the ten balloons.
  • George Hotz, a Twitter celebrity, recruited his followers – he found eight of the ten balloons.

Check out the paper for additional details (lots of cool stuff on networks, recruitment, etc).

Here’s the abstract:

The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web’s ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.

Here’s where the balloons were located: