Motivation Trumps IQ. What now?
Posted: September 6, 2012 Filed under: cognition, competitive advantage, human capital, incentives, Uncategorized 5 Comments
An article recently posted in Slate reviews research showing that a significant portion of the variation in IQ tests is attributable to motivation rather than ability. In one striking study, researchers measured children’s IQs and split them into High, Average, and Low groups. They then reran the test, offering the Low group an M&M for every correct answer. As a result of this simple incentive, the Low group’s score went from 79 to 97 – on par with the Average group.
Ok, so incentives work. Perhaps not a big surprise on many levels.
On the other hand, there is a large OB/HRM literature invested in the conclusion that performance increases are associated with hiring employees with a higher IQ. The assumption there is that IQ measures ability as opposed to motivation.
This raises a critical question for strategy scholars. Is motivation an immutable attribute of human capital? Read the rest of this entry »
Healthcare Price Control Shuffle…
Posted: September 1, 2012 Filed under: economics, Healthcare, incentives, law and society 3 Comments
Romney and Ryan have incorrectly characterized Obamacare as a “Raid on Medicare,” and news organizations and the Obama campaign have fired back that it is actually a program to reduce healthcare costs — an important achievement of the administration. This whole discussion misses the fundamental point that the $716 billion in savings would be the result of mandated price controls. Given that this is a major intervention, it is important to understand how these altered incentives will affect the U.S. healthcare system.
Medicare currently pays providers 30% less than private insurers and Obamacare will further reduce that to save $716 billion in payments to providers (hospitals, doctors, etc.). At the same time, broader coverage (another goal of the new law) will undoubtedly increase demand for services. How will these effects play out?
We already know that some providers are less willing to accept Medicare Read the rest of this entry »
Individual Bias and Collective Truth?
Posted: May 30, 2012 Filed under: cognition, collective behavior, incentives, philosophy, psychology, research, Uncategorized Leave a comment
Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another, regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
Fungibility v. Fetishes
Posted: May 28, 2012 Filed under: behavioral economics, cognition, economics, incentives, research 3 Comments
For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don’t seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent “optical illusion” causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the “value-based” approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.
Some examples:
- There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles address the question of whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus that they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly. There is a pervasive confusion between the form of payments and the total size of payments.
Crowd-sourcing Strategy Formulation
Posted: May 10, 2012 Filed under: collective behavior, competitive advantage, current events, entrepreneurship, incentives, innovation, networks, open innovation, organization design, research, Stakeholders, technology 24 Comments
The current issue of McKinsey Quarterly features an interesting article on firms crowd-sourcing strategy formulation. This is another way that technology may shake up the strategy field (see also Mike’s discussion of the MBA bubble). The article describes examples in a variety of companies. Some, like Wikimedia and Redhat, aren’t much of a surprise given their open innovation focus. However, we should probably take notice when more traditional companies (like 3M, HCL Technologies, and Rite-Solutions) use social media in this way. For example, Rite-Solutions, a software provider for the US Navy, defense contractors, and fire departments, created an internal market for strategic initiatives:
Would-be entrepreneurs at Rite-Solutions can launch “IPOs” by preparing an Expect-Us (rather than a prospectus)—a document that outlines the value creation potential of the new idea … Each new stock debuts at $10, and every employee gets $10,000 in play money to invest in the virtual idea market and thereby establish a personal intellectual portfolio Read the rest of this entry »
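To make the mechanics concrete, here is a minimal sketch of how such an internal idea market could be wired up. Only the $10 debut price and the $10,000 play-money budgets come from the article; the class, the method names, the price rule (price rising with cumulative investment), and the example idea are my own illustrative assumptions, not Rite-Solutions’ actual system.

import operator

class IdeaMarket:
    # Toy internal idea market: employees spend play money on "IPO'd" ideas.
    DEBUT_PRICE = 10.0
    STARTING_BUDGET = 10_000.0

    def __init__(self):
        self.invested = {}   # idea -> total play money invested so far
        self.budgets = {}    # employee -> remaining play money

    def ipo(self, idea):
        # a new idea debuts on the market
        self.invested.setdefault(idea, 0.0)

    def join(self, employee):
        # every employee starts with the same play-money budget
        self.budgets.setdefault(employee, self.STARTING_BUDGET)

    def invest(self, employee, idea, amount):
        if self.budgets.get(employee, 0.0) < amount:
            raise ValueError("not enough play money")
        self.budgets[employee] -= amount
        self.invested[idea] = self.invested.get(idea, 0.0) + amount

    def price(self, idea):
        # assumed pricing rule: price rises with cumulative investment
        return self.DEBUT_PRICE + self.invested[idea] / 100.0

    def ranking(self):
        # which strategic initiatives the internal crowd backs most
        return sorted(self.invested.items(), key=operator.itemgetter(1), reverse=True)

m = IdeaMarket()
m.ipo("hypothetical idea: training sims for fire departments")
m.join("alice"); m.join("bob")
m.invest("alice", "hypothetical idea: training sims for fire departments", 2_000)
print(m.price("hypothetical idea: training sims for fire departments"))   # 30.0
print(m.ranking())

The interesting design question, which the article only hints at, is how the firm converts the resulting prices and rankings into actual resource-allocation decisions.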
Sleeping With the Enemy Part II
Posted: April 5, 2012 Filed under: Corporate strategy, economics, incentives, networks, Vertical integration 8 Comments
In an earlier post, I noted Target’s costly decision to end its on-line outsourcing arrangement with Amazon’s cloud service and take all its work in-house. The short-term costs were considerable, both in direct outlays and in performance degradation, and the long-term benefits were hard to pin down. Vague paranoia rather than careful analysis seemed to have driven the decision. I pointed out that firms often seemed unwilling to “sleep with the enemy,” i.e. purchase critical inputs from a direct rival, but the case for such reluctance was weak.
A few months ago, an apparent counterexample popped up. Swatch, the Swiss wristwatch giant, decided unilaterally to cease supplying mechanical watch assemblies to a host of competing domestic brands that are completely dependent on Swatch for these key components. These competitors (including Constant, LVMH, and Chanel) sued, fruitlessly, to force Swatch to continue to sell to them. The Swiss Federal Administrative Court backed up a deal Swatch cut with the Swiss competition authorities that allows Swatch to begin reducing its shipments to rivals. The competition authority will report later this year on how much grace time Swatch’s customers must be given to find new sources of supply, and these customers may appeal to the highest Swiss court. For now, Swatch’s customers are scrambling for alternative sources of supply in order to stay in business. The stakes are especially high because overall business is booming, with lots of demand in Asia.
How do you grow a capability?
Posted: March 13, 2012 Filed under: collective behavior, human capital, incentives, organization design, psychology, rants, theory of the firm 3 Comments
The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis) and little agreement even on a basic definition. I don’t really like to get involved in definitional debates, though I think the idea of a capability, the ability to do/accomplish something (whether individual or collective), is fundamental for strategy scholars.
Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt). One of the questions that I raised, and find quite intriguing, is the question of how we might “grow” a capability. The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling. For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference). I like that intuition. As I work with colleagues in engineering and computer science, this “growth” mentality seems to implicitly be there. Things are not taken for granted, but explained by “growing” them. Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically. What were the choices that led to this history? Who are the central actors? What are the incentives and forms of governance? Etc.
So, if we were to “grow” a capability, I think there are some very basic ingredients. First, I think understanding the nature, capability and choices of the individuals involved is important. Second, the nature of the interactions and aggregation matters. The interaction of individuals and actors can lead to emergent, non-linear and collective outcomes. Third, I think the structural and design-related choices (e.g., markets versus hierarchy) and factors are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”
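As a toy illustration of these ingredients – heterogeneous individuals, pairwise interaction, and a structural “governance” knob – here is a minimal agent-based sketch in Python. It is purely hypothetical (the learning rule and parameter names are mine, not a model from the capabilities literature), but it shows what it means to “grow” a collective capability from micro-level choices rather than assert it.

import random

def grow_capability(n_agents=50, rounds=200, learning_rate=0.3, seed=1):
    # Individuals start with heterogeneous skill; random pairwise interactions
    # let skill diffuse; learning_rate is a crude stand-in for governance/design
    # choices about how much of each interaction is retained.
    random.seed(seed)
    skills = [random.random() for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)          # who interacts with whom
        low, high = sorted((i, j), key=lambda k: skills[k])
        # the less-skilled agent closes part of the gap to the more-skilled one
        skills[low] += learning_rate * (skills[high] - skills[low])
    # read the collective "capability" off the micro-level state
    return min(skills), sum(skills) / n_agents

print(grow_capability(learning_rate=0.05))   # weak diffusion: little capability emerges
print(grow_capability(learning_rate=0.50))   # strong diffusion: the collective converges upward

Holding the individuals fixed and varying only the governance parameter produces very different collective outcomes – which is exactly the kind of unpacking the “grow it” intuition asks for, in place of the short-hand appeal to “history” or “experience.”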
I’m not sure that the “how do you grow a capability”-intuition is helpful in all situations. However, I do find that there is a tendency to use short-hand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and to more carefully investigate the constituent parts, mechanisms and interactions that lead to the development or “growth” of capability.
Manipulating journal impact factors
Posted: February 28, 2012 Filed under: ethics, incentives, research 3 Comments
Via several folks on Facebook (e.g., Marcel Bogers, Der Chao Chen) – here’s a short blog post on how journals are manipulating their impact factors: coerced citations and manipulated impact factors – dirty tricks of academic journals.
Here’s the Science piece that the above blog post refers to: Coercive citation in academic publishing. Here’s the data (pdf).
Duly Noted: Solving the Principal-Agent Problem in Firms: The dumbest idea in the world?
Posted: December 30, 2011 Filed under: are you kidding me?, competitive advantage, Corporate strategy, economics, game theory, incentives 6 Comments
This article in Forbes argues that a new book by the Dean of the Rotman School provides an antidote to the rampant excesses of modern day capitalism. The principal swipe is against the landmark paper (over 29,000 Google Scholar citations) by Jensen and Meckling on both the prevalence of the principal-agent problem in the governance of firms and the various solutions to overcome it – including creating incentives that maximize shareholder value. Quoting Jack Welch, former CEO of GE, the article says that maximizing shareholder value is the dumbest idea in the world. I myself am not sure this is THE dumbest idea in the world – in fact there are many more that would easily surpass P-A problem resolution – but I am sure this will ignite a debate about why firms exist, what the best governance mechanism for them is, and the role of economic theory and action in our lives. I for one need to go back and read the article and then read the book.
Actually, small companies are better at innovation than large companies
Posted: December 20, 2011 Filed under: human capital, incentives, innovation, productivity 3 Comments
Grant McCracken summarizes an Economist post that argues that big companies are better at innovation than small ones (well, he discusses both sides).
But theory says that small companies are actually the winners.
Economists have long wrestled with this “diseconomies” problem: why do smaller organizations outperform large ones? (Todd Zenger’s 1992 Management Science piece summarizes this work nicely.) Schumpeter indeed went both ways on this (Dick Langlois discusses the “two Schumpeters” thesis a bit here). But yes, large organizations seemingly have the resources, complementary assets, access to talent, etc., to outperform small organizations. But small organizations still outperform large ones.
Large organizations have lots of problems (I’ll spare the references, for now). They
- mis-specify incentives,
- suffer from problems of social loafing (free-rider problem),
- engage in unnecessary intervention, etc.
And, if large organizations had such an advantage, why not take this argument to the extreme and simply organize everything under one large firm? That, of course, was one of Coase’s central questions. Obviously the organization-market boundary matters and there are costs associated with hierarchy.
Sure – there are lots of contingencies, caveats and exceptions [insert example from Apple or 3M]. And, definitions matter [what exactly is “small” versus “large”]. But on the whole, the theory says small companies win in the innovation game.
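As a back-of-the-envelope illustration of the social-loafing/free-rider point above, consider the standard 1/N logic: each member bears the full cost of her own effort but captures only 1/N of the benefit, so privately optimal effort shrinks as the organization grows. A minimal sketch (the quadratic cost and the specific parameters are illustrative textbook assumptions, not data):

def equilibrium_effort(team_size, marginal_benefit=1.0, cost_coeff=1.0):
    # Each unit of effort creates marginal_benefit for the team, split equally
    # across team_size members, while the quadratic effort cost c*e^2/2 is
    # borne privately. Equating private marginal benefit and marginal cost
    # (b / N = c * e) gives e* = b / (c * N): effort falls as 1/N.
    return marginal_benefit / (cost_coeff * team_size)

for n in (1, 5, 50, 500):
    print(n, equilibrium_effort(n))
# prints: 1 1.0 / 5 0.2 / 50 0.02 / 500 0.002 – per-person effort collapses as the firm grows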
Innocentive challenges
Posted: December 16, 2011 Filed under: incentives, innovation 1 Comment
Innocentive is listing some new, cool challenges that relate to the social sciences and strategy:
- Describe large-scale uses of human-machine teamwork
- Models motivating and supporting altruism within communities
- The economist-innocentive transparency challenge
Here are all their challenges. And the external ones are also listed (DARPA, P&G, NASA, etc).
The DARPA Shredder Challenge was solved a few weeks ago by a small team in San Francisco, “All Your Shreds Are Belong To U.S.”
Reinventing discovery: the promise of open science
Posted: December 6, 2011 Filed under: cognition, collective behavior, incentives, innovation, open innovation 2 Comments
I’ve been skimming/reading through Michael Nielsen’s (pioneer in quantum computing) new (2012) book Reinventing Discovery: The New Era of Networked Science, Princeton University Press. The book chronicles various open science and open innovation initiatives from the past and present: Torvalds and Linux, Tim Gowers’ polymath project (see his post: is massively collaborative mathematics possible), the failed quantum wiki (qwiki) effort, Galaxy Zoo, collaborative fiction, the Sloan Digital Sky Survey (SDSS), the Open Architecture Network, Foldit, SPIRES, Paul Ginsparg’s arXiv, the Public Library of Science (PLoS), of course Innocentive, etc., etc.
My quick take on the book – it is a nice review of the existing forms that open innovation and open science are taking. I’ve read or followed most of the above projects over the years so the book doesn’t cover too much new territory from that perspective. The language in the book isn’t too precise –e.g., “network” isn’t very specific (I suppose in this case it simply means internet, broadly, and more general openness). But then again, this isn’t really an academic book (lots of great footnotes though). But the book is a great review of some of the existing efforts in open innovation and open science.
But beyond detailing the many instances of increased openness in science, the book touches more generally on the possibilities of “citizen science” (David Kirsch posted about citizen science on orgtheory.net, see here). I think there are lots of interesting possibilities: funding, tapping into cognitive ‘surplus,’ perhaps gamification, and many other forms of collaboration. And the book leaves off with some important problems for and questions about open science. How do you get the incentives right for openness? Who should be the gatekeepers? What institutions are needed to support openness? Etc.
Here’s the author speaking at Google a few weeks ago:
Fraud in the ivory tower (and a big one too)
Posted: December 5, 2011 Filed under: business school, ethics, incentives 14 Comments
The fraud of Diederik Stapel – professor of social psychology at Tilburg University in the Netherlands – was enormous. His list of publications was truly impressive, both in terms of the content of the articles and in terms of their sheer number and the prestige of the journals in which they were published: dozens of articles in all the top psychology journals in academia, with a number of them in famous general science outlets such as Science. His seemingly careful research was very thorough in its research design and was thought to reveal many intriguing insights about fundamental human nature. The problem was, he had made it all up…
For years – so we know now – Diederik Stapel made up all his data. He would carefully review the literature, design all the studies (with his various co-authors), set up the experiments, print out all the questionnaires, and then, instead of actually running the experiments and distributing the questionnaires, he simply made it all up. Just like that.
He finally got caught because, eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments and gave those to his various PhD students to analyze. The students, slaving away in their adjacent cubicles, discovered in disbelief that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap. There was no denying it any longer; Diederik Stapel was making it up. He was immediately fired by the university, admitted to his lengthy fraud, and handed back his PhD degree.
In an open letter, sent to Dutch newspapers to try to explain his actions, he cited the huge pressure he had been under to come up with interesting findings in the publish-or-perish culture that exists in the academic world – pressure he had been unable to resist, and which led him to his extreme actions.
There are various things I find truly remarkable and puzzling about the case of Diederik Stapel.
- The first one is the sheer scale and (eventually) outright clumsiness of his fraud. It also makes me realize that there must be dozens, maybe hundreds, of others just like him. They just do it a bit less often and less extremely, and are probably a bit more sophisticated about it, but they’re subject to the exact same pressures and temptations as Diederik Stapel. Surely others give in to them as well. He got caught because he was flying so high, he did it so much, and so clumsily. But I am guessing that for every fraud that gets caught, due to hubris, there are at least ten others that don’t.
- The second one is that he did it at all. Of course because it is fraud, unethical, and unacceptable, but also because it sort of seems he did not really need it. You have to realize that “getting the data” is just a very small proportion of all the skills and capabilities one needs to get published. You have to really know and understand the literature; you have to be able to carefully design an experiment, ruling out any potential statistical biases, alternative explanations, and other pitfalls; you have to be able to write it up so that it catches people’s interest and imagination; and you have to be able to see the article through the various reviewers and steps in the publication process that every prestigious academic journal operates. Those are substantial and difficult skills, all of which Diederik Stapel possessed. All he did was make up the data – just a small part of the total set of skills required, and something he could easily have outsourced to one of his many PhD students. Sure, you then would not have had the guarantee that the experiments would come out the way you wanted, but who knows, they might have.
- That’s what I find puzzling as well; that at no point he seems to have become curious whether his experiments might actually work without him making it all up. They were interesting experiments; wouldn’t you at some point be tempted to see whether they might work…?
- I also find it truly amazing that he never stopped. It seems he has much in common with Bernard Madoff and his Ponzi scheme, or the notorious rogue traders at investment banks, such as Nick Leeson (£827 million), who brought down Barings Bank with his massive fraudulent trades, Societe Generale’s Jerome Kerviel (€4.9 billion), and UBS’s Kweku Adoboli ($2.3 billion). The difference: Stapel could have stopped. For people like Madoff or the rogue traders, there was no way back; once they had started the fraud there was no stopping it. But Stapel could have stopped at any point. Surely at some point he must at least have considered this? I guess he was addicted – addicted to the status and aura of continued success.
- Finally, what I find truly amazing is that he was teaching the Ethics course at Tilburg University. You just don’t make that one up; that’s Dutch irony at its best.
DOE makes $150m loan conditional on matching private funds – firm folds
Posted: December 3, 2011 Filed under: are you kidding me?, current events, entrepreneurship, incentives, innovation, open innovation, rants 2 Comments
Here is the WIRED link: EV Startup Aptera Motors Pulls the Plug: “The company that brought us a three-wheeled sperm-shaped two-seater shuts its doors after four years.” More here: The 190 MPG Aptera electric car that never was.
No kidding. My faith in government bureaucrats to make successful commercialization picks is, as we used to say in Nevada, lower than a snake’s belly in a wagon rut. How many Department of Something-or-Other types do you think have the slightest idea of what Porter’s Five Forces are? And that’s a 30-year-old framework in strategy. Don’t even get me started on the open invitation to political corruption that these policies tend to create.
Those who worry about our dependence on foreign oil – a worry I share, by the way – often cite historic examples of government projects that successfully developed new technologies that would never have seen the light of day (or, at best, would have seen it decades later) had the country relied on the private sector to do it. The Manhattan Project to develop the nuclear bomb during WWII is a favorite citation.
The problem we are seeing today is that the government is presently throwing money at firms claiming they can commercialize green technologies that, in reality, have not yet passed the basic development stage. You can’t get private investors to ante up when taxpayers are shouldering half the risk? That’s a very strong signal that those who spend their lives evaluating such things believe your technology is not ready for prime time. There is a big difference between government involvement in basic technology and government involvement in its commercialization.
Now, if folks are really serious about a Manhattan Project-style effort to, say, develop an efficient electric car, then let’s do it right! Get the smartest scientists from the top schools, fence them in at a top-secret facility in the middle of some desert, and don’t let them out until they succeed. I think that might work. And, I’m pretty sure the scientists’ home institutions would go for it.
Then, turn the technology over to the VCs to compete in the commercialization stage.
Incentives to blog: the iron blogger
Posted: November 19, 2011 Filed under: incentives 1 Comment
Benjamin Mako Hill has set up the “Iron Blogger.” The Iron Blogger is an effort to precommit oneself to blogging at least once a week. If you don’t, you owe Mako $5, which goes into a common pool of money for get-togethers. Here are the rules and participants (which include StrategyProfs’ very own Karim – so we can expect at least a weekly post from him). Presumably you have to be a Bostonian to participate.
The effort has some links to Ulysses tying himself to the mast to avoid the Sirens. For some social theory on this, check out Jon Elster’s book Ulysses and the Sirens: Studies in Rationality and Irrationality, Cambridge University Press.
Can There Be Strategy in Distributed Movements? Linux and #OWS
Posted: November 19, 2011 Filed under: collective behavior, current events, incentives, innovation 6 Comments
An ongoing research puzzle for me has been how distributed movements – open source, Wikipedia – mobilize collective action and get individual incentives and actions aligned. Is the apparent lack of “strategy” a virtue or a vice? For example, Linus Torvalds, founder of Linux, has argued that “brownian motion” drives Linux development:
From: Linus Torvalds
Subject: Re: Coding style – a non-issue
Date: Fri, 30 Nov 2001 16:50:34 -0800 (PST)

On Fri, 30 Nov 2001, Rik van Riel wrote:
>
> I’m very interested too, though I’ll have to agree with Larry
> that Linux really isn’t going anywhere in particular and seems
> to be making progress through sheer luck.

Hey, that’s not a bug, that’s a FEATURE!

You know what the most complex piece of engineering known to man in the whole solar system is?

Guess what – it’s not Linux, it’s not Solaris, and it’s not your car.

It’s you. And me.

And think about how you and me actually came about – not through any complex design.

Right. “sheer luck”.

Well, sheer luck, AND:
– free availability and _crosspollination_ through sharing of “source code”, although biologists call it DNA.
– a rather unforgiving user environment, that happily replaces bad versions of us with better working versions and thus culls the herd (biologists often call this “survival of the fittest”)
– massive undirected parallel development (“trial and error”)

I’m deadly serious: we humans have _never_ been able to replicate something more complicated than what we ourselves are, yet natural selection did it without even thinking.

<….later in thread…>

A strong vision and a sure hand sound like good things on paper. It’s just that I have never _ever_ met a technical person (including me) whom I would trust to know what is really the right thing to do in the long run.

Too strong a strong vision can kill you – you’ll walk right over the edge, firm in the knowledge of the path in front of you.

I’d much rather have “brownian motion”, where a lot of microscopic directed improvements end up pushing the system slowly in a direction that none of the individual developers really had the vision to see on their own.

And I’m a firm believer that in order for this to work _well_, you have to have a development group that is fairly strange and random.

To get back to the original claim – where Larry idolizes the Sun engineering team for their singlemindedness and strict control – and the claim that Linux seems to get better “by luck”: I really believe this is important.

The problem with “singlemindedness and strict control” (or “design”) is that it sure gets you from point A to point B in a much straighter line, and with less expenditure of energy, but how the HELL are you going to consistently know where you actually want to end up? It’s not like we know that B is our final destination.

In fact, most developers don’t know even what the right _intermediate_ destinations are, much less the final one. And having somebody who shows you the “one true path” may be very nice for getting a project done, but I have this strong belief that while the “one true path” sometimes ends up being the right one (and with an intelligent leader it may _mostly_ be the right one), every once in a while it’s definitely the wrong thing to do.

And if you only walk in single file, and in the same direction, you only need to make one mistake to die.

In contrast, if you walk in all directions at once, and kind of feel your way around, you may not get to the point you _thought_ you wanted, but you never make really bad mistakes, because you always ended up having to satisfy a lot of _different_ opinions. You get a more balanced system.
So the question for me has been whether this is just an accidental feature of a distributed movement, or whether we can actually drive collective action this way.
The recent emergence of #OWS provides an interesting case study unfolding in real time. Fast Company has a nice entry about how the movement came about:
And not posting clear demands, while essentially a failing, has unintended virtue. Anyone who is at all frustrated with the economy–perhaps even 99% of Americans–can feel that this protest is their own.
So is this the way to develop strategy?
Who are the top one percent?
Posted: November 16, 2011 Filed under: human capital, incentives Leave a comment
Interesting podcast at Econtalk on wealth, income distribution, etc. – “Kaplan on inequality and the top 1%.” Much of the Roberts and Kaplan discussion focuses on this paper (pdf) – “Wall Street and Main Street: What Contributes to the Rise in the Highest Incomes.”
Paul Krugman makes a different argument – “Oligarchy, American Style.”
This is obviously a heated debate – as illustrated by Freek’s post (also see the comments).
Psyched Out Strategy: What is a firm?
Posted: November 10, 2011 Filed under: cognition, incentives, organization design, psychology, theory of the firm 5 Comments
Glenn Hoetker recently gave me the opportunity to consider what new contributions the field of psychology could offer to the strategy literature (see the description here). The video illustrates how behavior often depends more on perception than on reality — does it matter if the steering wheel is attached or not if the other driver acts as if it is? Often, researchers are interested in organizational outcomes and theorize that the underlying behaviors are driven by objective reality. What research opportunities are highlighted as we take seriously the subjective nature of our most central constructs?
In this installment, we explore the question, “what is a firm?” This is so taken for granted in the field that most of you will probably stop reading here. Read the rest of this entry »
Time-critical social mobilization
Posted: November 4, 2011 Filed under: cognition, collective behavior, incentives 2 Comments
The most recent issue of Science has a very practical and interesting piece on time-critical social mobilization (here’s the non-gated arXiv version).
The article recounts the winning team’s strategy in the DARPA Network Challenge – a challenge in which 10 red weather balloons were placed at locations throughout the US. The winning MIT team found them all in less than 9 hours: check out their use of the web, tweets, and incentives ($40,000 in prize money), etc.
In terms of incentives, the MIT team used the promised prize money — $4,000 for each of the 10 balloons: $2,000 per balloon was promised to the first person sending in the balloon’s coordinates, $1,000 to the person who recruited the finder onto the team, $500 to whoever invited the inviter, $250 to whoever invited that person, and so on.
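The payout rule is just a geometric halving up the referral chain, which is also why the payments per balloon can never exceed the $4,000 budget (2,000 + 1,000 + 500 + 250 + … < 4,000). Here is a minimal sketch of that rule in Python – the dollar amounts follow the scheme described above, while the function and variable names are my own illustration:

def balloon_payouts(referral_chain, finder_reward=2000.0):
    # referral_chain lists people from the balloon finder upward:
    # [finder, finder's recruiter, recruiter's recruiter, ...].
    # The finder gets $2,000 and each person up the chain gets half
    # of what the person below them received.
    payouts = {}
    reward = finder_reward
    for person in referral_chain:
        payouts[person] = reward
        reward /= 2.0
    return payouts

# Example: Dana found a balloon; Carol recruited Dana, Bob recruited Carol,
# and Alice recruited Bob.
print(balloon_payouts(["Dana", "Carol", "Bob", "Alice"]))
# {'Dana': 2000.0, 'Carol': 1000.0, 'Bob': 500.0, 'Alice': 250.0}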
Here are some of the other strategies:
- The second team, from Georgia Tech, used an altruism-based approach (the money would be donated to the Red Cross) – they found nine of the ten balloons.
- George Hotz, a Twitter celebrity, recruited his followers – he found eight of the ten balloons.
Check out the paper for additional details (lots of cool stuff on networks, recruitment, etc).
Here’s the abstract:
The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web’s ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.
Here’s where the balloons were located: