A review of George Szpiro’s 2011 book on the history of the Black-Scholes option-pricing formula uses Southwest Airlines’ famous fuel-price-hedging strategy as a key piece of its explanation for why firms might want to use options. Southwest’s hedging has received a lot of attention; the gains and losses on these financial trades have rivaled operating profits and losses on its income statement. Most commentators have applauded this aggressive trading activity, merely cautioning that sometimes Southwest guesses wrong about future oil prices and loses a lot of money.
What no one seems to ask is why Southwest shareholders would want the firm to be speculating in the fuel market in the first place. Unless these hedges materially reduced the risk of bankruptcy–and Southwest’s balance sheet is typically stronger than its rivals’–the classic argument applies: Shareholders should not want corporate managers to hedge industry-specific risks, such as swings in fuel prices, because they can very easily deal with these risks themselves by holding a diversified portfolio of stocks (including oil firms) or even by buying their own options on oil prices. Southwest’s financial risk reduction via hedging conveys little or no benefit to the owners of the firm.
But wait, many will object–doesn’t hedging give Southwest a cost advantage over its rivals when oil prices go up? And since these hedges are often accomplished by options, isn’t there an asymmetry, since when Southwest guesses wrong, it only loses the price it paid for the option? Doesn’t the airline therefore lower its costs by these trades, gaining a leg up on its rivals?
The answer is No. These hedges have no impact whatsoever on Southwest’s cost of being an airline operator. They constitute an independent, speculative financial side business, a business that is exactly as good for Southwest shareholders as the CFO’s team is at outguessing the fuel market. Even when Southwest guesses right, it is not improving the airline business’s competitiveness.
To see why this is true, think about the incremental fuel cost to Southwest of running a flight with or without the hedge. If the spot price of fuel is $x/gallon at the time of the flight and it consumes y gallons, then the fuel cost is $xy. If Southwest has successfully hedged the oil price, then it will make a bunch of money after closing out its position, but it would still independently save $xy by not running the flight. If Southwest has guessed wrong and lost money on the hedge, it would also save $xy by not running the flight. So the cost of operation–the increment in expenditure caused by producing another unit–is unaltered by the hedging strategy.
This situation should be easy to visualize because the hedges are on oil rather than jet fuel and because they are settled for cash rather than physical delivery. But even if the hedges were denominated in physically delivered jet fuel, successful or unsuccessful hedging would have no impact on airline operating costs. If Southwest just bought fuel early for $(x-a)/gallon and stored it until the spot price was $x/gallon, the opportunity cost of the flight would still be $xy, since the airline could cancel the flight and sell y gallons for that amount. The incremental expenditure difference between flying and not flying is exactly the same. (If opportunity cost confuses you, visualize that Southwest has some fuel on hand purchased at the lower hedged price and some at the spot price, and note that it doesn’t matter which barrel of gas goes into which plane–all the fuel is fungible, and it is all worth $x/gallon if that’s what it could be sold for.)
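The arithmetic can be sketched in a few lines of code. All numbers below (prices, volumes, the size of the hedge position) are invented for illustration; the point is only that the hedge’s profit or loss drops out of the fly/don’t-fly comparison:

```python
# Toy illustration with made-up numbers: a cash-settled hedge's P&L is
# independent of the decision to fly, so the incremental cost of a
# flight is the spot cost of the fuel it burns, hedge or no hedge.

SPOT = 3.00          # $/gallon spot price at flight time (hypothetical)
HEDGE_PRICE = 2.40   # $/gallon price locked in by the hedge (hypothetical)
HEDGED_GALLONS = 10_000
FLIGHT_GALLONS = 5_000

def hedge_pnl():
    # Gain/loss on the cash-settled hedge: depends only on prices,
    # not on whether any flight is flown.
    return (SPOT - HEDGE_PRICE) * HEDGED_GALLONS

def total_cost(fly):
    # Net fuel outlay: spot cost of any fuel burned, minus the hedge gain.
    burn = FLIGHT_GALLONS if fly else 0
    return SPOT * burn - hedge_pnl()

incremental_cost = total_cost(True) - total_cost(False)
print(incremental_cost)           # 15000.0
print(SPOT * FLIGHT_GALLONS)      # 15000.0 -- the hedge term cancels
```

Changing `HEDGE_PRICE` or `HEDGED_GALLONS` shifts both totals equally, so the incremental cost of the flight never moves.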
Now, risk-averse behavior by managers may be in their own interest, depending on the form of their compensation, the structure of the labor market, and their perceived ability differential over their peers. But it is of little help to the owners of public firms that are far from bankruptcy. That’s a point that should not be hedged.
Barry Lynn, apparently some sort of John Kenneth Galbraith wannabe, has an amusingly cockeyed post over at the Harvard Business Review blog. He seems to think that state regulations protecting local beer distributors from vertically integrated competitors are the font of virtue, preserving needed diversity in the beer market by allowing craft and micro-brewers to get their product delivered. But if the big brewers were legally able (and motivated) to foreclose distribution of the small brands, they would be legally able to do it without vertically integrating into distribution (by requiring exclusivity).
A simpler analysis: When there were many competing major brewers, independent multi-brewer distributors made economic sense, since they eliminated needless duplication of sales and delivery of all those brands to retail establishments. With the consolidation of the beer industry into two giant companies that own all the big brands (and a shift from on-premises to at-home consumption), a single-brewer distribution firm can now internalize almost all those economies. Then the beer industry starts to look a bit more like the soft-drink industry, where two major firms own and develop all the major brands and we don’t blink an eye at their bottler/distributors having exclusive relationships with the upstream brand owners or even being vertically integrated with them. If your local Costco or supermarket won’t carry a micro-brew or an off-brand soda, it’s unlikely to be due to market power on the part of the distributors.
UPDATE: It seems that AB InBev, owner of Budweiser and many other beer brands, is indeed shifting to more of a product innovation strategy and running into distribution problems with these new products:
“That’s not to say that AB InBev has perfected the process. Profit this year was hurt by higher distribution and administration costs in the U.S. as the brewer struggled to keep up with demand for Platinum and Lime-A-Rita, which required extensive — and expensive — countrywide distribution.”
So maybe there are strategic reasons why AB InBev would want more control over its distribution pipeline.
I just saw a recent article in the Chronicle of Higher Education on the emerging field of neuroeconomics. Unlike behavioral economics, where ideas from psychology have been ported over to economics to explain various individual “anomalies” in choice behavior, in neuroeconomics much of the intellectual traffic has gone in the other direction–economic modeling tools are helpful in understanding psychological processes (including where those processes deviate from classic economic theory). The axiomatic approach to choice makes it a lot easier to parse out how the brain’s actual mechanisms do or don’t obey these axioms.
An important guy to watch in this area is Paul Glimcher, who mostly stays out of the popular press but is a hardcore pioneer in trying to create a unified (or “consilient”) science encompassing neuroscience, psychology, and economics. I’ve learned a lot from reading his Foundations of Neuroeconomics (2010) and Decisions, Uncertainty, and the Brain (2004): why reference points (as in prospect theory) are physiologically required; how evolutionary theory makes a functionalist and optimizing account of brain behavior more plausible than a purely mechanical, piecemeal, reflex-type theory; why complementarity of consumption goods presents a difficult puzzle for neuroscience; and much more.
Alvin E. Roth is a Professor of Economics and Business Administration, currently at Harvard and soon at Stanford. He is one of the kindest people I know. As of yesterday, he is a Nobel laureate.
Dr. Roth’s interests include “game theory, experimental economics, and market design” says the Harvard website. But Dr. Roth became famous for putting economic theory to work – in the real world. He has designed and redesigned markets and institutions for better performance. Dr. Roth has changed how doctors and hospitals find each other, how students are assigned to high schools, and how kidney patients are matched with a donor.
Putting theory to work is risky. Most of us, me included, describe reality and hypothesize about causes and effects: what makes people cooperative or why some companies are successful, for example. We find it plenty difficult to convince peers, reviewers and editors of our ideas. Implementation is a whole different realm. We can advise, but usually let others practice: executives, government officials, leaders.
But Dr. Roth is different. Acting as both a scholar and an entrepreneur, he embarked on a difficult and perilous journey to reshape institutions. He had to convince laymen that economic theories are useful. He had to bear the risk of failure for organizational and political reasons. He could have failed even if right. Changing the way students are assigned to schools can disturb powerful education officials and supervisors; reallocating kidneys to patients can upset hospitals and doctors.
Somehow, Roth triumphed. In his success, he made markets better and society more prosperous. He also set a challenge for the rest of us. Coming up with a good idea and convincing your colleagues may be just the beginning of a journey. Putting it into action may be the ultimate goal.
For all of his accomplishments, Al remains friendly, humble and approachable. He seems excited by ideas, not glory. A day after the Nobel committee bestowed his prize, he wrote to me “it’s been a busy day…”. Probably nothing out of the ordinary for him.
A great New York Times article this morning (link below) details ways in which the patent system gets used as both an offensive and defensive weapon, with billions of dollars of collateral damage to start-ups, consumers (see the “patent tax”), and innovation in general. The victim in the opening vignette (Vlingo, a voice-recognition software start-up) might have been saved by a simple change in the rules: make the losers of patent lawsuits pay the legal costs of the winners. It turns out that it’s rather easy to kill small firms (or force them to sell to you) by launching a patent lawsuit against them that bleeds them dry with legal fees. You don’t have to win — you just have to force them to fight until they no longer have any money. Vlingo ultimately won the patent lawsuit that had been filed by a much larger rival, but had to loot its own meager coffers to pay the legal fees of doing so. Vlingo slumped home with its patent lawsuit victory and shut its doors for good. If losers of such battles paid the legal fees of winners, such fights might be both less common and less likely to be fatal.
The article also points out that software patents have proven particularly dangerous because they are prone to protecting vague claims like “a software algorithm for calculating online prices,” thereby granting the patent holder vast tracts of technological real estate. An interesting talk by Tilo Peters at the Strategic Management Society conference yesterday points to another useful tool for rationalizing some of this misuse of the patent system: strategic disclosure. If, for example, you decided to publish a manifesto about all of the things you might do with software in the reasonable future (remember, patents have a “usefulness” condition, so you’re not allowed to claim something deemed non-feasible), you might be able to essentially proclaim that technological territory as unpatentable. It wouldn’t prevent competitors from developing in those areas, but it could keep them from patenting in those areas. In essence, it transforms a space in which property rights may be allocated into one in which they may not be. I’ve left out some details but you get the idea.
Now it occurs to me that a fair amount of strategic disclosure in the smart phone space took place in the form of Star Trek episodes. I’m going to go look for references to prior art…
Alex Tabarrok’s pictorial commentary on patent policy, drawn on a napkin, posits that the current patent system is somewhat too strong and thereby decreases innovation (the link to his original post is below). I have to say, however, that I don’t think patent strength is the problem. The problem is that the growth in patent applications over the last two decades has vastly exceeded the growth in resources available to the patent office, resulting in 1) long delays between patent application and granting (which can render patents completely pointless in fast-moving industries), and 2) inadequate ability to examine the patent applications for novelty, usefulness, and non-obviousness. This lowers the value of good patents (because they aren’t granted quickly enough or may be spuriously challenged) and increases the likelihood of bad patents being granted. As a result, for many individuals and firms, the expected net gains from manipulating the patent system for the purposes of extortion (hostage taking, patent trolling) now exceed the expected net gains from using the patent system to actually innovate.
It’s difficult to assess how patent strength affects innovation without first making sure that patents are being granted and used the way the system had originally intended.
Alex Tabarrok’s original post can be found here: http://marginalrevolution.com/marginalrevolution/2012/09/patent-theory-on-the-back-of-a-napkin.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+marginalrevolution%2Ffeed+%28Marginal+Revolution%29
We love small businesses. We love entrepreneurs. Do we love them too much? The Economist thinks that this may be the case, reminding us that our liking may have more to do with ideology (or self-adulation) than with economic reality.
Small is not Beautiful: Why small firms are less wonderful than you think
Romney and Ryan have incorrectly characterized Obamacare as a “Raid on Medicare,” and news organizations and the Obama campaign have fired back that it is actually a program to reduce healthcare costs — an important achievement of the administration. This whole discussion misses the fundamental point that $716 billion in savings would be the result of mandated price controls. Given that this is a major intervention, it is important to understand how these altered incentives will affect the U.S. healthcare system.
Medicare currently pays providers 30% less than private insurers and Obamacare will further reduce that to save $716 billion in payments to providers (hospitals, doctors, etc.). At the same time, broader coverage (another goal of the new law) will undoubtedly increase demand for services. How will these effects play out?
We already know that some providers are less willing to accept Medicare…
A terrific paper by Cormac Herley of Microsoft Research came out, entitled “Why Do Nigerian Scammers Say They Are from Nigeria?” It turns out that 51% of scam emails mention Nigeria as the source of funds. Given that “Nigerian scammer” now regularly makes it into joke punch-lines, why in the world would scammers continue to identify themselves in this way? The paper was mentioned in a news item here, if you want the executive-summary version, but, really, I can’t imagine readers of this blog not finding the actual paper worthwhile and fun (it contains a terrific little model of scamming).
In a nutshell, the number of people who are gullible enough to fall for an online scam is tiny compared to the population that has to be sampled. This creates a huge false positive problem, that is, people who respond in some way and, hence, require an expenditure of scammer resources but who ultimately do not follow through on being duped.
As the author explains, in these situations, false positives (people identified as viable marks but who do not ultimately fall for the scam) must be balanced against false negatives (people who would fall for the scam but who are not targeted by the scammer). Since targeting is essentially costless, the main concern of scammers is the false positive: someone who responds to an initial email with replies, phone calls, etc. – which require scammer resources to field – but who eventually fails to take the bait. Apparently, it does not take too many false positives before the scam becomes unprofitable. What makes this problem a serious issue is that the size of the vulnerable population relative to the population that is sampled (i.e., with an initial email) is minuscule.
Scammer solution? Give every possible hint – including self-identifying yourself as being from Nigeria – that you are a stereotypical scammer without actually saying so. Anyone replying to such an offer must be incredibly naive and uninformed (to say the least). False positives under this strategy drop considerably!
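The logic is easy to put into a toy model. This is a simplified sketch, not Herley’s actual model, and every number below is invented; the point is that a pitch that screens out savvy responders can be more profitable even if it also loses some viable marks:

```python
def expected_profit(n, viable_frac, hit_rate, false_rate, gain, cost):
    """Expected scam profit over n targets (toy model, invented numbers).

    viable_frac: share of the population gullible enough to pay in the end
    hit_rate:    chance a viable target responds to this particular pitch
    false_rate:  chance a non-viable target responds anyway (false positive)
    gain:        revenue per completed scam
    cost:        cost of working any response (replies, calls, ...)
    """
    true_pos = n * viable_frac * hit_rate
    false_pos = n * (1 - viable_frac) * false_rate
    # Only true positives pay off, but every response costs money to work.
    return true_pos * gain - (true_pos + false_pos) * cost

N, V, GAIN, COST = 1_000_000, 0.0001, 5_000, 20

# A plausible-sounding pitch draws more viable marks but far more false positives.
subtle = expected_profit(N, V, hit_rate=0.5, false_rate=0.02, gain=GAIN, cost=COST)
# An obviously scammy pitch screens out almost everyone but the truly naive.
obvious = expected_profit(N, V, hit_rate=0.4, false_rate=0.0001, gain=GAIN, cost=COST)

print(subtle, obvious)  # the obvious pitch wins despite reaching fewer marks
```

With these made-up parameters the “subtle” campaign actually loses money on the cost of fielding false positives, while the self-identifying one is profitable.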
UPDATE: Josh Gans was blogging about this last week over at Digitopoly. He’s not convinced of the explanation though. To the extent there are “vigilante” types who are willing to expend resources to mess with scammers, the Easy-ID strategy could incur additional costs. As an interesting side note, in discussing this with Josh, he at one point suggested the idea that when legit firms come across scammers, they should counterattack by flooding them with, e.g., millions of fake/worthless credit card numbers (setting off something like a false positive atom bomb). Just one snag: US laws protect scammers from these kinds of malicious attacks.
By now, you may be getting sick of reading articles and blog posts about the crisis in higher education. This post is different. It proposes an explanation of why students have been willing to pay more and more for undergraduate and professional degrees at the same time that these degrees are becoming both less scarce and more dumbed down. And that explanation rests on a simple and plausible economic hypothesis.
For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don’t seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent “optical illusion” causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the “value-based” approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.
There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles address the question of whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus that they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly. There is a pervasive confusion between the form of payments and the total size of payments.
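A back-of-the-envelope sketch makes the fungibility point concrete. All numbers here are hypothetical: if competition pins down the total a carrier can extract from a customer over a contract, the subsidy/rate mix is irrelevant to everyone’s bottom line:

```python
# Hypothetical numbers: suppose competition fixes the total surplus a
# carrier can collect from a customer over a two-year contract. Then a
# bigger handset subsidy must be recouped through a higher monthly rate,
# and the customer's net outlay is the same under every mix.

MONTHS = 24
CARRIER_TAKE = 1_200  # total the carrier can extract, fixed by competition ($)

def plan(handset_subsidy):
    # Monthly rate adjusts so the carrier's total take is unchanged.
    monthly_rate = (CARRIER_TAKE + handset_subsidy) / MONTHS
    customer_outlay = monthly_rate * MONTHS - handset_subsidy
    return monthly_rate, customer_outlay

for subsidy in (400, 200, 0):
    rate, outlay = plan(subsidy)
    print(f"subsidy ${subsidy}: rate ${rate:.2f}/mo, net outlay ${outlay:.2f}")
```

Cutting the subsidy from $400 to $0 just lowers the monthly rate; the customer’s net outlay never moves, which is the point the articles miss.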
I’m reporting from another great ACAC conference. This conference featured retrospectives marking the 30-year anniversaries of Nelson and Winter’s book and Lippman and Rumelt’s article. Kudos to Bill and the organizing committee for putting it together.
A striking similarity between the two is that both were directly intended to influence conversations in economics and both missed their marks. For example, about 1% of the cites to Lippman and Rumelt were in top econ journals – despite the fact that the article appeared in the Bell Journal of Economics. Lippman & Rumelt recorded a video specifically for the occasion.
Try to guess the context for this piece of writing. Is it part of a scholarly study on the history of convention centers? A tourist guidebook? Is it the catalogue to a museum display on convention-center architecture?
“In order to attract growing numbers of conventions in the second half of the twentieth century, cities incorporated convention center construction within urban renewal and redevelopment schemes, usually at the edge of core urban areas where space would be available for construction of large buildings with contiguous, flat-floor space.”
In an earlier post, I noted Target’s costly decision to end its on-line outsourcing arrangement with Amazon’s cloud service and take all its work in-house. The short-term costs were considerable, both in direct outlays and in performance degradation, and the long-term benefits were hard to pin down. Vague paranoia rather than careful analysis seemed to have driven the decision. I pointed out that firms often seemed unwilling to “sleep with the enemy,” i.e. purchase critical inputs from a direct rival, but the case for such reluctance was weak.
A few months ago, an apparent counterexample popped up. Swatch, the Swiss wristwatch giant, decided unilaterally to cease supplying mechanical watch assemblies to a host of competing domestic brands that are completely dependent on Swatch for these key components. These competitors (including Constant, LVMH, and Chanel) sued, fruitlessly, to force Swatch to continue to sell to them. The Swiss Federal Administrative Court backed up a deal Swatch cut with the Swiss competition authorities that allows Swatch to begin reducing its shipments to rivals. The competition authority will report later this year on how much grace time Swatch’s customers must be given to find new sources of supply, and these customers may appeal to the highest Swiss court. For now, Swatch’s customers are scrambling for alternative sources of supply in order to stay in business. The stakes are especially high because overall business is booming, with lots of demand in Asia.
The drumbeat continues: MIT launches free online “fully automated” course. Aside from the fact that these innovations have major implications for the livelihoods of my friends and me, the economics are interesting per se.
With the elimination of capacity constraints on the distribution side, will brick-and-mortar education providers go the way of Blockbuster and Borders? The market does not like brick-and-mortar. It is inefficient – costly and inconvenient.
What happens when one professor can serve the entire market? Will superstars play an even larger role in academia? Will there be a market for top researchers (scarce) or good teachers (less so)? The same question holds at the institution level. Will everyone get a degree from (and work for) HBS one day?
UPDATE: Megan McArdle provides a more thoughtful essay on this event at the Atlantic.
After watching Jeremy Lin (Knicks) score 38 points against the Lakers tonight, I’m now on the Lin bandwagon. I don’t really even follow basketball that closely, but this seems like an intriguing story.
How on earth did someone like this go unnoticed? Seriously. He happened to get an opportunity to show his stuff as Carmelo Anthony and Amare Stoudemire are injured – and boy has he delivered.
Here’s a kid who didn’t get recruited for college ball, despite a tremendous record in high school. He was a superstar at Harvard but went undrafted by the NBA after graduating (in economics) in 2010. He played a few games for Golden State and Houston, but was cut by both. He played D-league basketball this year until a few weeks ago. As of last week, he did not have a contract.
But come on: is basketball truly this inefficient at identifying and sorting talent? Seriously. The comparison and transfer of ability across “levels” (high school-college-professional) is of course tricky, though you would think that with time there would be increased sophistication.
Now, four games of course don’t make anyone a star. But even if Lin proves to “just” be a solid bench player, it seems that talent scouts clearly undervalued Lin (who lived in his brother’s apartment until recently). How much latent talent is out there? (I think that at the quarterback position in professional football there are significant problems in identifying talent, but that’s another story.)
There are of course also some very interesting player-context/team-fit, interaction-type issues here, and I’m not sure that this really gets carefully factored beyond just individual contribution (thus not recognizing emergent positive, or negative, player*player effects). It’ll be interesting to see what happens, for example, when Carmelo Anthony is added back into the mix.
Well, it’ll be interesting to see how all this plays out. There is in fact a sabermetrics-type, stats-heavy, Moneyball-like thing in basketball as well – called ABPRmetrics. I would be curious to know whether there are ways to statistically identify Lin-type undervaluation and potential, and whether phenoms like this lead to better metrics for identifying talent.
UPDATE: Here’s ONE analyst/statistician who saw Lin’s potential in 2010.
Last week I had the great good fortune to attend the Max Planck Institute at Leipzig’s first conference on Rigorous Theories of Business Strategies in a World of Evolving Knowledge. The conference spanned an intense four days of presentations, exploration, and discussion on formal approaches to business strategy. Participants were terrific and covered the scholarly spectrum: philosophers, psychologists, game theorists, mathematicians, and physicists. Topics included cooperative game theory, unawareness games, psychological micro-foundations of decision making, and information theory. It was heartening to see growth in the community of formal theorists interested in strategy, and my guess is that the event will spawn interesting new research projects and productive coauthoring partnerships. (Thanks to our hosts, Jurgen Jost and Timo Ehrig, for organizing and sponsoring the conference!)
If one had to pick a single, overarching theme, it would have to be the exploration of formal approaches to modeling agents with bounded rationality. For example, I presented on subjective equilibrium in repeated games and its application to strategy. Others discussed heuristic-based decision making, unawareness, ambiguity, NK-complexity, memory capacity constraints, the interaction of language and cognition, and dynamic information transmission.
Over the course of the conference, it struck me just how offensive so many of my colleagues find the rationality assumptions so commonly used in economic theory. Of course, rational expectations models are the most demanding of their agents and, as such, seem to generate the greatest outrage. What I mean to convey is the sense that displeasure with these kinds of modeling choices goes beyond dispassionate, objective criticism and into indignation and even anger. If you are a management scholar, you know what I mean.
Thus, at a conference such as this, we spend a lot of time reminding ourselves of all the research that points to all the limitations of human cognition. We detail how humans suffer from decision processes that are emotional, memory constrained, short-sighted, logically inconsistent, biased, bad at even rudimentary probability assessment, and so on. Then, we explore ways to build formal models in which our agents are endowed with “more realistic” cognitive abilities.
Perhaps contrary to your intuition, this is heady stuff from a modeler’s point of view: formalizing stylized facts about real cognition is seen as a worthy challenge … and discovering where the new assumptions lead is always amusing. From the perspective of many management scholars, such theories are more realistic, better able to explain observations of shockingly stupid decisions by business practitioners and, hence, superior to the silly, overly simplistic models that employ a false level of rationality.
I am not mocking the sentiment. In fact, I agree with it. Indeed, none of the economists I know dispute the fact that human cognition is quite limited or that perfect rationality is an extreme and unrealistic assumption. (This isn’t to say there aren’t those who believe otherwise but, if there are, they are not acquaintances of mine.) On the contrary, careers have been made in game theory by finding clever ways to model some observed form of irrationality and using it to explain some observed form of decision failure. If this is the research agenda then, surely, we have hardly scratched the surface.
Yet, as I thought about it during the MPI conference last week, it dawned on me that our great preoccupation with irrational agents is misdirected. That animals as cognitively limited as we are often, if not typically, fail to achieve rational consistency in our endeavors is no puzzle. What else would you expect? Rather, the deep mystery is how agents so limited in rational thought invent democracy, create the internet, land on the moon, and run purposeful organizations that succeed in a free market. Casual empiricism suggests that the pattern of objective-oriented progress in the history of mankind is too pervasive to ascribe to dumb luck. Even at the individual level, in spite of their many cognitive failings, the majority of people lead purposeful, productive lives.
This leads me to remind readers that economists invented the rational expectations model precisely because it was the only option that came anywhere close to explaining observed patterns in economy-level reactions to changes in government policies. This, even though the perfect rationality assumption is axiomatically false. There you have it.
Which leaves open the challenge of identifying which features of human cognition lead to persistent patterns of success in highly unstable environments. I conjecture that our refined pattern recognition abilities play a role in this apparent miracle. Other candidates include our determination to see causality everywhere we look as well as our incredible mental flexibility. Social factors and institutions must be involved — and, somewhere in there, a modicum of rationality and logic. After all, we did invent math.
Mario Polese provides a nice short history (up to the present) of oversold urban revitalization strategies in City Journal. Interestingly, these theories succeed with municipal decision makers for the same kinds of reasons that pop-strategy notions flourish with company managers: They fit the zeitgeist, they flatter the preconceptions and prejudices of the decision-making class, they claim to magically bypass the obstacles to success, and they enable the rent seeking of powerful coalitions. Their obvious theoretical and empirical drawbacks as all-purpose nostrums have little effect on their propagation, and their promoters often flourish despite a complete lack of proven efficacy.
One useful thought exercise for assessing urban development strategies is to imagine yourself the monopoly landowner in a city and think about what policies would maximize the value of your holdings (or rent stream). It quickly becomes apparent that for cities of any size or complexity, your chances of picking sectoral, much less firm-level, “winners” are very low, unlike the owner of, say, a shopping mall. The peculiar difficulty is that cities have both the “internal” complexity of closed systems and the “external” complexity of open systems in a turbulent environment.
Centrally planning complementarities and synergies within the city overwhelms the monopoly landowner’s knowledge and modeling prowess, because 1) the interactions are manifold and hard to decompose and 2) the city itself is what Hayek called an order (or cosmos) with different people pursuing different objectives, not an organization (or taxis) where a single hierarchy of objectives can be imposed; the denizens of the city don’t work for the landowner and are not deployable resources. The best you can do is provide the most effective sector-neutral institutions and infrastructure you can think of given your geographic and historic legacy. Any “natural” advantages a city has in specific sectors can be accommodated by policy (e.g., tourism-friendly policing in a natural tourist area), but trying to create such advantages from scratch seems foolhardy.
Deliberately positioning the city as a competitor against other cities then becomes something of a fool’s errand. The very sort of maneuverable, focused tradeoff-making needed to pursue competitive “good strategy” as an open system with shared objectives (a taxis) in a turbulent environment conflicts with the efficient policy neutrality needed to manage the city’s internal complexity as a cosmos.
Interesting question: How big does a piece of land have to be before planned synergy-mongering and focused strategy should give way to neutral governance? There are large master-planned communities put up by real-estate companies that include residential, commercial, and office components. I conjecture that that size is about the limit of effectiveness for guided, synergy-conscious development strategy.
I’m seeing more and more work using Mechanical Turk as a subject pool. Here’s another piece discussing some of the features, advantages and problems with Mechanical Turk – Rand, D (2011), The promise of mechanical turk: how online labor markets can help theorists run behavioral experiments, Journal of Theoretical Biology.
Combining evolutionary models with behavioral experiments can generate powerful insights into the evolution of human behavior. The emergence of online labor markets such as Amazon Mechanical Turk (AMT) allows theorists to conduct behavioral experiments very quickly and cheaply. The process occurs entirely over the computer, and the experience is quite similar to performing a set of computer simulations. Thus AMT opens the world of experimentation to evolutionary theorists. In this paper, I review previous work combining theory and experiments, and I introduce online labor markets as a tool for behavioral experimentation. I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects’ self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls. I hope this paper will encourage evolutionary modelers to enter the world of experimentation, and help to strengthen the bond between theoretical and empirical analyses of the evolution of human behavior.
This article in Forbes argues that a new book by the Dean of the Rotman School provides an antidote to the rampant excesses of modern-day capitalism. The principal swipe is against the landmark paper (over 29,000 Google Scholar citations) by Jensen and Meckling on both the prevalence of the principal-agent problem in the governance of firms and the various solutions to overcome it – including creating incentives that maximize shareholder value. Quoting Jack Welch, former CEO of GE, the article says that maximizing shareholder value is the dumbest idea in the world. I myself am not sure whether this is THE dumbest idea in the world – in fact there are many more that would easily surpass P-A problem resolution – but I am sure this will ignite a debate about why firms exist, what the best governance mechanism for them is, and the role of economic theory and action in our lives. I for one need to go back and read the article and then read the book.