I am visiting Lund University this week – and they have conclusively shown that the Resource-Based View is indeed useful. Useful for what? As a doorstop for their conference room. (I also sent the proof/picture to Jay, one of the originators of the theory.)
Nicolai’s post at O&M made me aware of a new journal, the Journal of Organization Design. I definitely think org design deserves a renaissance/comeback, so I welcome a journal dedicated to the topic. (Though I do think we have far too many journals in strategy/management – many of them of suspect quality.)
I just checked out the journal’s website, and it has some killer features, including short videos of the authors describing their papers:
- Here’s John Mathews talking about his paper, Design of Industrial and Supra-Firm Architectures.
- Timothy Carroll on Designing Organizations for Exploration and Exploitation.
- Raymond Miles on Designing the Firm to Fit the Future.
- And, here’s a longer video of Raymond Levitt discussing virtual design teams.
I like the video feature. Nice.
We’re excited to have Sheen Levine join us here at StrategyProfs.net. Sheen is currently a faculty member at the Institute for Social and Economic Research and Policy at Columbia University. His research focuses on social networks, institutions and markets. You can learn more about his research here. You can also follow Sheen on Twitter: @sslevine.
And a quick plug: if you are going to the Academy of Management in Boston (Aug 3-7), be sure to check out Sheen’s cool Behavioral Strategy 3.0 workshop, co-organized with Shayne Gary (learn more here). It promises to be a great session with lots of interdisciplinary interaction.
Great to have you here Sheen! We look forward to your posts.
[H/T to several folks on Facebook talking about this: Nicolai Foss, Russ Coff, Marcel Bogers etc.]
We’ve talked about the extensive fraud of Diederik Stapel before, but apparently there have been retractions even closer to home: the journal Strategic Organization retracted an article. And, over at the blog Retraction Watch, there is an active discussion of Dirk Smeesters’ retractions and recent resignation. There’s a bit more in a short Wired magazine piece, and a Nature journal interview with Uri Simonsohn, who discovered the Smeesters fraud.
We’re excited to welcome Melissa Schilling to StrategyProfs.net! Melissa is a Professor of Management and Organizations at the Stern School of Business, NYU. Her research focuses on strategy, innovation, and technology. You can learn more about her work here.
We look forward to Melissa’s posts!
My co-blogger Russ Coff’s 2010 Strategic Management Journal piece on the coevolution of rent appropriation and capability development used Tony Fadell and the development of the iPod as an example. Here Tony Fadell talks about constraints, ignoring experts and embracing self-doubt:
The microfoundations-thing is misunderstood and abused in strategy. I try not to even use the word any more, given that it gets thrown around so loosely (it seemed that every talk at SMS a few years ago managed to slip the word in). So, some quick notes on microfoundations and strategy.
The “microfoundations” effort in strategy is focused on the notion that individual-level heterogeneity matters – a lot. This means that who an organization is composed of – who self-selects into it, who leaves – matters, a lot. Extant theories of organization and capability don’t recognize this (for example, see Kogut and Zander, Nelson and Winter): in fact, they argue that mobility is trivial, or (often implicitly) assume that individuals are homogeneous (by focusing directly on various collective constructs). Organizational effects are said to trump individual effects. But I think the nested, individual-level effects trump the higher-level ones (individual > firm > industry effects) – some quick supporting points:
First, mobility: Mobility is the great litmus test for where heterogeneity lies. If a particular individual leaves, does that impact organizational outcomes? (Indeed, which individual it is matters – a lot.) Individual mobility is needed because analysis of variance is confounded without it: what gets attributed to the collective level might in fact be an individual-level effect. A common mistake.
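The confounding point can be illustrated with a toy simulation (my own sketch, not from any of the papers mentioned; all numbers are invented): give firms no true firm-level effect at all, and differences in who happens to be a member will still show up as apparent “firm effects” in cross-sectional data.

```python
import random

random.seed(42)

# 100 workers with heterogeneous, heavy-tailed individual talent
talent = {w: random.paretovariate(2.0) for w in range(100)}

# Randomly assign workers to 10 firms: by construction there is NO true
# firm-level effect, only individual-level heterogeneity.
firms = {f: [] for f in range(10)}
for w in talent:
    firms[random.randrange(10)].append(w)

# Observed firm "performance" is just the mean talent of current members.
perf = {f: sum(talent[w] for w in ws) / len(ws)
        for f, ws in firms.items() if ws}

# Without any mobility, an analyst comparing firms would see persistent
# performance differences and could mistake them for firm-level capability.
best, worst = max(perf.values()), min(perf.values())
print(f"best firm: {best:.2f}, worst firm: {worst:.2f}")
```

With worker mobility across firms one could, in principle, separate the two levels (in the spirit of worker/firm fixed-effect decompositions in labor economics); without mobility, individual and firm effects are observationally confounded.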
Second, the Lotka distribution and variance within: If you look at productive activity in any setting, you’ll quickly note that it is highly skewed. The statistician Alfred Lotka pointed this out in 1926 for scientific productivity: in any population, a few people are responsible for a radically disproportionate share of the output (measured in various ways – articles, citations, etc.). In many settings the 80-20 rule (20% of people are responsible for 80% of the output) doesn’t even begin to cover it. So look in any setting and you’ll find radical, nested heterogeneity – more heterogeneity within the system (this is often assumed away) than across it.
Of course, it could be that highly talented individuals ALL select into a particular setting, which might then confound attempts to impute where the heterogeneity lies.
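For intuition on just how skewed a Lotka/Pareto-type productivity distribution is, here is a quick sketch (my own, with invented parameters): draw individual output from a heavy-tailed distribution and compute the share produced by the top 20%.

```python
import random

random.seed(0)

# Per-person output drawn from a Pareto distribution. Lotka's 1926 finding
# was that the number of authors with n papers falls off roughly as 1/n^2,
# so a handful of individuals account for most of the output.
# alpha = 1.16 is the tail index that yields the classic "80/20" split.
population = sorted((random.paretovariate(1.16) for _ in range(10_000)),
                    reverse=True)

top20 = population[: len(population) // 5]
share = sum(top20) / sum(population)
print(f"share of output produced by the top 20%: {share:.0%}")
```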
Now, some common misunderstandings about microfoundations. First, the above does not mean that there aren’t any collective effects (benefits of interaction, learning, routines, structures, context, etc.): of course there are! But the first-order exercise ought to be to specify, as best we can, the nature and capabilities of the individuals involved. After that, collective and emergent effects can be properly tested.
Though, note that – and this is important – collective effects can also be of the variety where the whole is less than the sum of the parts. Social psychology gives us lots of examples: social loafing and free-riding, influence (a la Asch), work on nominal versus interactive brainstorming, etc. This is often not recognized, as we tend to ascribe virtues (but not costs) to organizational culture, interaction, community, etc. To complicate matters, individuals of course have much control over how much discretionary effort to put into collective tasks – depending (ah!) on the context.
Second, the microfoundations notion does not mean that we all become psychologists or über-reductionists, where we jump directly to genes or general intelligence (g), or we reduce everything back to the big bang. The aforementioned are quite common caricatures of microfoundations, or points of ridicule unleashed on anyone advocating methodological individualism. Rather, more simply, it is an approach that recognizes that individuals matter – and thus individuals are used as the basis for building models of social interaction and for explaining aggregate and emergent effects at the higher level.
The Economist has a piece on how crowdsourcing and tools like Mechanical Turk are transforming science: “the roar of the crowd.” Here’s the blog dedicated to helping scientists set up their experiments on Mechanical Turk, Experimental Turk. I’m guessing it is a matter of time before some strategy-related experiments get done on Mechanical Turk – here are probably a few such pieces (well, just a very loose search of mechanical turk+smj+mgtsci).
- This appears to be a PhD student collecting dissertation data via a decision-making task on Mechanical Turk.
- Here’s a Cornell marketing scholar collecting data.
- Lots of the “game” tasks appear to be experiments.
Despite being only six years old, the Strategic Entrepreneurship Journal (SEJ) looks to be off to a flying start. I don’t really read entrepreneurship journals, though I have read some good work in SEJ. And, for anyone interested, there are several special issue calls for papers (links are to the PDFs):
- Entrepreneurship and Strategy in the Informal Economy, edited by Duane Ireland et al.
- Business Models, edited by Chris Zott et al.
- Theories of Entrepreneurship, edited by Sharon Alvarez et al.
The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis), and little agreement even on a basic definition. I don’t really like to get involved in definitional debates, though I think the idea of a capability – the ability to do/accomplish something (whether individual or collective) – is fundamental for strategy scholars.
Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt). One of the questions I raised, and find quite intriguing, is how we might “grow” a capability. The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling. For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference). I like that intuition. As I work with colleagues in engineering and computer science, this “growth” mentality seems to be implicitly there. Things are not taken for granted, but explained by “growing” them. Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically. What were the choices that led to this history? Who are the central actors? What are the incentives and forms of governance? Etc.
So, if we were to “grow” a capability, I think there are some very basic ingredients. First, understanding the nature, capabilities and choices of the individuals involved is important. Second, the nature of the interactions and aggregation matters: the interaction of individuals and actors can lead to emergent, non-linear and collective outcomes. Third, structural and design-related choices (e.g., markets versus hierarchy) are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”
I’m not sure the “how do you grow a capability” intuition is helpful in all situations. However, I do find that there is a tendency to use shorthand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and to more carefully investigate the constituent parts, mechanisms and interactions that lead to the development or “growth” of a capability.
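In that spirit, here is a deliberately minimal “grow it” sketch (entirely my own toy model; every ingredient is an explicit, labeled assumption): heterogeneous individuals plus a simple interaction rule, from which an aggregate capability emerges rather than being posited.

```python
import random

random.seed(1)

N_AGENTS, PERIODS, LEARN_RATE = 20, 500, 0.1

# Ingredient 1: heterogeneous individuals (initial skills are an assumption).
skills = [random.uniform(0.1, 1.0) for _ in range(N_AGENTS)]

def firm_capability(skills):
    """An aggregation assumption: collective capability as average skill."""
    return sum(skills) / len(skills)

history = [firm_capability(skills)]
for _ in range(PERIODS):
    # Ingredient 2: an interaction rule - random pairs meet, and the less
    # skilled partner moves part-way toward the more skilled one.
    i, j = random.sample(range(N_AGENTS), 2)
    lo, hi = sorted((i, j), key=lambda k: skills[k])
    skills[lo] += LEARN_RATE * (skills[hi] - skills[lo])
    history.append(firm_capability(skills))

print(f"capability 'grew' from {history[0]:.3f} to {history[-1]:.3f}")
```

Changing the interaction structure (e.g., restricting who can meet whom, mimicking hierarchy versus market-like matching) is one way to explore the third, design-related ingredient.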
One of my favorite conferences is the BYU-University of Utah Winter Strategy Conference. Nice and small. Great locations. Skiing. The conference is next week in Snowbird. Here’s the lineup.
Via several folks on Facebook (e.g., Marcel Bogers, Der Chao Chen) – here’s a short blog post on how journals are manipulating their impact factors: coerced citations and manipulated impact factors – dirty tricks of academic journals.
Just a reminder to anyone interested (or out of the loop): the deadline to submit something for the October 5-7 Strategic Management Society conference is in two days (Feb 23). This year’s theme is “Strategy in Transition” (here’s the call for proposals). I like SMS’s approach of requesting paper “proposals” (essentially extended abstracts rather than full papers): easier on both the reviewers and authors.
My verdict on SMS? I like the conference. It is quite pricey (the conference fee is a hefty $1000+), but generally the sessions are good. And most of all, it’s fun to interact and meet up with strategy colleagues, co-authors and friends in a somewhat smaller setting (not quite the zoo that the Academy of Management can be — SMS is far more targeted).
And, as a bonus, the locations tend to be excellent. I attended the Rome SMS conference in 2010 and this year’s conference will be held in Prague. Maybe we’ll live blog from the conference this year.
After watching Jeremy Lin (Knicks) score 38 points against the Lakers tonight, I’m now on the Lin bandwagon. I don’t really even follow basketball that closely, but this seems like an intriguing story.
How on earth did someone like this go unnoticed? Seriously. He happened to get an opportunity to show his stuff because Carmelo Anthony and Amar’e Stoudemire are injured – and boy, has he delivered.
Here’s a kid who didn’t get recruited for college ball, despite a tremendous high school record. He was a superstar at Harvard but went undrafted by the NBA after graduating (in economics) in 2010. He played a few games for Golden State and Houston but was cut by both. He played D-League basketball this year until a few weeks ago; as of last week, he did not have a contract.
But come on: is basketball truly this inefficient at identifying and sorting talent? The comparison and transfer of ability across “levels” (high school, college, professional) is of course tricky, though you would think that, with time, scouting would grow more sophisticated.
Now, four games of course don’t make anyone a star. But even if Lin proves to be “just” a solid bench player, it seems clear that talent scouts undervalued Lin (who lived in his brother’s apartment until recently). How much latent talent is out there? (I think there are significant problems identifying talent at the quarterback position in professional football as well – but that’s another story.)
There are of course also some very interesting player-context/team-fit, interaction-type issues here, and I’m not sure these get carefully factored in beyond individual contribution (thus missing emergent positive, or negative, player*player effects). It’ll be interesting to see what happens, for example, when Carmelo Anthony is added back into the mix.
Well, it’ll be interesting to see how all this plays out. There is in fact a sabermetrics-type, stats-heavy, Moneyball-like movement in basketball as well – called APBRmetrics. I would be curious to know whether there are ways to statistically identify Lin-type undervaluation and potential, and whether phenoms like this lead to better metrics for identifying talent.
UPDATE: Here’s ONE analyst/statistician who saw Lin’s potential in 2010.
I’m seeing more and more work using Mechanical Turk as a subject pool. Here’s another piece discussing some of the features, advantages and problems with Mechanical Turk – Rand, D (2011), The promise of mechanical turk: how online labor markets can help theorists run behavioral experiments, Journal of Theoretical Biology.
Combining evolutionary models with behavioral experiments can generate powerful insights into the evolution of human behavior. The emergence of online labor markets such as Amazon Mechanical Turk (AMT) allows theorists to conduct behavioral experiments very quickly and cheaply. The process occurs entirely over the computer, and the experience is quite similar to performing a set of computer simulations. Thus AMT opens the world of experimentation to evolutionary theorists. In this paper, I review previous work combining theory and experiments, and I introduce online labor markets as a tool for behavioral experimentation. I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects’ self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls. I hope this paper will encourage evolutionary modelers to enter the world of experimentation, and help to strengthen the bond between theoretical and empirical analyses of the evolution of human behavior.
Lots of talk about education being disrupted (here are some previous links). A few more:
- How Online Innovators Are Disrupting Education
- Technology Cannot Disrupt Education from the Top Down
- Why Online Education is Ready for Disruption, Now
- More on MITx at Wired
- Chronicle on Disrupting College
- Clay Christensen et al. on Disrupting College
Here’s some info from a year in EdTech Trends:
Million Dollar Moments
Yes, we saw big M&A deals this year: Pearson plunked down $230 million for Schoolnet and another $400 million for Connections Education. Permira Fund wrestled Plato Learning for the privilege of paying $455 million for Renaissance Learning. ePals debuted on the Toronto exchange. EdTech M&A activity was north of $1.6 billion this year.
Here are a dozen top edtech investments this past year:
- $33M Knewton
- $32.5M 2tor
- $30M Kno
- $20M CampusBookRentals
- $17M Inkling
- $17M Zeebo
- $13M Edmodo
- $11M Dreambox
- $10M ConnectEDU
- $10M MyEDU
- $8M Instructure
- $7M Grockit
Via Karim’s twitter feed: Ronald Coase turned 101 years old today. Congratulations!
(My new goal is to be publishing at 101. That’s got to be a record of some sort.)
I’m working on a theory of the firm-related paper over the holiday break. One of the pieces I enjoy revisiting is Herbert Simon’s (1991) article “Organizations and Markets,” Journal of Economic Perspectives. What I like is the intriguing thought experiment in that paper (frankly, I think thought experiments are a VERY under-utilized tool in strategy and organization theory). To illustrate the “ubiquity of organizations,” Simon asks us to imagine seeing the globe from above and envisioning market exchanges as red lines and firm-related exchanges as green lines. Clearly, the green dominates. Perhaps that is why Simon never developed a comparative theory of governance (markets versus hierarchy) and instead focused on organizations themselves (the basis of the behavioral theory of the firm). I tend to think that the comparative aspects are fundamental, though naturally The Behavioral Theory also has a place in the canon.
For anyone interested, here are the first couple of paragraphs of the thought experiment:
A mythical visitor from Mars, not having been apprised of the centrality of markets and contracts, might find the new institutional economics rather astonishing. Suppose that it (the visitor – I’ll avoid the question of its sex) approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.

No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within the green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of “large green areas interconnected by red lines.” It would not likely speak of “a network of red lines connecting green spots.”
The title of this post is from the opening line of this article: McGowan, D. 2011. The Tory Anarchism of F/OSS Licensing. University of Chicago Law Review.
The article goes against current academic wisdom (Lessig et al) and argues that freedom actually gets restricted in open source licensing — specifically the freedom of authors (rather than users). An interesting piece, worth reading. Here’s the abstract:
This Article uses the example of free and open-source software licenses to show that granting authors relatively strong control over the modification of their work can increase rather than impede both the creation of future work and the variety of that work. Such licenses show that form agreements that enable authors to condition use of their work on the terms that matter most to them may give authors the incentive and assurance they need to produce work and make it available to others. Such licenses may therefore increase both the amount of expression available for use and the variety of that expression, even if enforcement limits the freedom of downstream users. These facts give reason to oppose recent decisions that make license terms harder to enforce through preliminary or permanent injunctive relief.
Grant McCracken summarizes an Economist post that argues that big companies are better at innovation than small ones (well, he discusses both sides).
But theory says that small companies are actually the winners.
Economists have long wrestled with this “diseconomies” problem: why do smaller organizations outperform large ones? (Todd Zenger’s 1992 Management Science piece summarizes this work nicely.) Schumpeter indeed went both ways on this (Dick Langlois discusses the “two Schumpeters” thesis a bit here). But yes, large organizations seemingly have the resources, complementary assets, access to talent, etc. to outperform small organizations. And yet small organizations still outperform large ones.
Large organizations have lots of problems (I’ll spare the references, for now). They
- mis-specify incentives,
- suffer from problems of social loafing (free-rider problem),
- engage in unnecessary intervention, etc.
And, if large organizations had such an advantage, why not take this argument to the extreme and simply organize everything under one large firm? That, of course, was one of Coase’s central questions. Obviously the organization-market boundary matters and there are costs associated with hierarchy.
Sure – there are lots of contingencies, caveats and exceptions [insert example from Apple or 3M]. And, definitions matter [what exactly is “small” versus “large”]. But on the whole, the theory says small companies win in the innovation game.