An article in the current edition of the Economist describes Alfred Marshall’s original observation of geographic clusters of economic activity. It lays out four main logics for clustering:
First, some may depend on natural resources, such as a coalfield or a harbour. Second, a concentration of firms creates a pool of specialised labour that benefits both workers and employers: the former are likely to find jobs and the latter are likely to find staff. Third, subsidiary trades spring up to supply specialised inputs. Fourth, ideas spill over from one firm to the next, as Marshall observed.
However, there are also costs to being in a cluster, such as higher rents or the transportation costs of being far from customers or suppliers. The explosion of communications and computing power should have made clustering less necessary, since natural resources matter less and workers can live farther from their offices.
It hasn’t worked out that way. Pools of human capital continue to drive clustering because people prefer to work near where they live, and very small distances can make a big difference. The article goes on to describe clusters within clusters of specialized knowledge in the Bay Area.
The patent system is “a real chaos”. Its faults were laid bare yesterday in an extensive New York Times article, which quickly reached the “most emailed” list (The Patent, Used as a Sword; and see Melissa Schilling’s review). But the same article also hedged by reminding us that “patents are vitally important to protecting intellectual property”. Is intellectual property really essential for innovation? For an answer, look just a little past commercial software and you will see vast open collaboration without patents or copyright. Wikipedia, an open initiative, answers many of our questions. Open-source software such as Linux and Android powers most commercial websites and mobile devices, respectively. In myriad forums, mailing lists and online communities, users contribute reviews, provide solutions, and share tips with others. Science has been progressing by enlisting thousands of volunteers to classify celestial objects and decipher planetary images. Innovation without patents is real. Researchers estimate that open collaboration and user innovation produce more innovation than the patented kind. Our legal and commercial system can do more to encourage it.
A terrific paper by Cormac Herley of Microsoft Research came out, entitled “Why Do Nigerian Scammers Say They Are From Nigeria?” It turns out that 51% of scam emails mention Nigeria as the source of funds. Given that “Nigerian scammer” now makes it regularly into joke punch-lines, why in the world would scammers continue to identify themselves this way? The paper was mentioned in a news item here if you want the executive-summary version, but, really, I can’t imagine readers of this blog not finding the actual paper worthwhile and fun (it contains a terrific little model of scamming).
In a nutshell, the number of people who are gullible enough to fall for an online scam is tiny compared to the population that has to be sampled. This creates a huge false positive problem, that is, people who respond in some way and, hence, require an expenditure of scammer resources, but who ultimately do not follow through on being duped.
As the author explains, in these situations, false positives (people identified as viable marks but who do not ultimately fall for the scam) must be balanced against false negatives (people who would fall for the scam but who are not targeted by the scammer). Since targeting is essentially costless, the main concern of scammers is the false positive: someone who responds to an initial email with replies, phone calls, etc. – which require scammer resources to field – but who eventually fails to take the bait. Apparently, it does not take too many false positives before the scam becomes unprofitable. What makes this a serious problem is that the vulnerable population is minuscule relative to the population that is sampled (i.e., sent an initial email).
Scammer solution? Give every possible hint – including identifying yourself as being from Nigeria – that you are a stereotypical scammer, without actually saying so. Anyone replying to such an offer must be incredibly naive and uninformed (to say the least). False positives under this strategy drop considerably!
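Herley’s point can be made concrete with a toy screening model (my own simplification with made-up numbers, not the paper’s exact formulation). An email pitch acts as a classifier over recipients: viable victims reply with probability tp, everyone else with probability fp, and every reply burns scammer time:

```python
def expected_profit(n, d, tp, fp, gain, cost):
    """Expected profit from emailing n people when a fraction d are
    viable victims, under reply rates tp (victims) and fp (others)."""
    victims = n * d * tp          # replies that eventually pay off
    duds = n * (1.0 - d) * fp     # replies that only burn time
    return victims * gain - (victims + duds) * cost

N = 1_000_000    # emails sent (all numbers here are made up)
d = 0.0001       # 1 in 10,000 recipients is truly gullible
G, C = 2000, 20  # payoff per completed scam, cost per reply handled

# A subtle, plausible pitch hooks more victims but also many skeptics.
subtle = expected_profit(N, d, tp=0.5, fp=0.01, gain=G, cost=C)
# The blatant "I am from Nigeria" pitch: only the most naive reply at all.
obvious = expected_profit(N, d, tp=0.3, fp=0.0001, gain=G, cost=C)

print(f"subtle pitch:  {subtle:>10,.0f}")
print(f"obvious pitch: {obvious:>10,.0f}")
```

With numbers like these, the subtle pitch loses money fielding thousands of replies from people who will never pay out, while the blatant pitch, despite hooking fewer victims, turns a profit because almost no skeptic bothers to respond.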
UPDATE: Josh Gans was blogging about this last week over at Digitopoly. He’s not convinced of the explanation, though. To the extent there are “vigilante” types who are willing to expend resources to mess with scammers, the Easy-ID strategy could incur additional costs. As an interesting side note, in discussing this with Josh, he at one point suggested the idea that when legit firms come across scammers, they should counterattack by flooding them with, e.g., millions of fake/worthless credit card numbers (setting off something like a false-positive atom bomb). Just one snag: US laws protect scammers from these kinds of malicious attacks.
Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
An earlier post described the sclerotic impact of excessive regulatory documentation requirements on real-estate development projects. It turns out that the private sector isn’t the only victim of this tendency:
- The Pentagon got concerned that it might be suffering from hyper-cephalization–too many studies and reports on every topic.
- The Pentagon commissioned a meta-study to estimate the costs of all the studies and reports.
- The Government Accountability Office performed a meta-meta-study saying that the meta-study wasn’t performed correctly according to existing rules and standards.
I think we all know what the logical response to the GAO meta-meta-study is…
The current issue of McKinsey Quarterly features an interesting article on firms crowdsourcing strategy formulation. This is another way that technology may shake up the strategy field (see also Mike’s discussion of the MBA bubble). The article describes examples in a variety of companies. Some, like Wikimedia and Red Hat, aren’t much of a surprise given their open innovation focus. However, we should probably take notice when more traditional companies (like 3M, HCL Technologies, and Rite-Solutions) use social media in this way. For example, Rite-Solutions, a software provider for the US Navy, defense contractors and fire departments, created an internal market for strategic initiatives:
Would-be entrepreneurs at Rite-Solutions can launch “IPOs” by preparing an Expect-Us (rather than a prospectus)—a document that outlines the value creation potential of the new idea … Each new stock debuts at $10, and every employee gets $10,000 in play money to invest in the virtual idea market and thereby establish a personal intellectual portfolio.
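As a thought experiment, the mechanics of such an internal market can be sketched in a few lines (the class, the toy pricing rule, and the employee and idea names below are my own invention for illustration, not Rite-Solutions’ actual system):

```python
class IdeaMarket:
    """Minimal play-money market for strategic initiatives."""
    IPO_PRICE = 10.0    # every new idea-stock debuts at $10
    BUDGET = 10_000.0   # every employee's play-money endowment

    def __init__(self):
        self.invested = {}  # idea -> total play money committed
        self.balances = {}  # employee -> remaining budget

    def ipo(self, idea):
        self.invested[idea] = 0.0

    def invest(self, employee, idea, amount):
        balance = self.balances.setdefault(employee, self.BUDGET)
        if amount > balance:
            raise ValueError("not enough play money")
        self.balances[employee] = balance - amount
        self.invested[idea] += amount

    def price(self, idea):
        # Toy rule: price drifts up $1 per $1,000 of employee backing.
        return self.IPO_PRICE + self.invested[idea] / 1000.0

market = IdeaMarket()
market.ipo("navy-training-game")
market.invest("alice", "navy-training-game", 4000)
market.invest("bob", "navy-training-game", 1000)
print(market.price("navy-training-game"))  # 15.0
```

The point of the mechanism is the price signal: ideas that attract broad internal backing visibly rise, giving management a ranked list of initiatives that employees, not executives, have priced.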
The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis) and little agreement even on a basic definition. I don’t really like to get involved in definitional debates, though I think the idea of a capability, the ability to do/accomplish something (whether individual or collective), is fundamental for strategy scholars.
Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt). One of the questions that I raised, and find quite intriguing, is how we might “grow” a capability. The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling. For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference). I like that intuition. As I work with colleagues in engineering and computer science, this “growth” mentality seems to be implicitly there. Things are not taken for granted, but explained by “growing” them. Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically. What were the choices that led to this history? Who are the central actors? What are the incentives and forms of governance? Etc.
So, if we were to “grow” a capability, I think there are some very basic ingredients. First, I think understanding the nature, capability and choices of the individuals involved is important. Second, the nature of the interactions and aggregation matters. The interaction of individuals and actors can lead to emergent, non-linear and collective outcomes. Third, I think the structural and design-related choices (e.g., markets versus hierarchy) and factors are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”
I’m not sure that the “how do you grow a capability” intuition is helpful in all situations. However, I do find that there is a tendency to use short-hand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and investigate more carefully the constituent parts, mechanisms and interactions that lead to the development or “growth” of a capability.
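For what it’s worth, here is a deliberately toy sketch of what “growing” a capability in silico might look like, using the three ingredients above: heterogeneous individuals, interaction, and a design choice about how they are matched. Every rule and parameter here is an illustrative assumption, not a validated model:

```python
import random

def grow_capability(n_agents=50, periods=500, pair_up=True, seed=42):
    """Grow a collective capability from individual skills."""
    rng = random.Random(seed)
    # Ingredient 1: heterogeneous individuals with different skill levels.
    skills = [rng.random() for _ in range(n_agents)]
    for _ in range(periods):
        if pair_up:
            # Ingredients 2 & 3: a design choice of random pairwise
            # interaction; the weaker partner closes half the skill gap
            # (knowledge spillover).
            i, j = rng.sample(range(n_agents), 2)
            low, high = sorted((i, j), key=lambda k: skills[k])
            skills[low] += 0.5 * (skills[high] - skills[low])
        else:
            # Alternative design: no interaction, only slow solo learning.
            i = rng.randrange(n_agents)
            skills[i] = min(1.0, skills[i] + 0.001)
    # Weakest-link aggregation: the collective capability is only as
    # good as the least skilled member.
    return min(skills)

print("with interaction:   ", grow_capability(pair_up=True))
print("without interaction:", grow_capability(pair_up=False))
```

Even in a model this crude, the collective outcome is not explained by “history” in the abstract: it is grown from who interacts with whom and under what design, which is exactly the unpacking the post is asking for.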
Interesting: synchronized clapping isn’t as loud as unsynchronized clapping. Here’s the article: physics of the rhythmic applause. (Here’s the non-gated arXiv version: “Self-organization in the concert hall: the dynamics of rhythmic applause.”)
We report on a series of measurements aimed to characterize the development and the dynamics of the rhythmic applause in concert halls. Our results demonstrate that while this process shares many characteristics of other systems that are known to synchronize, it also has features that are unexpected and unaccounted for in many other systems. In particular, we find that the mechanism lying at the heart of the synchronization process is the period doubling of the clapping rhythm. The characteristic interplay between synchronized and unsynchronized regimes during the applause is the result of a frustration in the system. All results are understandable in the framework of the Kuramoto model.
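For readers who want to play with the framework the abstract mentions, here is the standard Kuramoto model in a few lines (a textbook version with made-up parameters, not the paper’s calibration): N clappers with slightly different natural tempos couple to the crowd’s mean phase, and the order parameter r in [0, 1] measures how synchronized the applause is.

```python
import cmath
import math
import random

def simulate(n=100, coupling=2.0, spread=0.1, steps=1500, dt=0.01, seed=0):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    # Natural clapping frequencies around 1 clap/second (2*pi rad/s).
    omega = [rng.gauss(2.0 * math.pi, spread) for _ in range(n)]
    for _ in range(steps):
        # Mean field: r * e^{i*psi} is the average of the e^{i*theta_j}.
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        # Kuramoto dynamics: d(theta_i)/dt = omega_i + K*r*sin(psi - theta_i)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_sync = simulate(coupling=2.0, spread=0.1)   # strong coupling, similar tempos
r_noise = simulate(coupling=0.1, spread=2.0)  # weak coupling, diverse tempos
print(f"synchronized applause: r = {r_sync:.2f}")
print(f"incoherent applause:   r = {r_noise:.2f}")
```

The qualitative result matches the paper’s setting: when coupling is strong relative to the spread of natural tempos, the crowd locks into a common rhythm (r near 1); when it is weak, the applause stays incoherent (r near 0).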
And here’s a response by Alexander Rosenberg on how Jerry Fodor slid down the slippery slope of anti-Darwinism, Paul Griffiths on how evolution selects truth, and Richard Boyd on evolutionary psychology.
I’ve been skimming/reading through Michael Nielsen’s (a pioneer in quantum computing) new (2012) book Reinventing Discovery: The New Era of Networked Science (Princeton University Press). The book chronicles various open science and open innovation initiatives past and present: Torvalds and Linux, Tim Gowers’ Polymath Project (see his post: is massively collaborative mathematics possible), the failed quantum wiki (qwiki) effort, Galaxy Zoo, collaborative fiction, the Sloan Digital Sky Survey (SDSS), the Open Architecture Network, Foldit, SPIRES, Paul Ginsparg’s arXiv, the Public Library of Science (PLoS), of course InnoCentive, etc.
My quick take on the book: it is a nice review of the existing forms that open innovation and open science are taking. I’ve read or followed most of the above projects over the years, so the book doesn’t cover much new territory from that perspective. The language in the book isn’t very precise – e.g., “network” isn’t very specific (I suppose in this case it simply means the internet, broadly, and more general openness). But then again, this isn’t really an academic book (lots of great footnotes, though), and as a survey of existing efforts it succeeds.
But beyond detailing the many instances of increased openness in science, the book touches more generally on the possibilities of “citizen science” (David Kirsch posted about citizen science on orgtheory.net, see here). I think there are lots of interesting possibilities: funding, tapping into cognitive ‘surplus,’ perhaps gamification, and many other forms of collaboration. And the book leaves off with some important problems for and questions about open science. How do you get the incentives right for openness? Who should be the gatekeepers? What institutions are needed to support openness? Etc.
Here’s the author speaking at Google a few weeks ago:
Via StrategyProfs reader Andrew Boysen’s Twitter feed (@boysenandrew) — some upcoming, very cool, free online classes: Model Thinking by Scott Page (University of Michigan) and Game Theory by Matt Jackson and Yoav Shoham (Stanford).
I love this trend of free online classes. I’m auditing that mega Artificial Intelligence class just to see how a class with 100,000+ registered students might actually work (I’m guessing a small fraction actually do the work – but still fascinating).
Here’s the pitch for that class by Scott. Sounds fantastic.
An ongoing research puzzle for me has been how distributed movements (open source, Wikipedia) mobilize collective action and align individual incentives and actions. Is the apparent lack of “strategy” a virtue or a vice? For example, Linus Torvalds, founder of Linux, has argued that “brownian motion” drives Linux development:
From: Linus Torvalds
Subject: Re: Coding style – a non-issue
Date: Fri, 30 Nov 2001 16:50:34 -0800 (PST)
On Fri, 30 Nov 2001, Rik van Riel wrote:
> I’m very interested too, though I’ll have to agree with Larry
> that Linux really isn’t going anywhere in particular and seems
> to be making progress through sheer luck.
Hey, that’s not a bug, that’s a FEATURE!
You know what the most complex piece of engineering known to man in the
whole solar system is?
Guess what – it’s not Linux, it’s not Solaris, and it’s not your car.
It’s you. And me.
And think about how you and me actually came about – not through any
complex design.

Right. “sheer luck”.
Well, sheer luck, AND:
- free availability and _crosspollination_ through sharing of “source
code”, although biologists call it DNA.
- a rather unforgiving user environment, that happily replaces bad
versions of us with better working versions and thus culls the herd
(biologists often call this “survival of the fittest”)
- massive undirected parallel development (“trial and error”)
I’m deadly serious: we humans have _never_ been able to replicate
something more complicated than what we ourselves are, yet natural
selection did it without even thinking.
<….later in thread…>
A strong vision and a sure hand sound like good things on paper. It’s just
that I have never _ever_ met a technical person (including me) whom I
would trust to know what is really the right thing to do in the long run.
Too strong a strong vision can kill you – you’ll walk right over the edge,
firm in the knowledge of the path in front of you.
I’d much rather have “brownian motion”, where a lot of microscopic
directed improvements end up pushing the system slowly in a direction that
none of the individual developers really had the vision to see on their
own.

And I’m a firm believer that in order for this to work _well_, you have to
have a development group that is fairly strange and random.
To get back to the original claim – where Larry idolizes the Sun
engineering team for their singlemindedness and strict control – and the
claim that Linux seems to get better “by luck”: I really believe this is true.
The problem with “singlemindedness and strict control” (or “design”) is
that it sure gets you from point A to point B in a much straighter line,
and with less expenditure of energy, but how the HELL are you going to
consistently know where you actually want to end up? It’s not like we know
that B is our final destination.
In fact, most developers don’t know even what the right _intermediate_
destinations are, much less the final one. And having somebody who shows
you the “one true path” may be very nice for getting a project done, but I
have this strong belief that while the “one true path” sometimes ends up
being the right one (and with an intelligent leader it may _mostly_ be the
right one), every once in a while it’s definitely the wrong thing to do.
And if you only walk in single file, and in the same direction, you only
need to make one mistake to die.
In contrast, if you walk in all directions at once, and kind of feel your
way around, you may not get to the point you _thought_ you wanted, but you
never make really bad mistakes, because you always ended up having to
satisfy a lot of _different_ opinions. You get a more balanced system.
So the question for me has been: is this just an accidental feature of distributed movements, or can we actually drive collective action this way?
The recent emergence of #OWS provides an interesting case study unfolding in real time. Fast Company has a nice entry about how the movement came about:
And not posting clear demands, while essentially a failing, has unintended virtue. Anyone who is at all frustrated with the economy–perhaps even 99% of Americans–can feel that this protest is their own.
So is this the way to develop strategy?
At the recent Strategic Management Society meetings in Miami, I attended a session devoted to creating an SMS strategy certificate. (Apparently this is an ongoing initiative that started a year ago or so, although I hadn’t been paying attention.) The idea is to offer a written exam that consultants can take (for a fee) in order to become SMS-certified strategists. (I would put in links to the SMS website for all this–they even have a forum where members can view the tentative list of exam topics and leave comments–but the hamsters that power the site appear either to be on strike or allergic to Chrome.)
My first reaction to this proposed exam was to be reminded of the old story about the grocer who observes a shopper sniffing the meat for freshness and responds, “Lady, could you pass that test?” They had a laundry list of topics forming a kind of core and then planned “electives” in different specialized areas of strategy. Many of the topics are things I’ve heard of but don’t know much about. Others are things that I know about but believe to be vacuous or fatally flawed. It looked like a flat-file version of one of those giant multicolored management textbooks used by undergraduate business majors, which have always depressed me with their pretension and lack of coherence. I’m not sure if I espied Miles and Snow’s categories among the topics flashing by on the Powerpoint, but they did have SWOT analysis, generic strategies, the BCG matrix, vision/mission statements, and a variety of other forms of management Laetrile. Can you imagine being certified in SWOT? In vision statements? It’s almost as embarrassing as Louisiana’s tests for licensing flower arrangers that were mostly repealed under pressure from the Institute for Justice.
Perhaps to maintain buy-in from the heavily academic constituency of SMS, the program is being sold as having no effect on academic curricula or research. The influence is supposed to go entirely from academia to consulting and practice, with no one’s course being pressured to meet the certification content.
It was a peculiar meeting in the Neptune room of the Loews. A working group had been beavering away on a proposed curriculum for a year and was ostensibly soliciting our feedback, but 1) didn’t want to engage in the specifics of what they had come up with and 2) didn’t really want to address the basic question of whether the whole enterprise makes sense. Those in charge took notes on what the people in the room said but it felt like one of those government “request for public comment” setups, where the fix is in and no meaningful reconsideration of the project is possible. One person told me afterward that he had never been in a meeting with such an undercurrent of fear and suppressed tension. There was indeed a whiff of preference falsification in the air.
I was as diplomatic as possible, but expressed some of my concerns. Afterwards, a few people commented to me that they thought that this was a terrible idea but had expected/hoped that its intrinsic hideousness would have killed it off by now. I see no signs of such a spontaneous abortion. Rather, the meetings keep going on and the “process” keeps rolling forward, despite the instantaneously queasy feeling it causes in everyone with whom I discuss it.
Why would the SMS want to do this?
The article recounts the winning team’s strategy in the DARPA Network Challenge – a challenge in which 10 red weather balloons were placed at locations throughout the US. The winning MIT team found them all in less than 9 hours: check out their use of the web, Twitter, and incentives (the $40,000 prize).
In terms of incentives, the MIT team used the promised prize money – $4,000 for each of the 10 balloons. $2,000 per balloon was promised to the first person sending in the balloon’s coordinates, $1,000 to the person who recruited the finder onto the team, $500 to whoever invited the inviter, $250 to whoever invited that person, and so on.
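The arithmetic behind this recursive incentive is a geometric series: each level of the invitation chain receives half of what the level below it got, so even an arbitrarily long chain pays out less than $2,000 × 2 = $4,000 per balloon. A minimal sketch (the names and the chain below are made up):

```python
def payouts(chain):
    """Payout per person for one balloon, given the invitation chain
    ending at the finder, e.g. ['root', ..., 'finder']."""
    amounts = {}
    reward = 2000.0
    for person in reversed(chain):  # finder first, then inviter, etc.
        amounts[person] = reward
        reward /= 2.0               # each level up gets half as much
    return amounts

# dave invited carol, who invited bob, who invited alice;
# alice found the balloon.
chain = ["dave", "carol", "bob", "alice"]
print(payouts(chain))
# Total paid: 2000 + 1000 + 500 + 250 = 3750, under the $4,000 cap.
print(sum(payouts(chain).values()))
```

Because the halving series converges, the team could recruit without limit: deeper chains spread the information (and the incentive to recruit) further while the cost per balloon stays bounded.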
Here are some of the other strategies:
- The second-place team, from Georgia Tech, used an altruism-based approach (the prize money would be donated to the Red Cross) – they found nine of the ten balloons.
- George Hotz, a Twitter celebrity, recruited his followers – he found eight of the ten balloons.
Check out the paper for additional details (lots of cool stuff on networks, recruitment, etc).
Here’s the abstract:
The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web’s ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.
Here’s where the balloons were located: