After all this hardware was installed, an even larger problem was tuning the AGS. In 1988, when we accelerated polarized protons to 22 GeV, we needed 7 weeks of exclusive use of the AGS; this was difficult and expensive. Once a week, Nicholas Samios, Brookhaven’s Director, would visit the AGS Control Room to politely ask how long the tuning would continue and to note that it was costing $1 million a week. Moreover, it was soon clear that, except for Larry Ratner (then at Brookhaven) and me, no one could tune through these 45 resonances; thus, for some weeks, Larry and I worked 12-hour shifts 7 days each week. After 5 weeks Larry collapsed. While I was younger than Larry, I thought it unwise to try to work 24-hour shifts every day. Thus, I asked our postdoc, Thomas Roser, who until then had worked mostly on polarized targets and scattering experiments, if he wanted to learn accelerator physics in a hands-on way for 12 hours every day. Apparently, he learned well; he now leads Brookhaven’s Collider-Accelerator Division.
An article in the current edition of the Economist describes Alfred Marshall’s original observation of geographic clusters of economic activity. It identifies four main logics for clustering:
First, some may depend on natural resources, such as a coalfield or a harbour. Second, a concentration of firms creates a pool of specialised labour that benefits both workers and employers: the former are likely to find jobs and the latter are likely to find staff. Third, subsidiary trades spring up to supply specialised inputs. Fourth, ideas spill over from one firm to the next, as Marshall observed.
However, there are also costs to being in a cluster, such as higher rents or the transportation costs of reaching distant customers and suppliers. The explosion of communications and computing power should have weakened the logic of clustering: natural resources matter less, and workers can live farther from their offices.
It hasn’t worked out that way. Pools of human capital continue to drive clustering, since people prefer to work near where they live, and very small distances can make a big difference. The article goes on to describe clusters within clusters in the Bay Area for specialized knowledge.
The patent system is “a real chaos”. Its faults were laid bare yesterday in an extensive New York Times article, which quickly reached the “most emailed” list (The Patent, Used as a Sword; and see Melissa Schilling’s review). But the same article also hedged by reminding us that “patents are vitally important to protecting intellectual property”. Is intellectual property really essential for innovation? For an answer, look just a little past commercial software and you will see vast open collaboration without patents or copyright. Wikipedia, an open initiative, answers many of our questions. Open source software such as Linux and Android powers most commercial websites and mobile devices, respectively. In myriad forums, mailing lists and online communities, users contribute reviews, provide solutions, and share tips with others. Science has been progressing by enlisting thousands of volunteers to classify celestial objects and decipher planetary images. Innovation without patents is real. Researchers estimate that open collaboration and user innovation produce more innovation than the patented kind. Our legal and commercial system can do more to encourage it.
A great New York Times article this morning (link below) details ways in which the patent system gets used as both an offensive and a defensive weapon, with billions of dollars of collateral damage to start-ups, consumers (see the “patent tax”), and innovation in general. The victim in the opening vignette (Vlingo, a voice-recognition software start-up) might have been saved by a simple change in the rules: make the losers of patent lawsuits pay the winners’ legal costs. It turns out that it’s rather easy to kill small firms (or force them to sell to you) by launching a patent lawsuit that bleeds them dry with legal fees. You don’t have to win — you just have to force them to fight until they no longer have any money. Vlingo ultimately won the patent lawsuit that had been filed by a much larger rival, but had to loot its own meager coffers to pay the legal fees of doing so. Vlingo slumped home with its victory and shut its doors for good. If the losers of such battles paid the winners’ legal fees, such fights might be both less common and less likely to be fatal.
The article also points out that software patents have proven particularly dangerous because they are prone to protecting vague claims like “a software algorithm for calculating online prices,” thereby granting the patent holder vast tracts of technological real estate. An interesting talk by Tilo Peters at the Strategic Management Society conference yesterday points to another useful tool for curbing some of this misuse of the patent system: strategic disclosure. If, for example, you published a manifesto describing all of the things you might do with software in the reasonable future (remember, patents have a “usefulness” condition, so you’re not allowed to claim something deemed non-feasible), you might be able to essentially render that technological territory unpatentable. It wouldn’t prevent competitors from developing in those areas, but it could keep them from patenting in those areas. In essence, it transforms a space in which property rights may be allocated into one in which they may not be. I’ve left out some details, but you get the idea.
Now it occurs to me that a fair amount of strategic disclosure in the smart phone space took place in the form of Star Trek episodes. I’m going to go look for references to prior art…
Alex Tabarrok’s pictorial commentary on patent policy, drawn on a napkin, posits that the current patent system is somewhat too strong and thereby decreases innovation (the link to his original post is below). I have to say, however, that I don’t think patent strength is the problem. The problem is that the growth in patent applications over the last two decades has vastly exceeded the growth in resources available to the patent office, resulting in 1) long delays between application and granting (which can render patents completely pointless in fast-moving industries), and 2) inadequate capacity to examine applications for novelty, usefulness and non-obviousness. This lowers the value of good patents (because they aren’t granted quickly enough, or may be challenged on spurious grounds) and increases the likelihood of bad patents being granted. As a result, for many individuals and firms, the expected net gains from manipulating the patent system for purposes of extortion (hostage taking, patent trolling) now exceed the expected net gains from using it to actually innovate.
It’s difficult to assess how patent strength affects innovation without first making sure that patents are being granted and used the way the system had originally intended.
Alex Tabarrok’s original post can be found here: http://marginalrevolution.com/marginalrevolution/2012/09/patent-theory-on-the-back-of-a-napkin.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+marginalrevolution%2Ffeed+%28Marginal+Revolution%29
The current issue of McKinsey Quarterly features an interesting article on firms crowd-sourcing strategy formulation. This is another way that technology may shake up the strategy field (see also Mike’s discussion of the MBA bubble). The article describes examples from a variety of companies. Some, like Wikimedia and Red Hat, aren’t much of a surprise given their open innovation focus. However, we should probably take notice when more traditional companies (like 3M, HCL Technologies, and Rite-Solutions) use social media in this way. For example, Rite-Solutions, a software provider for the US Navy, defense contractors and fire departments, created an internal market for strategic initiatives:
Would-be entrepreneurs at Rite-Solutions can launch “IPOs” by preparing an Expect-Us (rather than a prospectus)—a document that outlines the value creation potential of the new idea … Each new stock debuts at $10, and every employee gets $10,000 in play money to invest in the virtual idea market and thereby establish a personal intellectual portfolio.
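For concreteness, here’s a minimal sketch of how such an internal idea market could work in code. Only the $10 debut price and $10,000 play-money budget come from the article; the idea names, employees, and investment amounts below are invented for illustration:

```python
# Sketch of an internal "idea market": each idea debuts at $10, and every
# employee gets $10,000 in play money to allocate across ideas. Total
# investment serves as a crowd signal of an idea's perceived value.

class IdeaMarket:
    DEBUT_PRICE = 10      # each new "stock" debuts at $10
    BUDGET = 10_000       # play money per employee

    def __init__(self):
        self.invested = {}    # idea -> total play money invested
        self.balances = {}    # employee -> remaining budget

    def list_idea(self, idea):
        self.invested.setdefault(idea, 0)

    def invest(self, employee, idea, amount):
        balance = self.balances.setdefault(employee, self.BUDGET)
        if idea not in self.invested:
            raise KeyError(f"unknown idea: {idea}")
        if amount > balance:
            raise ValueError("insufficient play money")
        self.balances[employee] = balance - amount
        self.invested[idea] += amount

    def ranking(self):
        # Ideas ranked by crowd investment, highest first.
        return sorted(self.invested.items(), key=lambda kv: -kv[1])

market = IdeaMarket()
market.list_idea("fire-dept dashboard")
market.list_idea("navy logistics tool")
market.invest("alice", "fire-dept dashboard", 4000)
market.invest("bob", "fire-dept dashboard", 2500)
market.invest("bob", "navy logistics tool", 3000)
print(market.ranking()[0][0])  # → fire-dept dashboard
```

The ranking is the whole point of the mechanism: management reads the aggregate investments as a prioritized list of initiatives backed by employees’ own (play-money) stakes.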
This online education thing seems to be picking up steam: Stanford Professors Daphne Koller & Andrew Ng Also Launching a Massive Online Learning Startup. The missing piece is still certification. Once that exists, bricks-and-mortar delivery of higher ed will face some nasty competition … and we’ve seen how people feel about BaM — just ask Best Buy, Borders, or Blockbuster.
Of course, retail shopping is not the same thing as getting an education, but there are similarities. In both cases, BaM is expensive and inconvenient, and in both cases the younger generations are extremely comfortable with the online alternative. Yet it’s the differences that should be most worrying. Education on demand solves real scheduling and access problems, a feature that will be highly appealing to most potential students. Even more threatening to the traditional model: the price of online education taught by professors from top schools is not just lower by the savings in BaM distribution costs — it’s zero. Think about that – zero.
Most of the colleagues with whom I discuss these developments argue that there is simply no substitute for the real-time, in-person, interactions available in the traditional classroom setting. They believe that this will continue to motivate students to pay a premium for the experience. I wonder. It is not obvious to me that students get some special utility premium from classroom interactions. Ask yourself this: do your students consider “cold-calling” a welcome feature of sitting in your class? In my judgment, most students would actually pay to avoid it.
Besides the assessment problem, there is another hurdle for the online education model. Clearly, no professor can answer the specific questions of 100,000 students. The online institutions are going to have to find a way to staff some form of virtual office hours in which students can get answers to their questions. My sense is that there is plenty of well-trained talent in India to staff office hours for these courses. Heck, in ten years, online course providers will be able to pick up highly experienced, unemployed domestic PhDs to man the chat rooms on the cheap.
I’m seeing more and more work using Mechanical Turk as a subject pool. Here’s another piece discussing some of the features, advantages and problems with Mechanical Turk – Rand, D (2011), The promise of mechanical turk: how online labor markets can help theorists run behavioral experiments, Journal of Theoretical Biology.
Combining evolutionary models with behavioral experiments can generate powerful insights into the evolution of human behavior. The emergence of online labor markets such as Amazon Mechanical Turk (AMT) allows theorists to conduct behavioral experiments very quickly and cheaply. The process occurs entirely over the computer, and the experience is quite similar to performing a set of computer simulations. Thus AMT opens the world of experimentation to evolutionary theorists. In this paper, I review previous work combining theory and experiments, and I introduce online labor markets as a tool for behavioral experimentation. I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects’ self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls. I hope this paper will encourage evolutionary modelers to enter the world of experimentation, and help to strengthen the bond between theoretical and empirical analyses of the evolution of human behavior.
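The 81–98% consistency figures in the abstract are simple match rates: for each demographic variable, the share of subjects who report the same value in both studies. A quick sketch of that computation, on entirely made-up data (the subjects and values are invented):

```python
# Cross-study consistency check of the kind the abstract describes:
# for each variable, the fraction of subjects appearing in both studies
# who report the same value in both. Data below are illustrative only.

def agreement_rate(study1, study2, variable):
    shared = study1.keys() & study2.keys()   # subjects present in both studies
    matches = sum(study1[s][variable] == study2[s][variable] for s in shared)
    return matches / len(shared)

study1 = {"s1": {"country": "US", "age": 25},
          "s2": {"country": "IN", "age": 31},
          "s3": {"country": "US", "age": 40}}
study2 = {"s1": {"country": "US", "age": 25},
          "s2": {"country": "IN", "age": 32},   # inconsistent age report
          "s3": {"country": "US", "age": 40}}

print(agreement_rate(study1, study2, "country"))  # all 3 subjects agree
print(agreement_rate(study1, study2, "age"))      # 2 of 3 subjects agree
```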
An interesting development in venture financing is the creation of the “lean finance” model. This is an adaptation to winner-take-all markets; i.e., markets in which the best performer captures a massive share of the market. The funding model is to provide the minimum funding necessary to reach the point at which it becomes apparent who the winner is likely to be. Then, investors do a huge, “shovel-in” round of funding to seal it. On Friday, the Swedish e-commerce startup Klarna raised $155m, following its May 2010 round of only $9m.
Dropbox, a company whose product is well-known in academic circles, similarly raised $250m at the point it boasted 45m users, following a previous round of only $7m. An intriguing wrinkle is that different experts may have different opinions about who the winner is going to be. Around the same time Dropbox raised its shovel-in round of funding, so did one of its primary competitors, Box.net, which raised $81m at the point it hit 7m users. The solution to the puzzle may be that Box.net is viewed as the likely winner of the enterprise segment (having turned down a $500m acquisition offer), while Dropbox is poised to take the personal user segment.
Another class to add to the mix (here’s the previous post) — Chuck Eesley is teaching a free online Technology Entrepreneurship class. I exchanged emails with Chuck and a mere 33,000 people have signed up for the course. So far.
Twitter is emerging as a popular source of data for scientists — see the various Twitter-related arXiv articles here. For example, here’s a piece validating the Dunbar number by looking at social interactions among 1.7 million people on Twitter (now published in PLoS ONE). At orgtheory.net I posted about a recently published Science piece attempting to measure aggregate mood by analyzing millions of tweets.
Here’s a set of papers studying Twitter and health-related issues. One suggests that monitoring the Twittersphere makes “bio-surveillance” possible – OMG U got flu? Analysis of shared health messages for bio-surveillance.
Here’s the abstract:
Background: Micro-blogging services such as Twitter offer the potential to crowdsource epidemics in real-time. However, Twitter posts (‘tweets’) are often ambiguous and reactive to media trends. In order to ground user messages in epidemic response we focused on tracking reports of self-protective behaviour such as avoiding public gatherings or increased sanitation as the basis for further risk analysis. Results: We created guidelines for tagging self-protective behaviour based on Jones and Salathé (2009)’s behaviour response survey. Applying the guidelines to a corpus of 5283 Twitter messages related to influenza-like illness showed a high level of inter-annotator agreement (kappa 0.86). We employed supervised learning using unigrams, bigrams and regular expressions as features with two supervised classifiers (SVM and Naive Bayes) to classify tweets into 4 self-reported protective behaviour categories plus a self-reported diagnosis. In addition to classification performance we report moderately strong Spearman’s Rho correlation by comparing classifier output against WHO/NREVSS laboratory data for A(H1N1) in the USA during the 2009-2010 influenza season. Conclusions: The study adds to evidence supporting a high degree of correlation between pre-diagnostic social media signals and diagnostic influenza case data, pointing the way towards low cost sensor networks. We believe that the signals we have modelled may be applicable to a wide range of diseases.
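For readers curious about the classification step, here’s a stripped-down sketch of a Naive Bayes text classifier of the kind the paper uses — unigram features with add-one smoothing. The tweets, labels, and category names below are invented toy data, not the paper’s corpus, and the real pipeline also used bigrams, regular expressions, and an SVM:

```python
import math
from collections import Counter, defaultdict

# Toy Naive Bayes classifier: unigram counts per class, add-one smoothing,
# maximum a posteriori class assignment. Training data is invented.

def tokenize(text):
    return text.lower().split()

def train(examples):
    class_counts = Counter()              # label -> number of examples
    word_counts = defaultdict(Counter)    # label -> word frequencies
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, n in class_counts.items():
        lp = math.log(n / total)          # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            # add-one smoothing so unseen words don't zero out the class
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

examples = [
    ("staying home avoiding crowds this week", "avoidance"),
    ("skipping the concert to avoid the flu", "avoidance"),
    ("washing hands and using sanitizer more", "sanitation"),
    ("bought hand sanitizer extra washing", "sanitation"),
]
model = train(examples)
print(classify("avoiding crowds and staying home", *model))  # → avoidance
```

A production version would of course need far more data plus the bigram and regular-expression features the paper describes, but the per-class word-likelihood machinery is the same.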
I grew up in Finland where winter sports are huge. Several winter sports saw quite significant transformations while I was following them in the late 80s. One was skijumping, the other cross-country skiing.
In skijumping one of the big rivalries during the 1988-1989 season was between Sweden’s Jan Boklov and Finland’s Matti Nykanen. Nykanen was a skijumping phenom – by the late 80s he was a veteran who had already won four world cup titles. But Boklov introduced a style of skijumping that radically changed the physics and even the aesthetics of the sport. His V-style jumps carried him farther and eventually led to a “paradigm shift” of sorts in the sport (at first, judges heavily penalized the technique). “Style” points were quite important in skijumping (see Nykanen’s style versus Boklov’s style in the clip below), but the “uglier” V-style eventually had to be accepted given its clear superiority. The V-style, introduced by Boklov (and a few others) in the late 80s, is now the universal technique in skijumping.
A similar stylistic innovation also radically reshaped cross-country skiing. Traditional cross-country skiing was largely about a gliding motion on an established track. But in the 70s and 80s it became increasingly clear that “skating” was actually a far faster approach (the Finn Pauli Siitonen gets some of the credit, though the technique was used by many in different forms). By the late 1980s, the skating technique had led to the creation of two separate sports: classic cross-country and skate skiing (separate events in the Olympics as well).
There you go: a bit of random trivia you might need in Trivial Pursuit, to impress your friends over Thanksgiving dinner, or as a sports-related example for a class discussion.
So, here’s the argument: inventions (including theories and technologies) are inevitably invented. (This links nicely with Sid Winter’s thesis.) Thus we shouldn’t focus on or celebrate mythic “heroes” who happen to get credit for inventions that are inevitable – someone else would have invented them if the hero hadn’t been around (Simonton highlights the increased incidence of simultaneous discovery; here’s a wiki site cataloguing simultaneous discoveries). As Robert Merton put it – “discoveries become virtually inevitable when prerequisite kinds of knowledge and tools accumulate.”
Kevin Kelly talks about this in his book What Technology Wants, pulling in examples from mathematics and physics. For example, Einstein was ahead of his time with the theory of relativity, but other scholars were concurrently exploring similar questions and would inevitably have arrived at the same theory.
Sort of an interesting issue embedded in here. That is, discovering the realities and truths of nature is one thing – but clearly the possibilities and forms that technologies might take is a very different issue. This is the space that the STS folks (Science & Technology Studies) have carved out – though they employ a confused epistemology and frequently overstep their bounds (Latour/Woolgar’s Laboratory Life is an example of this problem). More perhaps on this later.
Here’s a figure from Kevin Kelly’s book (sorry, the quality isn’t the hottest).
Last week there was a very useful WSJ article reporting on an analysis of the supplier relationships at the core of the new iPhone 4S (here … while it lasts). It makes a nice mini-case for checking how well our theories explain actual outcomes.
The article notes that Qualcomm “is the big winner” because it is supplying a suite of chips that adds up to $15 per phone. Intel is a loser: it acquired Infineon’s wireless-chip business, and those chips were then dropped from the product.
Samsung lost out on the memory chips to its Korean rival Hynix — a surprise, since Samsung is known to have the more reliable product. Interestingly, however, Samsung did retain its role as the manufacturer of Apple’s proprietary A5 processor, which provides the iPhone 4S and the iPad 2 with the bulk of their computing power.
The most recent issue of Harvard Law Review has an interesting piece on open source and technology strategy – “The host’s dilemma: strategic forfeiture in platform markets for informational goods.”
Voluntary forfeiture of intellectual assets — often, exceptionally valuable assets — is surprisingly widespread in information technology markets. A simple economic rationale can account for these practices. By giving away access to core technologies, a platform holder commits against expropriating (and thereby induces) user investments that support platform value. To generate revenues that cover development and maintenance costs, the platform holder must regulate access to other goods and services within the total consumption bundle. The trade-off between forfeiting access (to induce adoption) and regulating access (to recover costs) anticipates the substantial convergence of open and closed innovation models. Organizational patterns in certain software and operating system markets are consistent with this hypothesis: open and closed structures substantially converge across a broad range of historical and contemporary settings and commercial and noncommercial environments. In particular, this Article shows that (i) contrary to standard characterizations in the legal literature, leading “open source” software projects are now primarily funded and substantially governed and staffed by corporate sponsors, and (ii) proprietary firms have formed nonprofit consortia and other cooperative arrangements and adopted “open source” licensing strategies in order to develop operating systems for the smartphone market.
I just searched for Occupy Wall Street apps on my phone – the result is below. I suspect novel organizational innovations will emerge (protest structures, communications, organizational forms, etc.), and these apps themselves have elements of novelty. BusinessWeek discusses a few of them under the title “million app march.”