After all this hardware was installed, an even larger problem was tuning the AGS. In 1988, when we accelerated polarized protons to 22 GeV, we needed 7 weeks of exclusive use of the AGS; this was difficult and expensive. Once a week, Nicholas Samios, Brookhaven’s Director, would visit the AGS Control Room to politely ask how long the tuning would continue and to note that it was costing $1 million a week. Moreover, it was soon clear that, except for Larry Ratner (then at Brookhaven) and me, no one could tune through these 45 resonances; thus, for some weeks, Larry and I worked 12-hour shifts 7 days a week. After 5 weeks Larry collapsed. While I was younger than Larry, I thought it unwise to try to work 24-hour shifts every day. Thus, I asked our Postdoc, Thomas Roser, who until then had worked mostly on polarized targets and scattering experiments, if he wanted to learn accelerator physics in a hands-on way for 12 hours every day. Apparently, he learned well, and now leads Brookhaven’s Collider-Accelerator Division.
Where do great ideas come from? A popular notion among creativity experts is that recombination of preexisting ideas in a new context is the form that most if not all creativity takes. One more datum: Courtesy of my lovely wife, it seems that George Lucas may have been voguing, so to speak, when he came up with one of his most iconic images.
I just saw a recent article in the Chronicle of Higher Education on the emerging field of neuroeconomics. Unlike behavioral economics, where ideas from psychology have been ported over to economics to explain various individual “anomalies” in choice behavior, in neuroeconomics much of the intellectual traffic has gone in the other direction–economic modeling tools are helpful in understanding psychological processes (including where those processes deviate from classic economic theory). The axiomatic approach to choice makes it a lot easier to parse out how the brain’s actual mechanisms do or don’t obey these axioms.
An important guy to watch in this area is Paul Glimcher, who mostly stays out of the popular press but is a hardcore pioneer in trying to create a unified (or “consilient”) science encompassing neuroscience, psychology, and economics. I’ve learned a lot from reading his Foundations of Neuroeconomics (2010) and Decisions, Uncertainty, and the Brain (2004): why reference points (as in prospect theory) are physiologically required; how evolutionary theory makes a functionalist and optimizing account of brain behavior more plausible than a purely mechanical, piecemeal, reflex-type theory; why complementarity of consumption goods presents a difficult puzzle for neuroscience; and much more.
In the article below, Wired reports on a study of when researchers download articles (middle of the night? Yep! Weekends? Yep!) and concludes that scientists are workaholics. The article also opines that it is the intense competition and stress of the scientists’ jobs that cause them to engage in such obviously self-destructive behavior. I think they could have the causal mechanism wrong here. I believe many researchers work at odd hours, at least in part, because they find it pleasurable — not because of external pressure. People end up in these fields (and successful in these fields) because studying something is what they like to do and are good at. Information technology just enables them to more liberally indulge in this rewarding (and rewarded) behavior.
I was scolded just last weekend for the fact that I almost never read fiction anymore. I was afraid to admit that I am often too busy on non-fiction endeavors — like an internet scavenger hunt to figure out just why lobsters maintain telomerase activation throughout their lives, and may thus have a potential lifespan of…wait for it…FOREVER. That is seriously cool — how could a Grisham novel ever compete? But I might be biased because I like researching things, at any hour of the day. If you’re reading this, I bet you do too.
Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis) and little agreement even on a basic definition. I don’t really like to get involved in definitional debates, though I think the idea of a capability, the ability to do/accomplish something (whether individual or collective), is fundamental for strategy scholars.
Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt). One of the questions that I raised, and find quite intriguing, is how we might “grow” a capability. The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling. For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference). I like that intuition. As I work with colleagues in engineering and computer science, this “growth” mentality seems to be implicitly there. Things are not taken for granted, but explained by “growing” them. Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically. What were the choices that led to this history? Who are the central actors? What are the incentives and forms of governance? Etc.
So, if we were to “grow” a capability, I think there are some very basic ingredients. First, I think understanding the nature, capability and choices of the individuals involved is important. Second, the nature of the interactions and aggregation matters. The interaction of individuals and actors can lead to emergent, non-linear and collective outcomes. Third, I think the structural and design-related choices (e.g., markets versus hierarchy) and factors are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”
I’m not sure that the “how do you grow a capability” intuition is helpful in all situations. However, I do find that there is a tendency to use shorthand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and to more carefully investigate the constituent parts, mechanisms and interactions that lead to the development or “growth” of a capability.
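To make the “growing” intuition concrete, here is a minimal agent-based sketch in the Epstein spirit. Everything in it is a hypothetical illustration, not a model from the strategy literature: agents have an individual skill level (the first “ingredient”), pairwise interactions let the weaker partner learn from the stronger (the second), and an aggregation rule defines the collective capability (the third, standing in for structural and design choices). The learning rate, the averaging rule, and the random pairing are all assumptions one could vary.

```python
import random

random.seed(42)  # reproducible toy run

class Agent:
    """An individual with a skill level in [0, 1]."""
    def __init__(self, skill):
        self.skill = skill

    def interact(self, other, learning_rate=0.1):
        # Interaction rule: the weaker agent closes a fraction of
        # the skill gap with the stronger one.
        if self.skill < other.skill:
            self.skill += learning_rate * (other.skill - self.skill)
        else:
            other.skill += learning_rate * (self.skill - other.skill)

def collective_capability(agents):
    # Aggregation rule: the collective is as capable as its average
    # member. Other rules (min, max) yield different emergent outcomes.
    return sum(a.skill for a in agents) / len(agents)

def grow(n_agents=20, n_rounds=200):
    """Return the trajectory of collective capability over time."""
    agents = [Agent(random.random()) for _ in range(n_agents)]
    history = [collective_capability(agents)]
    for _ in range(n_rounds):
        a, b = random.sample(agents, 2)  # random pairwise interaction
        a.interact(b)
        history.append(collective_capability(agents))
    return history

history = grow()
print(f"initial: {history[0]:.3f}, final: {history[-1]:.3f}")
```

Under these assumptions the collective capability is “grown” rather than assumed: it rises step by step out of individual endowments and interaction choices, and changing the governance of who interacts with whom (e.g., hierarchy versus random matching) would change the trajectory.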
It’s the end of the year and many profs are writing letters of recommendation. Here are a few links. First, know that there are differing codes (also some discussion at orgtheory on this). And, biases (e.g., linked to the attractiveness of the student) may play a role in receiving a positive recommendation (but don’t worry, being attractive isn’t always a good thing - sometimes it’s a disadvantage). In short, the signal from letters of recommendation is hard to read. Here’s a piece on the Big 5 personality characteristics and letters of recommendation. Here’s a paper that says letters of recommendation are helpful for medical school admission. This paper says no. And, no, I haven’t read all of the above papers (they were published in journals of varying quality) – I just quickly searched Google Scholar for various papers related to letters of recommendation.
And an off-topic, end-of-year bonus tip: if you’re already behind on grading student papers – the ol’ staircase method can quickly fix things.
This morning, my colleague Josh Gans and I sat in on a general audience talk by Daniel Kahneman about his new book Thinking Fast and Slow. It was interesting to see how the research agenda has progressed and evolved over the past couple of decades. The idea explored in this book, and in the talk, is that cognition can be broken into two “systems” — one that responds instantly and without effort and another that responds with will and effort. The example given to distinguish between the two was being asked to answer the following questions: (a) what is 2 + 2? and, (b) what is 17 x 24? The first comes unbidden and effortlessly to mind. The second requires conscious effort (which has several physiological traits associated with it, such as significant pupil dilation).
Kahneman is a terrific speaker and these issues are inherently fascinating. One of the examples raised a puzzle in Josh’s mind. The example is asking air travelers whether they want to buy insurance. When asked how much they are willing to pay for $100,000 worth of life insurance for an upcoming flight covering death due to any reason, subjects report a number. When asked how much they are willing to pay for $100,000 worth of insurance for death due to a terrorist attack (only), they report a substantially higher number. The reason given for this is that the “fast” system associates terrorism with fear and fear motivates higher willingness to pay for insurance.
The puzzle is: why do insurance companies specifically exclude terrorist acts from life insurance policies? Presumably, a “slow” thinking group of insurance executives could cash in on the “fast” thinking bias of travelers by inducing impulse purchases of terrorist insurance at ticket kiosks at the time of check-in. Yet they don’t. Having recently had some problematic insurance company dealings, Josh’s “fast” thinking answer was that insurance company execs are not very skilled decision makers. I am open to a more rational reason, though I cannot think of what it would be.
Glenn Hoetker recently gave me the opportunity to consider what new contributions the field of psychology could offer to the strategy literature (see the description here). The video illustrates how behavior often depends more on perception than on reality — does it matter if the steering wheel is attached or not if the other driver acts as if it is? Often, researchers are interested in organizational outcomes and theorize that the underlying behaviors are driven by objective reality. What research opportunities are highlighted as we take seriously the subjective nature of our most central constructs?
In this installment, we explore the question, “what is a firm?” This is so taken for granted in the field that most of you will probably stop reading here. Read the rest of this entry »
The latest issue of Strategic Management Journal is a special issue on “behavioral strategy.” The special issue has a piece on “neurostrategy” by Thomas Powell, Dan Levinthal discusses whether there’s even an alternative to behavioral strategy, Chris Bingham and Kathy Eisenhardt write about “rational heuristics,” Bardolet, Fox and Lovallo develop a behavioral perspective on corporate capital allocation, etc. Check it out.
“I am not that surprised that an academic of entrepreneurship (are you kidding me?) would lead a story about one of the world’s best innovators and CEO’s about that he actually and in fact ! OMG had body odour as a teenager because of his diet, not to mention the rest of your embarrassing piece. Forbes would be best sticking with writers that are inspired by such great entrepreneurs as Steve Jobs, and not with writers such as this, who are unhappy they have not had the courage to ‘live the life they love and not settle’ and so sit in front of their computer with not much else to do but trying to bring others down. Shame on you Mr Vermeulen”.
This is just one of the comments I received on my earlier piece “Steve Jobs – the man was fallible” (also published on my Forbes blog). Of course, this was not unanticipated; having the audacity to suggest that, in fact, the great man did not possess the ability to walk on water was the closest thing to business blasphemy. And indeed a written stoning duly followed.
But why is suggesting that a human being like Steve Jobs was in fact fallible – a man whom, in the same piece, I also called “a management phenomenon”, “fantastically able”, “a legend”, and “a great leader” – considered by some to be such an act of blasphemy? All I did was claim that he was “fallible”, “not omnipotent”, and “not always right”, which as far as I can see comes with the definition of being human.
And I guess that’s exactly it; in life and certainly in death Steve Jobs transcended the status of being human and reached the status of deity. A journalist at the Guardian compared the reaction (especially in the US) to the death of Steve Jobs with the reaction in England to the death of Princess Diana: a collective outpouring of almost aggressive emotion by people who only ever saw the person they are grieving for briefly on television or at best from a distance. Suggesting Princess Diana was fallible was not a healthy idea immediately following her death (and still isn’t); nor was suggesting Steve Jobs was human.
We are inclined to deify successful people in the public eye, and in our time that certainly includes CEOs. In the past, in various cultures, it may have been ancient warriors, Olympians, or saints. They became mythical and transcended humanity, quite literally reaching God-like status.
Historians and geneticists argue that this inclination for deification is actually deeply embedded in the human psyche, and that we have evolved to be prone to worship. There is increasing consensus that man came to dominate the earth – and, for instance, drive out the Neanderthals, who were in fact stronger, likely more intelligent, and had more sophisticated tools – because of our superior ability to organize into larger social systems. And a crucial role in this, fostering social cohesion, was played by religion, which centers on myths and deities. This inclination for worship very likely became embedded in our genetic system, and it is yearning to come out and be satisfied; great people such as Jack Welch, Steve Jobs, and Lady Di serve to fulfill this need.
But that of course does not mean that they were infallible and could in fact walk on water. We just don’t want to hear it. Great CEOs realize that their near deification is a gross exaggeration, and sometimes even get annoyed by the suggestion of it – Amex’s Ken Chenault told me that he did not like it at all, and I have seen the same reaction in Southwest’s Herb Kelleher. Slightly less-great CEOs do start to believe their own status, and people like Enron’s Jeff Skilling or Ahold’s Cees van der Hoeven come to mind; not coincidentally, they are often associated with spectacular business downfalls. I have never spoken to Steve Jobs, but I am guessing he might not have disagreed with the qualifications “not omnipotent”, “not always right” and, most of all, “human”.