Great Moments in Tacit Knowledge, High-Energy Physics Division

From Alan Krisch’s 2010 account of research involving the (still-unresolved) anomalous behavior of transversely polarized colliding protons:
After all this hardware was installed, an even larger problem was tuning the AGS. In 1988, when we accelerated polarized protons to 22 GeV, we needed 7 weeks of exclusive use of the AGS; this was difficult and expensive. Once a week, Nicholas Samios, Brookhaven’s Director, would visit the AGS Control Room to politely ask how long the tuning would continue and to note that it was costing $1 Million a week. Moreover, it was soon clear that, except for Larry Ratner (then at Brookhaven) and me, no one could tune through these 45 resonances; thus, for some weeks, Larry and I worked 12-hour shifts 7 days each week. After 5 weeks Larry collapsed. While I was younger than Larry, I thought it unwise to try to work 24-hour shifts every day. Thus, I asked our Postdoc, Thomas Roser, who until then had worked mostly on polarized targets and scattering experiments, if he wanted to learn accelerator physics in a hands-on way for 12 hours every day. Apparently, he learned well, and now leads Brookhaven’s Collider-Accelerator Division.
Score a data point for the individualist view of organizational capability.

Creativity=recombination, Jedi edition

Where do great ideas come from? A popular notion among creativity experts is that most, if not all, creativity takes the form of recombining preexisting ideas in a new context. One more datum: Courtesy of my lovely wife, it seems that George Lucas may have been voguing, so to speak, when he came up with one of his most iconic images.


Neuroeconomic imperialism?

I just saw a recent article in the Chronicle of Higher Education on the emerging field of neuroeconomics. Unlike behavioral economics, where ideas from psychology have been ported over to economics to explain various individual “anomalies” in choice behavior, in neuroeconomics much of the intellectual traffic has gone in the other direction–economic modeling tools are helpful in understanding psychological processes (including where those processes deviate from classic economic theory). The axiomatic approach to choice makes it a lot easier to parse out how the brain’s actual mechanisms do or don’t obey these axioms.

An important guy to watch in this area is Paul Glimcher, who mostly stays out of the popular press but is a hardcore pioneer in trying to create a unified (or “consilient”) science encompassing neuroscience, psychology, and economics. I’ve learned a lot from reading his Foundations of Neuroeconomic Analysis (2010) and Decisions, Uncertainty, and the Brain (2004): why reference points (as in prospect theory) are physiologically required; how evolutionary theory makes a functionalist and optimizing account of brain behavior more plausible than a purely mechanical, piecemeal, reflex-type theory; why complementarity of consumption goods presents a difficult puzzle for neuroscience; and much more.
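For readers who want the reference-point idea in one formula, here is the standard Kahneman–Tversky value function (the textbook behavioral summary, not a claim about Glimcher’s own neural models): an outcome x is evaluated relative to a reference level r rather than as absolute wealth, with losses looming larger than gains.

```latex
% Prospect-theory value function stated relative to a reference point r.
% Typical estimated parameters (Tversky & Kahneman, 1992):
%   alpha ~ beta ~ 0.88 (diminishing sensitivity), lambda ~ 2.25 (loss aversion).
v(x) =
\begin{cases}
  (x - r)^{\alpha}            & x \ge r \quad \text{(gains: concave)} \\
  -\lambda\,(r - x)^{\beta}   & x < r   \quad \text{(losses: steeper and convex)}
\end{cases}
```

The rough physiological point, as I read Glimcher, is that neurons encode value as changes from an adapted baseline firing rate rather than on an absolute scale, so something playing the role of r is built into the hardware.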



Scientists as workaholics…

In the article below, Wired reports on a study of when researchers download articles (middle of the night? Yep! Weekends? Yep!) and concludes that scientists are workaholics. The article also opines that it is the intense competition and stress of the scientists’ jobs that cause them to engage in such obviously self-destructive behavior. I think they could have the causal mechanism wrong here. I believe many researchers work at odd hours, at least in part, because they find it pleasurable — not because of external pressure. People end up in these fields (and successful in these fields) because studying something is what they like to do and are good at. Information technology just enables them to more liberally indulge in this rewarding (and rewarded) behavior.

I was scolded just last weekend for the fact that I almost never read fiction anymore. I was afraid to admit that I am often too busy on non-fiction endeavors — like an internet scavenger hunt to figure out just why lobsters maintain telomerase activation throughout their lives, and may thus have a potential lifespan of…wait for it…FOREVER. That is seriously cool — how could a Grisham novel ever compete? But I might be biased because I like researching things, at any hour of the day. If you’re reading this, I bet you do too.

-Melissa

http://www.wired.com/wiredscience/2012/08/the-results-are-in-scientists-are-workaholics/


Individual Bias and Collective Truth?

Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)

Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.

But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.

Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.

The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another, regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.


How do you grow a capability?

The “dynamic capabilities” literature, I think, is a bit of a mess: lots of jargon, conflicting arguments (and levels of analysis), and little agreement even on a basic definition. I don’t really like to get involved in definitional debates, though I think the idea of a capability, the ability to do or accomplish something (whether individual or collective), is fundamental for strategy scholars.

Last weekend I was involved in a “microfoundations of strategy” panel (with Jay Barney and Kathy Eisenhardt). One question that I raised, and find quite intriguing, is how we might “grow” a capability. The intuition for “growing” something, as a form of explanation, comes from simulation and agent-based modeling. For example, Epstein has argued, “if you didn’t grow it, you didn’t explain it” (here’s the reference). I like that intuition. As I work with colleagues in engineering and computer science, this “growth” mentality seems to be implicitly there. Things are not taken for granted, but explained by “growing” them. Capabilities aren’t just the result of “history” or “experience” (a common explanation in strategy); rather, that history and experience need to be unpacked and understood more specifically. What were the choices that led to this history? Who are the central actors? What are the incentives and forms of governance? Etc.

So, if we were to “grow” a capability, I think there are some very basic ingredients. First, understanding the nature, capabilities, and choices of the individuals involved is important. Second, the nature of the interactions and aggregation matters: the interaction of individuals and actors can lead to emergent, non-linear, and collective outcomes. Third, structural and design-related choices (e.g., markets versus hierarchy) are important in the emergence (or not) of capabilities. Those are a few of the “ingredients.”

I’m not sure that the “how do you grow a capability” intuition is helpful in all situations. However, I do find that there is a tendency to use short-hand code words (routines, history, experience), and the growth notion requires us to open up these black boxes and investigate more carefully the constituent parts, mechanisms, and interactions that lead to the development or “growth” of a capability.
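To make the “grow it” intuition concrete, here is a deliberately minimal toy sketch (all names, parameters, and the matching rule are invented for illustration and aren’t drawn from any particular paper): heterogeneous individual skills, a simple aggregation rule, and one structural choice about who interacts with whom are enough to generate a collective capability that “grows” over repeated interaction.

```python
# Toy generative sketch of a capability, in the spirit of Epstein's
# "if you didn't grow it, you didn't explain it." Purely illustrative;
# every number below is made up.
import random

random.seed(42)

N_AGENTS = 20        # ingredient 1: individuals with heterogeneous skills
N_PERIODS = 200      # "history" unpacked into repeated interactions
LEARNING_RATE = 0.1  # how much a less-skilled partner learns per interaction

skills = [random.uniform(0.1, 0.5) for _ in range(N_AGENTS)]

def collective_capability(skills):
    # Ingredient 2: aggregation. A nonlinear rule in which the weakest
    # member partly drags down joint performance.
    return 0.5 * min(skills) + 0.5 * (sum(skills) / len(skills))

print("Before:", round(collective_capability(skills), 3))

for _ in range(N_PERIODS):
    # Ingredient 3: a structural/design choice. Here agents are matched
    # at random (market-like); a hierarchy might deliberately pair the
    # least-skilled member with the most-skilled one.
    a, b = random.sample(range(N_AGENTS), 2)
    lo, hi = sorted((a, b), key=lambda i: skills[i])
    # Knowledge transfer: the less-skilled partner learns from the other.
    skills[lo] += LEARNING_RATE * (skills[hi] - skills[lo])

print("After: ", round(collective_capability(skills), 3))
```

Changing the matching rule or the aggregation function changes how fast, and whether, the collective capability improves, which is exactly the kind of unpacking of “history” and “experience” that the growth notion asks for.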


Letters of recommendation and a bonus end-of-year tip

It’s the end of the year and many profs are writing letters of recommendation. Here are a few links. First, know that there are differing codes (also some discussion at orgtheory on this). And biases (e.g., linked to the attractiveness of the student) may play a role in receiving a positive recommendation (but don’t worry, being attractive isn’t always a good thing – sometimes it’s a disadvantage). In short, the signal from letters of recommendation is hard to read. Here’s a piece on the Big 5 personality characteristics and letters of recommendation. Here’s a paper that says letters of recommendation are helpful for medical school admission. This paper says no. And, no, I haven’t read all of the above papers (they were published in journals of varying quality) – I just quickly searched Google Scholar for various papers related to letters of recommendation.

And an off-topic, end-of-year bonus tip: if you’re already behind on grading student papers – the ol’ staircase method can quickly fix things.

