Happy Talk About Published Scientific Error and Fraud
Posted: January 4, 2013 Filed under: ethics, research

Over at the American Scientist (in an overall interesting Jan.–Feb. 2013 issue) we have a column arguing that there's no need to worry about a contagion of fraud and error in scientific publication, even though the number of publications has exploded and the number of retractions has exploded along with them. The basic pitch: the scientific literature is wonderfully self-correcting. The evidence given: the ratio of voluntary corrections to retractions for fraud looks kind of high, and journals with more aggressive and welcoming policies toward corrections have more of them. I kid you not.
But wait, you say. How is that evidence at all probative? Good question, as one says when the student goes right where we want to take the discussion. At the very least, we'd want to see whether the rate of retractions is going up over time, but somehow those figures and graphs don't appear in the article. But what we'd really like to know is how many non-retracted, non-corrected, and non-commented articles are in fact erroneous or misleading despite peer review, and here the article is silent. Its evidence is almost completely non-responsive to the question it purports to address. But the problem goes deeper.
Recent public concerns, including on this blog, have noted pressures for sensationalism, publication bias, data snooping and experimental tuning bias, and many similar causally based arguments. John Ioannidis has made a pretty good career pounding on these issues and trying to place upper and lower bounds on the problem. The devastating Begley and Ellis study of "landmark" papers in preclinical cancer research found that only 6 of 53 had reproducible results, even after going back to the original investigators and sometimes even after the original investigators themselves tried to reproduce their published results. Here is what the latter authors think about the health of the peer-reviewed publishing system in preclinical cancer research:
The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.
Of this substantial and growing literature on the prevalence of error and publication of invalid results, the American Scientist article is entirely innocent. Instead, it uses a single Wall Street Journal article as its target for attack, and even there ignores the non-anecdotal parts of the story – evidence that retractions have been growing faster than publications since 2001 (up 1500% vs. a 44% increase in papers, implying roughly a tenfold rise in the retraction rate), that the time lag between publication and retraction is growing, and that retractions in biomedicine related to fraud have been growing faster than those due to error and constitute about 75% of total retractions.
Perhaps a corrigendum is in order over at the Am Sci.
UPDATE:
A September 2012 article in PNAS found that most retractions are caused by misconduct rather than error:
A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.
Scientists as workaholics…
Posted: October 15, 2012 Filed under: psychology, research | Tags: professors, researchers, scientists, workaholics

In the article below, Wired reports on a study of when researchers download articles (middle of the night? Yep! Weekends? Yep!) and concludes that scientists are workaholics. The article also opines that it is the intense competition and stress of scientists' jobs that cause them to engage in such obviously self-destructive behavior. I think they could have the causal mechanism wrong here. I believe many researchers work at odd hours, at least in part, because they find it pleasurable — not because of external pressure. People end up in these fields (and successful in these fields) because studying something is what they like to do and are good at. Information technology just enables them to indulge more liberally in this rewarding (and rewarded) behavior.
I was scolded just last weekend for the fact that I almost never read fiction anymore. I was afraid to admit that I am often too busy on non-fiction endeavors — like an internet scavenger hunt to figure out just why lobsters maintain telomerase activation throughout their lives, and may thus have a potential lifespan of…wait for it…FOREVER. That is seriously cool — how could a Grisham novel ever compete? But I might be biased because I like researching things, at any hour of the day. If you’re reading this, I bet you do too.
-Melissa
http://www.wired.com/wiredscience/2012/08/the-results-are-in-scientists-are-workaholics/
Twittering Strategy Profs
Posted: August 22, 2012 Filed under: Corporate strategy, research

For those of you who also follow Twitter: LDRLB, an "online think tank that shares insights from research on leadership, innovation, and strategy," has just posted a list of Top Professors on Twitter. The categories are Leadership, Innovation, and Strategy (15 profs in each category). Good lists — all good folks with thoughtful views on the world of strategy. Nice to see a number of StrategyProfs bloggers listed.
“Big Data” Business Strategy for Scammers
Posted: June 25, 2012 Filed under: collective behavior, Corporate strategy, economics, game theory, research

A terrific paper by Cormac Herley of Microsoft Research has come out, entitled "Why Do Nigerian Scammers Say They Are from Nigeria?" It turns out that 51% of scam emails mention Nigeria as the source of funds. Given that "Nigerian scammer" now makes it regularly into joke punch-lines, why in the world would scammers continue to identify themselves in this way? The paper was mentioned in a news item here, if you want the executive-summary version but, really, I can't imagine readers of this blog not finding the actual paper worthwhile and fun (it contains a terrific little model of scamming).
In a nutshell, the number of people who are gullible enough to fall for an online scam is tiny compared to the population that has to be sampled. This creates a huge false-positive problem, that is, people who respond in some way and, hence, require an expenditure of scammer resources, but who ultimately do not follow through on being duped.
As the author explains, in these situations false positives (people identified as viable marks but who do not ultimately fall for the scam) must be balanced against false negatives (people who would fall for the scam but who are not targeted by the scammer). Since targeting is essentially costless, the main concern of scammers is the false positive: someone who responds to an initial email with replies, phone calls, etc. – all of which require scammer resources to field – but who eventually fails to take the bait. Apparently, it does not take too many false positives before the scam becomes unprofitable. What makes the problem serious is that the vulnerable population is minuscule relative to the population that has to be sampled (i.e., with an initial email).
Scammer solution? Give every possible hint – including identifying yourself as being from Nigeria – that you are a stereotypical scammer, without actually saying so. Anyone replying to such an offer must be incredibly naive and uninformed (to say the least). False positives under this strategy drop considerably!
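For the quantitatively inclined, here is a toy sketch of that logic in Python. Every number in it (population size, viability rate, response rates, payoff, cost per reply) is invented for illustration; only the structure of the argument comes from Herley's paper:

```python
# Toy model of the scammer's targeting problem (all numbers invented).
# A "viable" mark is someone who would ultimately pay; every other replier
# is a false positive who consumes scammer effort and pays nothing.

def expected_profit(pop, viable_rate, reply_viable, reply_other,
                    payoff, cost_per_reply):
    viable = pop * viable_rate
    others = pop * (1 - viable_rate)
    true_pos = viable * reply_viable       # repliers who eventually pay
    false_pos = others * reply_other       # repliers who never pay
    return true_pos * payoff - (true_pos + false_pos) * cost_per_reply

POP = 1_000_000   # emails sent
PAYOFF = 2_000    # gain per successful victim ($)
COST = 20         # cost of corresponding with any replier ($)

# A plausible-sounding pitch: more viable marks bite, but so do many
# skeptics who eventually wise up.
plausible = expected_profit(POP, 0.0001, 0.5, 0.01, PAYOFF, COST)

# An obviously "Nigerian" pitch: scares off nearly everyone except the
# most gullible, so false positives collapse while true positives barely fall.
obvious = expected_profit(POP, 0.0001, 0.4, 0.00005, PAYOFF, COST)

print(f"plausible pitch: {plausible:>10,.0f}")
print(f"obvious pitch:   {obvious:>10,.0f}")
```

With these made-up numbers, the plausible pitch loses money fielding its army of false positives, while the self-branding scammer turns a profit. That, in a nutshell, is the paper's point.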
Geeeeenius!
UPDATE: Josh Gans was blogging about this last week over at Digitopoly. He's not convinced of the explanation, though. To the extent there are "vigilante" types who are willing to expend resources to mess with scammers, the Easy-ID strategy could incur additional costs. As an interesting side note, in discussing this with Josh, he at one point suggested that when legit firms come across scammers, they should counterattack by flooding them with, e.g., millions of fake/worthless credit card numbers (setting off something like a false-positive atom bomb). Just one snag: US laws protect scammers from these kinds of malicious attacks.
Individual Bias and Collective Truth?
Posted: May 30, 2012 Filed under: cognition, collective behavior, incentives, philosophy, psychology, research, Uncategorized

Freek's latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn't exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That's why you often find that those with strong views on controversial topics – including those with minority or even widely ridiculed opinions – often know more about the topic, the evidence, and the arguments pro and con than "objective" people who can't be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman's admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they "win" in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges who sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers' control over money, public policy, or status flows directly from choosing to believe one side or another, regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings – i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
Fungibility v. Fetishes
Posted: May 28, 2012 Filed under: behavioral economics, cognition, economics, incentives, research

For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don't seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent "optical illusion" causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the "value-based" approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.
Some examples:
- There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles address the question of whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus that they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly. There is a pervasive confusion between the form of payments and the total size of payments.
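To see the telecom example in miniature, here is a toy calculation (all numbers invented): if competition pins down the carrier's total take per customer over a contract, the handset subsidy and the monthly rate are just two labels on the same payment, and cutting one forces an offsetting move in the other.

```python
# Illustrative arithmetic only: competition fixes the carrier's total take
# per customer over a 24-month contract, so the subsidy and the monthly
# service rate are fungible components of one overall payment.

TOTAL_TAKE = 960  # total net payment competition lets the carrier keep ($)

def monthly_rate(subsidy, months=24):
    # A bigger handset subsidy must be recouped in the monthly rate;
    # the customer's all-in cost never changes.
    return (TOTAL_TAKE + subsidy) / months

for subsidy in (400, 200, 0):
    rate = monthly_rate(subsidy)
    all_in = rate * 24 - subsidy
    print(f"subsidy ${subsidy:>3}: rate ${rate:6.2f}/mo, customer all-in ${all_in:.0f}")
```

Whatever the mix, the customer's all-in cost is identical, which is exactly the optical illusion described above: fetishizing the subsidy component in isolation confuses the form of payments with their total size.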
Crowdsourcing Experiments, Mechanical Turk, ‘n stuff
Posted: May 25, 2012 Filed under: research

The Economist has a piece on how crowdsourcing and tools like Mechanical Turk are transforming science: "the roar of the crowd." Here's the blog dedicated to helping scientists set up their experiments on Mechanical Turk: Experimental Turk. I'm guessing it is only a matter of time before some strategy-related experiments get done on Mechanical Turk – here are probably a few such pieces (well, just a very loose search of mechanical turk+smj+mgtsci).
- This appears to be a PhD student collecting dissertation data via a decision-making task on Mechanical Turk.
- Here’s a Cornell marketing scholar collecting data.
- Lots of the “game” tasks appear to be experiments.
A Causally Ambiguous Research Stream
Posted: May 17, 2012 Filed under: cognition, conferences, economics, evolutionary theory, research

I'm reporting from another great ACAC conference. This conference featured retrospectives marking the 30-year anniversaries of Nelson and Winter's book and Lippman and Rumelt's article. Kudos to Bill and the organizing committee for putting it together.
A striking similarity between the two is that both were directly intended to influence conversations in economics, and both missed their marks. For example, only about 1% of the citations to Lippman and Rumelt appeared in top econ journals – despite the fact that the article was published in the Bell Journal of Economics. Lippman and Rumelt recorded a video specifically for the occasion.
Crowd-sourcing Strategy Formulation
Posted: May 10, 2012 Filed under: collective behavior, competitive advantage, current events, entrepreneurship, incentives, innovation, networks, open innovation, organization design, research, Stakeholders, technology

The current issue of McKinsey Quarterly features an interesting article on firms crowd-sourcing strategy formulation. This is another way that technology may shake up the strategy field (see also Mike's discussion of the MBA bubble). The article describes examples in a variety of companies. Some, like Wikimedia and Red Hat, aren't much of a surprise given their open-innovation focus. However, we should probably take notice when more traditional companies (like 3M, HCL Technologies, and Rite-Solutions) use social media in this way. For example, Rite-Solutions, a software provider for the US Navy, defense contractors, and fire departments, created an internal market for strategic initiatives:
Would-be entrepreneurs at Rite-Solutions can launch "IPOs" by preparing an Expect-Us (rather than a prospectus)—a document that outlines the value creation potential of the new idea … Each new stock debuts at $10, and every employee gets $10,000 in play money to invest in the virtual idea market and thereby establish a personal intellectual portfolio …
Manipulating journal impact factors
Posted: February 28, 2012 Filed under: ethics, incentives, research

Via several folks on Facebook (e.g., Marcel Bogers, Der Chao Chen) – here's a short blog post on how journals are manipulating their impact factors: coerced citations and manipulated impact factors – dirty tricks of academic journals.
Here’s the Science piece that the above blog post refers to: Coercive citation in academic publishing. Here’s the data (pdf).
B-School Disruption Update
Posted: February 21, 2012 Filed under: business school, education, research | Tags: Academic degree, Business school, Education, Entrepreneur, Entrepreneurship, MBA, Research

Want to be an entrepreneur? Enstitute is bringing back apprenticeships.
This is the answer to those who think we will keep our research-based MBAs above water by making the curriculum more “relevant in the real world” … by which people seem to mean sacrificing academic content for: external projects with business sponsors, “living” case studies, 1st summer internships, support services for personal grooming, etc. As I have long argued, research faculty are not efficient providers of substitute “real world” experiences.
Apropos this discussion, E[nstitute] was launched in NYC last week by founders Kane Sarhan and Shaila Ittycheria. The idea is to pick up promising candidates with a high school diploma and put them through a two-year apprenticeship program mentored by some of NYC's top entrepreneurs. Impressive.
And it isn't just business schools this program threatens — in a recent article, Brad McCarty, editor at Insider, points out, "… the average public university (in the US) will set you back nearly $80,000 for a 4-year program. And a private school will cost in excess of $150,000. At the end of that time, you have a bellybutton," he writes. "Oh sure, you might have a piece of paper that says you have a Bachelor of Science or Art degree, but what you actually have is something that has become so ubiquitous that it's really not worth much more than the lint inside your own navel."
That's strong stuff and, sadly, uncomfortably close to the truth. Moreover, it speaks to strong potential demand for apprenticeship-style entrepreneurship programs like the one mentioned above. Personally, I think it's terrific. The existence of programs like this creates more value at the societal level. From the b-school foxhole, they also force research-based MBA providers to think more carefully about what, if any, comparative advantage we have vis-à-vis the many non-traditional competitors we now see invading our industry.
Hint: the answer will have to involve our research. This is what we do. And, contrary to the whining and hand-wringing of so many traditional MBA providers, teaching young people cutting-edge general principles (i.e., research-based knowledge) has substantial market value. We just stopped doing it a couple of decades ago.
How much rationality is enough?
Posted: January 31, 2012 Filed under: behavioral economics, cognition, conferences, economics, game theory, research, theory of the firm

Last week I had the great good fortune to attend the Max Planck Institute at Leipzig's first conference on Rigorous Theories of Business Strategies in a World of Evolving Knowledge. The conference spanned an intense four days of presentations, exploration, and discussion on formal approaches to business strategy. Participants were terrific and covered the scholarly spectrum: philosophers, psychologists, game theorists, mathematicians, and physicists. Topics included cooperative game theory, unawareness games, psychological micro-foundations of decision making, and information theory. It was heartening to see growth in the community of formal theorists interested in strategy, and my guess is that the event will spawn interesting new research projects and productive coauthoring partnerships. (Thanks to our hosts, Jurgen Jost and Timo Ehrig, for organizing and sponsoring the conference!)
If one had to pick a single, overarching theme, it would have to be the exploration of formal approaches to modeling agents with bounded rationality. For example, I presented on subjective equilibrium in repeated games and its application to strategy. Others discussed heuristic-based decision making, unawareness, ambiguity, NK-complexity, memory capacity constraints, the interaction of language and cognition, and dynamic information transmission.
Over the course of the conference, it struck me just how offensive many of my colleagues find the rationality assumptions so commonly used in economic theory. Of course, rational expectations models are the most demanding of their agents and, as such, seem to generate the greatest outrage. What I mean to convey is the sense that displeasure with these kinds of modeling choices goes beyond dispassionate, objective criticism and into indignation and even anger. If you are a management scholar, you know what I mean.
Thus, at a conference such as this, we spend a lot of time reminding ourselves of all the research that points to all the limitations of human cognition. We detail how humans suffer from decision processes that are emotional, memory constrained, short-sighted, logically inconsistent, biased, bad at even rudimentary probability assessment, and so on. Then, we explore ways to build formal models in which our agents are endowed with “more realistic” cognitive abilities.
Perhaps contrary to your intuition, this is heady stuff from a modeler's point of view: formalizing stylized facts about real cognition is seen as a worthy challenge … and discovering where the new assumptions lead is always amusing. From the perspective of many management scholars, such theories are more realistic, better able to explain observations of shockingly stupid decisions by business practitioners and, hence, superior to the silly, overly simplistic models that employ a false level of rationality.
I am not mocking the sentiment. In fact, I agree with it. Indeed, none of the economists I know dispute the fact that human cognition is quite limited or that perfect rationality is an extreme and unrealistic assumption. (This isn’t to say there aren’t those who believe otherwise but, if there are, they are not acquaintances of mine.) On the contrary, careers have been made in game theory by finding clever ways to model some observed form of irrationality and using it to explain some observed form of decision failure. If this is the research agenda then, surely, we have hardly scratched the surface.
Yet, as I thought about it during the MPI conference last week, it dawned on me that our great preoccupation with irrational agents is misdirected. That animals as cognitively limited as us often, if not typically, fail to achieve rational consistency in our endeavors is no puzzle. What else would you expect? Rather, the deep mystery is how agents so limited in rational thought invent democracy, create the internet, land on the moon, and run purposeful organizations that succeed in a free market. Casual empiricism suggests that the pattern of objective-oriented progress in the history of mankind is too pervasive to ascribe to dumb luck. Even at the individual level, in spite of their many cognitive failings, the majority of people lead purposeful, productive lives.
This leads me to remind readers that economists invented the rational expectations model precisely because it was the only option that came anywhere close to explaining observed patterns in economy-level reactions to changes in government policies. This, even though the perfect rationality assumption is axiomatically false. There you have it.
Which leaves open the challenge of identifying which features of human cognition lead to persistent patterns of success in highly unstable environments. I conjecture that our refined pattern recognition abilities play a role in this apparent miracle. Other candidates include our determination to see causality everywhere we look as well as our incredible mental flexibility. Social factors and institutions must be involved — and, somewhere in there, a modicum of rationality and logic. After all, we did invent math.
Why you really can’t trust any of the research you read
Posted: January 6, 2012 Filed under: research | Tags: Data Collection, Publication bias, Research

Researchers in Management and Strategy worry a lot about bias – statistical bias. In case you're not such an academic researcher, let me briefly explain.
Suppose you want to find out how many members of a rugby club have their nipples pierced (to pick a random example). The problem is, the club has 200 members and you don't want to ask them all to take their shirts off. Therefore, you select a sample of 20 of the guys and ask them to bare their chests. After some friendly bantering they agree, and then it appears that no fewer than 15 of them have their nipples pierced, so you conclude that the majority of players in the club likely have undergone the slightly painful (or so I am told) aesthetic enhancement.
The problem is, there is a chance that you’re wrong. There is a chance that due to sheer coincidence you happened to select 15 pierced pairs of nipples where among the full set of 200 members they are very much the minority. For example, if in reality out of the 200 rugby blokes only 30 have their nipples pierced, due to sheer chance you could happen to pick 15 of them in your sample of 20, and your conclusion that “the majority of players in this club has them” is wrong.
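For the curious: that probability is easy to compute exactly with the hypergeometric distribution. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import hypergeom

# 200 club members, 30 of whom are actually pierced; we sample 20
# without replacement and ask how likely 15-or-more pierced is by chance.
rv = hypergeom(M=200, n=30, N=20)
p = rv.sf(14)  # sf(14) = P(X >= 15)
print(f"P(15 or more pierced in a sample of 20) = {p:.1e}")
```

With only 30 pierced members in the whole club, drawing 15 in a sample of 20 would be an astronomically unlikely fluke; quantifying exactly this kind of risk is what the convention discussed next is about.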
Now, in our research, there is no real way around this. Therefore, the convention among academic researchers is that it is ok, and you can claim your conclusion based on only a sample of observations, as long as the probability that you are wrong is no bigger than 5%. If it ain’t – and one can relatively easily compute that probability – we say the result is “statistically significant”. Out of sheer joy, we then mark that number with a cheerful asterisk * and say amen.
Now, I just said that “one can relatively easily compute that probability” but that is not always entirely true. In fact, over the years statisticians have come up with increasingly complex procedures to correct for all sorts of potential statistical biases that can occur in research projects of various natures. They treat horrifying statistical conditions such as unobserved heterogeneity, selection bias, heteroscedasticity, and autocorrelation. Let me not try to explain to you what they are, but believe me they’re nasty. You don’t want to be caught with one of those.
Fortunately, the life of the researcher is made easy by standard statistical software packages. They offer nice user-friendly menus where one can press buttons to solve problems. For example, if you have identified a heteroscedasticity problem in your data, there are various buttons to press that can cure it for you. Now, note that it is my personal estimate (but notice, no claims of an asterisk!) that about 95 out of 100 researchers have no clue what happens within their computers when they press one of those magical buttons, but that does not mean it does not solve the problem. Professional statisticians will frown and smirk at the thought alone, but if you have correctly identified the condition and the way to treat it, you don't necessarily have to fully understand how the cure works (although I think it would often help in selecting the correct treatment). So far, so good.
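To make the button-pressing concrete: for heteroscedasticity, the "button" often just swaps the classical standard errors for robust ones. A minimal sketch on simulated data, assuming numpy and statsmodels are available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 500)
y = 1.0 + 0.5 * x + rng.normal(0, x)  # noise grows with x: heteroscedastic

X = sm.add_constant(x)
plain = sm.OLS(y, X).fit()                 # classical standard errors
robust = sm.OLS(y, X).fit(cov_type="HC3")  # heteroscedasticity-robust errors

# Same coefficients either way; what changes is the standard errors,
# and hence which findings earn their asterisks.
print("classical SEs:", plain.bse)
print("robust SEs:   ", robust.bse)
```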
Here comes the trick: All of those statistical biases are pretty much irrelevant. They are irrelevant because they are all dwarfed by another bias (for which there is no life-saving cure available in any of the statistical packages): publication bias.
The problem is that if you have collected a whole bunch of data and you don't find anything – or at least nothing really interesting and new – no journal is going to publish it. For example, the prestigious journal Administrative Science Quarterly proclaims in its "Invitation to Contributors" that it seeks to publish "counterintuitive work that disconfirms prevailing assumptions". And perhaps rightly so; we're all interested in learning something new. So if you, as a researcher, don't find anything counterintuitive that disconfirms prevailing assumptions, you are usually not even going to bother writing it up. And in case you're dumb enough to write it up and send it to a journal requesting that they publish it, you will swiftly (or less swiftly, depending on what journal you sent it to) receive a reply that has the word "reject" firmly embedded in it.
Yet, unintentionally, this publication reality completely messes up the "5% convention", i.e., that you can only claim a finding as real if there is only a 5% chance that what you found is sheer coincidence (rather than a counterintuitive insight that disconfirms prevailing assumptions). In fact, the chance that what you are reporting is bogus is much higher than the 5% you so cheerfully claimed with your poignant asterisk. Because journals will only publish novel, interesting findings – and therefore researchers only bother to write up seemingly intriguing counterintuitive findings – the chance that what eventually gets published is unwittingly BS is vast.
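Just how vast? A back-of-the-envelope calculation in the spirit of Ioannidis's argument: suppose journals publish only significant results; then the share of published findings that are bogus depends on the prior odds that a tested hypothesis is true and on statistical power. All three parameters below are invented for illustration:

```python
# Share of *published* positive findings that are false, assuming journals
# publish only significant results. All parameters are illustrative.
alpha = 0.05  # conventional significance threshold
power = 0.50  # chance a real effect comes out significant
prior = 0.10  # fraction of tested hypotheses that are actually true

true_pos = prior * power          # real effects found significant
false_pos = (1 - prior) * alpha   # null effects significant by sheer chance

false_share = false_pos / (true_pos + false_pos)
print(f"share of published findings that are bogus: {false_share:.0%}")
```

With these numbers, nearly half of the published findings are flukes, every one of them wearing its cheerful 5% asterisk.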
A recent article by Simmons, Nelson, and Simonsohn in Psychological Science (cheerfully entitled "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant") summed it up particularly clearly. If a researcher running a particular experiment does not find the result he was expecting, he may initially think "that's because I did not collect enough data" and collect some more. He can also think "I used the wrong measure; let me use the other measure I also collected", or "I need to correct my models for whether the respondent was male or female", or "let me examine a slightly different set of conditions". Yet taking these (extremely common) measures raises the probability that what the researcher finds in his data is due to sheer chance from the conventional 5% to a whopping 60.7%, without the researcher realising it. He will still cheerfully put the all-important asterisk in his table and declare that he has found a counterintuitive insight that disconfirms some important prevailing assumption.
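A quick simulation (my own toy version, not Simmons et al.'s code) makes the mechanism vivid: test a null effect, but allow yourself a second dependent measure and one round of "collect more data if it is not significant yet", then count how often at least one test comes out significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(n=20, extra=10):
    # Two groups, NO true effect, and two dependent measures
    # (independent here, for simplicity).
    a, b = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
    if min(stats.ttest_ind(a[:, k], b[:, k]).pvalue for k in (0, 1)) < 0.05:
        return True
    # Not significant? Collect a few more subjects and test again.
    a = np.vstack([a, rng.normal(size=(extra, 2))])
    b = np.vstack([b, rng.normal(size=(extra, 2))])
    return min(stats.ttest_ind(a[:, k], b[:, k]).pvalue for k in (0, 1)) < 0.05

trials = 5_000
rate = sum(one_study() for _ in range(trials)) / trials
print(f"false-positive rate with just two small freedoms: {rate:.1%}")
```

Even these two mild freedoms push the false-positive rate well past the nominal 5%; pile on a few more (covariates, subsets of conditions) and you approach the 60.7% the authors report.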
In management and strategy research we do highly similar things. For instance, we collect data with two or three ideas in mind of what we want to examine and test with them. If the first idea does not lead to the desired result, the researcher moves on to his second idea, and then one can hear a sigh of relief from behind a computer screen that "at least this idea was a good one". In fact, you might just keep moving on to "the next good idea" till you have hit on a purely coincidental result: 15 bulky guys with pierced nipples.
Things get really "funny" when one realises that what is considered interesting and publishable differs across fields in Business Studies. For example, in fields like Finance and Economics, academics are likely to be fairly skeptical about whether Corporate Social Responsibility is good for a firm's financial performance. In the subfield of Management, people are much more receptive to the idea that Corporate Social Responsibility should also benefit a firm in terms of its profitability. Indeed, as shown by a simple yet nifty study by Marc Orlitzky, recently published in Business Ethics Quarterly, articles published on this topic in Management journals report a statistical relationship between the two variables that is about twice as big as the ones reported in Economics, Finance, or Accounting journals. Of course, who does the research and where it gets printed should not have any bearing on what the actual relationship is but, apparently, preferences and publication bias do come into the picture with quite some force.
Hence, publication bias vastly dominates any of the statistical biases we get so worked up about, making them pretty much irrelevant. Is this a sad state of affairs? Ehm…. I think yes. Is there an easy solution for it? Ehm… I think no. And that is why we will likely all be suffering from publication bias for quite some time to come.
Using Mechanical Turk for behavioral experiments
Posted: January 1, 2012 Filed under: behavioral economics, economics, productivity, research, technology

I'm seeing more and more work using Mechanical Turk as a subject pool. Here's another piece discussing some of the features, advantages, and problems of Mechanical Turk – Rand, D. (2011), "The promise of Mechanical Turk: How online labor markets can help theorists run behavioral experiments," Journal of Theoretical Biology.
Abstract
Combining evolutionary models with behavioral experiments can generate powerful insights into the evolution of human behavior. The emergence of online labor markets such as Amazon Mechanical Turk (AMT) allows theorists to conduct behavioral experiments very quickly and cheaply. The process occurs entirely over the computer, and the experience is quite similar to performing a set of computer simulations. Thus AMT opens the world of experimentation to evolutionary theorists. In this paper, I review previous work combining theory and experiments, and I introduce online labor markets as a tool for behavioral experimentation. I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects’ self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls. I hope this paper will encourage evolutionary modelers to enter the world of experimentation, and help to strengthen the bond between theoretical and empirical analyses of the evolution of human behavior.
Crowdfunding Research
Posted: November 26, 2011 Filed under: finance, research

I've been following the crowdfunding trend – I like the effort to democratize the financing of various types of projects and initiatives. Kickstarter projects are fun to look through, Kiva is great, Crowd Cube is now successfully funding startups (including equity stakes), and there are many, many more such efforts.
I’ve wondered about the possibilities of crowdfunding research (here’s the orgtheory post on that), and there indeed seem to be some successful efforts: here are dozens of #scifund projects looking for funding (here’s their blog).