A review of George Szpiro’s 2011 book on the history of the Black-Scholes option-pricing formula uses Southwest Airlines’ famous fuel-price-hedging strategy as a key piece of its explanation for why firms might want to use options. Southwest’s hedging has received a lot of attention; the gains and losses on these financial trades have rivaled operating profits and losses on its income statement. Most commentators have applauded this aggressive trading activity, merely cautioning that sometimes Southwest guesses wrong about future oil prices and loses a lot of money.
What no one seems to ask is why Southwest shareholders would want the firm to be speculating in the fuel market in the first place. Unless these hedges materially reduced the risk of bankruptcy–and Southwest’s balance sheet is typically stronger than its rivals’–the classic argument applies: Shareholders should not want corporate managers to hedge industry-specific risks, such as swings in fuel prices, because they can very easily deal with these risks themselves by holding a diversified portfolio of stocks (including oil firms) or even by buying their own options on oil prices. Southwest’s financial risk reduction via hedging conveys little or no benefit to the owners of the firm.
But wait, many will object–doesn’t hedging give Southwest a cost advantage over its rivals when oil prices go up? And since these hedges are often accomplished by options, isn’t there an asymmetry, since when Southwest guesses wrong, it only loses the price it paid for the option? Doesn’t the airline therefore lower its costs by these trades, gaining a leg up on its rivals?
The answer is No. These hedges have no impact whatsoever on Southwest’s cost of being an airline operator. They constitute an independent, speculative financial side business, a business that is exactly as good for Southwest shareholders as the CFO’s team is at outguessing the fuel market. Even when Southwest guesses right, it is not improving the airline business’s competitiveness.
To see why this is true, think about the incremental fuel cost to Southwest of running a flight with or without the hedge. If the spot price of fuel is $x/gallon at the time of the flight and it consumes y gallons, then the fuel cost is $xy. If Southwest has successfully hedged the oil price, then it will make a bunch of money after closing out its position, but it would still independently save $xy by not running the flight. If Southwest has guessed wrong and lost money on the hedge, it would also save $xy by not running the flight. So the cost of operation–the increment in expenditure caused by producing another unit–is unaltered by the hedging strategy.
This situation should be easy to visualize because the hedges are on oil rather than jet fuel and because they are settled for cash rather than physical delivery. But even if the hedges were denominated in physically delivered jet fuel, successful or unsuccessful hedging would have no impact on airline operating costs. If Southwest just bought fuel early for $(x-a)/gallon and stored it until the spot price was $x/gallon, the opportunity cost of the flight would still be $xy, since the airline could cancel the flight and sell y gallons for that amount. The incremental expenditure difference between flying and not flying is exactly the same. (If opportunity cost confuses you, visualize that Southwest has some fuel on hand purchased at the lower hedged price and some at the spot price, and note that it doesn’t matter which barrel of gas goes into which plane–all the fuel is fungible, and it is all worth $x/gallon if that’s what it could be sold for.)
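The arithmetic is easy to check. Here is a toy sketch (all figures are invented, not Southwest’s actuals) showing that the hedge’s payoff drops out of the flight-or-no-flight comparison:

```python
# Toy model: a cash-settled hedge pays off the same whether or not the
# flight runs, so the incremental cost of flying is spot * gallons either way.

def net_outlay(run_flight, spot, gallons, hedge_pnl):
    """Period cash outlay: fuel is bought at spot only if the flight runs;
    the hedge settles for cash regardless of operations."""
    fuel_cost = spot * gallons if run_flight else 0.0
    return fuel_cost - hedge_pnl

spot, gallons = 3.00, 5_000               # $/gallon and gallons burned (assumed)
for hedge_pnl in (4_000.0, -4_000.0):     # a winning hedge and a losing one
    marginal = net_outlay(True, spot, gallons, hedge_pnl) - \
               net_outlay(False, spot, gallons, hedge_pnl)
    assert marginal == spot * gallons     # $15,000 in both cases
```

Whatever the hedge earns or loses, the $15,000 difference between flying and not flying is untouched–which is the whole point.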
Now, risk-averse behavior by managers may be in their own interest, depending on the form of their compensation, the structure of the labor market, and their perceived ability differential over their peers. But it is of little help to the owners of public firms that are far from bankruptcy. That’s a point that should not be hedged.
Over at Reason.com they have interesting text and video on the sad tale of 38 Studios, New England baseball hero Curt Schilling’s collapsed videogame venture that attracted nary an independent private investor but sucked up $100 million from Rhode Island taxpayers. Some takeaways from the story:
1) When inexperienced and undermanaged quasi-public economic development corporations go chasing glamour ventures to try to cover up their state’s abysmal business climate, bad things are likely to happen.
2) When the glamour venture is headed by a star athlete with zero experience or expertise in his chosen field, and appears to have no experienced management at all, the odds go down.
3) When a venture making a totally conventional product, such as a massively multiplayer game, can’t get any private investors, there’s probably no conceivable public policy justification for a subsidy.
4) People like Schilling who claim to be against big government but then reach their hands into the taxpayers’ pockets to fund their own dreams are, at best, intellectually stunted.
5) Schilling’s pro-Bush political views may have helped save the taxpayers of Massachusetts, because Democratic governor Deval Patrick turned Schilling down flat even though the pitcher is an immensely popular legend among Boston Red Sox fans.
Over at the American Scientist (in an overall interesting Jan-Feb. 2013 issue) we have a column arguing that there’s no need to worry about a contagion of fraud and error in scientific publication, even though the number of publications has exploded and the number of retractions has exploded along with them. The basic pitch: the scientific literature is wonderfully self-correcting. The evidence given: the ratio of voluntary corrections to retractions for fraud looks kind of high, and journals with more aggressive and welcoming policies toward corrections have more of them. I kid you not.
But wait, you say. How is that evidence at all probative? Good question, as one says when the student goes right where we want to take the discussion. At the very least, we’d want to see if the rate of retractions is going up over time, but somehow those figures and graphs don’t appear in the article. But what we’d really like to know is how many non-retracted, non-corrected, and non-commented articles are in fact erroneous or misleading despite peer review, and here the article is silent. Its evidence is almost completely non-responsive to the question it purports to address. But the problem goes deeper.
Recent public concerns, including on this blog, have noted pressures for sensationalism, publication bias, data snooping and experimental tuning bias, and many similar causal mechanisms. John Ioannidis has made a pretty good career pounding on these issues and trying to place upper and lower bounds on the problem. The devastating Begley and Ellis study of “landmark” papers in preclinical cancer research found that only 6 of 53 had reproducible results, even after going back to the original investigators and sometimes even after the original investigators themselves tried to reproduce their published results. Here is what the latter authors think about the health of the peer-reviewed publishing system in pre-clinical cancer research:
The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.
Of this substantial and growing literature on the prevalence of error and publication of invalid results, the American Scientist article is entirely innocent. Instead, it uses a single Wall Street Journal article as its target for attack, and even there ignores the non-anecdotal parts of the story–evidence that retractions have been growing faster than publications since 2001 (up 1500% vs. a 44% increase in papers), that the time lag between publication and retraction is growing, and that retractions in biomedicine related to fraud have been growing faster than those due to error and constitute about 75% of the total retractions.
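Those WSJ growth figures imply a large rise in the retraction rate, not just the retraction count. A quick back-of-the-envelope, using only the two percentages cited above:

```python
# Retractions up 1500% since 2001 => 16x the old level;
# papers up 44% over the same period => 1.44x the old level.
retraction_multiple = 1 + 15.00   # +1500%
paper_multiple = 1 + 0.44         # +44%

rate_multiple = retraction_multiple / paper_multiple
print(f"retractions per paper published rose ~{rate_multiple:.1f}x")
```

So even if every extra retraction reflected better self-policing rather than more bad papers, the per-paper retraction rate still rose roughly elevenfold–a trend the Am Sci column never confronts.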
Perhaps a corrigendum is in order over at the Am Sci.
A September 2012 article in PNAS found that most retractions are caused by misconduct rather than error:
A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.
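The quoted percentages are internally consistent, which is easy to verify:

```python
# Check that the PNAS misconduct subcategories sum to the quoted total.
misconduct = {"fraud or suspected fraud": 43.4,
              "duplicate publication": 14.2,
              "plagiarism": 9.8}
misconduct_total = sum(misconduct.values())
assert round(misconduct_total, 1) == 67.4   # matches the figure in the abstract

error = 21.3
unclassified = 100.0 - (misconduct_total + error)
print(f"retractions with other or unknown causes: ~{unclassified:.1f}%")
```

The remainder–about 11.3% of retractions–falls outside the two named categories, presumably other or unknown causes.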
In a remarkably shoddy example of anti-market propaganda emanating from the Nottingham Business School, the Economist runs a screed that starts out with the debatable but reasonable premise that business leaders exaggerate their omniscience. It somehow ends up with the unsupported conclusion that business schools should abandon economics, finance, and the pursuit of profit for the cant trio of “sustainability,” “social responsibility,” and “leadership for all not for the few.”
The crude equivocating shifts from intellectual humility to moral humility to altruism would qualify for an F in any class on composition, much less philosophy. The vague assertions about “business excess” (never supported or even defined), the implicit attribution of these excesses to the teachings of business schools (ditto), and the wild leap at the end (replacing business school education with an agora-like setting in which sophists mingle with scientists and philosophers with philistines to figure out what are “social needs”), all conduce to a massive loss of reader brain cells per sentence. This article might be useful as a sort of mine detector–anyone who finds it congenial is best separated from responsibility for educating or commenting on business or economic issues.
Barry Lynn, apparently some sort of John Kenneth Galbraith wannabe, has an amusingly cockeyed post over at the Harvard Business Review blog. He seems to think that state regulations protecting local beer distributors from vertically integrated competitors are the font of virtue, preserving needed diversity in the beer market by allowing craft and micro-brewers to get their product delivered. But if the big brewers were legally able (and motivated) to foreclose distribution of the small brands, they would be legally able to do it without vertically integrating into distribution (by requiring exclusivity).
A simpler analysis: When there were many competing major brewers, independent multi-brewer distributors made economic sense, since they eliminated needless duplication of sales and delivery of all those brands to retail establishments. With the consolidation of the beer industry into two giant companies that own all the big brands (and a shift from on-premises to at-home consumption), a single-brewer distribution firm can now internalize almost all those economies. Then the beer industry starts to look a bit more like the soft-drink industry, where two major firms own and develop all the major brands and we don’t blink an eye at their bottler/distributors having exclusive relationships with the upstream brand owners or even being vertically integrated with them. If your local Costco or supermarket won’t carry a micro-brew or an off-brand soda, it’s unlikely to be due to market power on the part of the distributors.
UPDATE: It seems that AB InBev, owner of Budweiser and many other beer brands, is indeed shifting to more of a product innovation strategy and running into distribution problems with these new products:
“That’s not to say that AB InBev has perfected the process. Profit this year was hurt by higher distribution and administration costs in the U.S. as the brewer struggled to keep up with demand for Platinum and Lime-A-Rita, which required extensive — and expensive — countrywide distribution.”
So maybe there are strategic reasons why AB InBev would want more control over its distribution pipeline.
Where do great ideas come from? A popular notion among creativity experts is that recombination of preexisting ideas in a new context is the form that most if not all creativity takes. One more datum: Courtesy of my lovely wife, it seems that George Lucas may have been voguing, so to speak, when he came up with one of his most iconic images.
Apparently the University of California system decided it needed to update its image, so they cooked up this. Here are the old and the new side by side:
When I see the old logo, I think of quaint values like learning and truth. When I see the new logo, I imagine little enzymes acting like keys to unlock the stains in my laundry.
UPDATE: The UC bureaucracy folds up like a tent a mere five days after this was posted. Maybe the new logo should say vox populi somewhere. (H/t David Hoopes in the comments.)
I’ve been listening to my good friend Todd Zenger for the last few years explaining that the strategic management field is predicated on the idea that corporate managers know more than the uninformed stock market and its lazy analysts. Dick Rumelt’s Good Strategy/Bad Strategy makes a similar point. The idea is that finding unique resource synergies is a good way to get competitive advantage but a bad way to please narrow-minded investors who hate unique strategies that are hard for them to evaluate. Raghuram Rajan’s recent presidential address to the American Finance Association makes a similar point, although with a much more positive spin on the role of equity markets in supporting the creation of entrepreneurial enterprises. With such an eminent set of eloquent and insightful advocates, it’s hard not to tentatively consider the perplexing idea that stock markets systematically undervalue powerful synergistic corporate strategies.
Then I wake up.
You probably followed the news about HP’s massive writeoff on its perplexing Autonomy acquisition of a year ago. The headline to that story was HP CEO Meg Whitman’s claim that Autonomy had cooked its books and fooled its auditors prior to HP’s purchase of the firm under previous, perplexingly hired, CEO Leo Apotheker. It isn’t clear that the extent of the alleged fraud can explain the gigantic size of the writedown by HP, but in any case outsiders like short-seller Jim Chanos, much of the British tech analyst community, and the very useful John Hempton, proprietor of the Bronte Capital blog, had long smelled a rat. They thought, even prior to the acquisition, and using only the company’s official accounting statements, that there was something fishy about Autonomy’s books. How could HP’s finance team and the outside auditors have failed to notice this at the due diligence stage? It’s perplexing.
NBA Commissioner David Stern recently fined the San Antonio Spurs $250,000 and severely chastised them for the decision by Gregg Popovich, their near-legendary coach, to rest his aging stars at home rather than fly them to Miami for a meaningless (but nationally televised) tilt with the defending-champion Miami Heat. Is Stern losing his grip? Does he need an intervention and/or a forced retirement as he reaches his managerial dotage? While I haven’t heard of Commissioner Queeg–whoops, Stern–clicking steel balls in his hand or searching for the keys to the strawberries, a Caine Mutiny scenario may be approaching if he continues to deteriorate. Other firms with long-term, successful “emperor” CEOs have found their later years to be problematic. See Eisner, Michael (Disney) or Olsen, Kenneth (Digital Equipment Corporation) or maybe Cizik, Robert (Cooper Industries).
I just saw a recent article in the Chronicle of Higher Education on the emerging field of neuroeconomics. Unlike behavioral economics, where ideas from psychology have been ported over to economics to explain various individual “anomalies” in choice behavior, in neuroeconomics much of the intellectual traffic has gone in the other direction–economic modeling tools are helpful in understanding psychological processes (including where those processes deviate from classic economic theory). The axiomatic approach to choice makes it a lot easier to parse out how the brain’s actual mechanisms do or don’t obey these axioms.
An important guy to watch in this area is Paul Glimcher, who mostly stays out of the popular press but is a hardcore pioneer in trying to create a unified (or “consilient”) science encompassing neuroscience, psychology, and economics. I’ve learned a lot from reading his Foundations of Neuroeconomics (2010) and Decisions, Uncertainty, and the Brain (2004): why reference points (as in prospect theory) are physiologically required; how evolutionary theory makes a functionalist and optimizing account of brain behavior more plausible than a purely mechanical, piecemeal, reflex-type theory; why complementarity of consumption goods presents a difficult puzzle for neuroscience; and much more.
A long time ago, in a blog far, far away, I outlined the idea of a “new-wave utility.” The idea was that some innovative high-growth service businesses were transitioning into utility-like systems whose large and diverse customer bases implicitly depended on them for ubiquity, reliability, and stability of offering. One example I mentioned in passing was Starbucks. Apparently, in Manhattan, Hurricane Sandy has revealed the truth of this classification. From the story in the link, access to bathrooms has been a key issue in the Big Apple. That’s less of a factor in L.A., but power outlets, WiFi, and table space in a congenial environment have certainly put Starbucks (and its smaller rivals such as the Coffee Bean and Tea Leaf Co.) in the category of utilities for the city’s horde of writers, students, and deal-makers.
Ever teach one of those classes where you’re off a beat, forget points you want to make, and ramble a little bit? Maybe your preparation wasn’t up to its usual standard. Fortunately, there isn’t an opponent in the room compounding the problem by trying to make you look bad. Barack Obama did not have that luxury tonight.
I had fun watching the president’s discomfiture, but I sure wish Romney had a better five points on the economy and that he would explain why cutting the growth of entitlements is not an optional choice. (Although his Spain remark wasn’t bad.)
Over at the Atlantic, Jordan Weissmann argues that the Obama administration’s claim to be pursuing an “all of the above” energy strategy is unrealistic because the EPA’s new CO2 emission rules will make traditional coal plants untenable while “clean coal” technology is uneconomical relative to natural gas. Fine.
But Weissmann goes on to argue that clean-coal R&D is a big waste of money because there is no cap-and-trade policy putting a price on CO2 emissions. That’s dead wrong. With a price for carbon dioxide, just as with the EPA’s technology or emissions standards, power producers would look for the cheapest alternative to coal. That would be natural gas (given the same forecasts Weissmann relies on). So the clean-coal subsidies are unlikely to pay off regardless of what kind of policy we pursue, be it a CO2 tax, cap and trade, or emissions or technology standards for power plants. Cheaper is cheaper. It’s amazing how often people fail to grasp principles of competitive advantage.
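The “cheaper is cheaper” point can be made concrete with a toy dispatch choice. The $/MWh and emissions figures below are invented for illustration–they only need to respect the cited forecast that gas undercuts clean coal–and under any instrument that prices or caps carbon, the producer solves the same minimization:

```python
# Whatever the policy instrument, the producer picks the cheapest compliant
# option: generation cost plus the (explicit or implicit) price of emissions.

def cheapest(options, carbon_price):
    return min(options, key=lambda o: o["cost"] + carbon_price * o["tons_per_mwh"])

options = [
    {"name": "conventional coal", "cost": 40.0, "tons_per_mwh": 1.0},
    {"name": "clean coal (CCS)",  "cost": 95.0, "tons_per_mwh": 0.1},
    {"name": "natural gas",       "cost": 55.0, "tons_per_mwh": 0.4},
]

for price in (0, 30, 60):   # $/ton of CO2
    print(price, "->", cheapest(options, price)["name"])
```

At a zero carbon price conventional coal wins; at any price high enough to knock it out, gas beats clean coal, so subsidized CCS never gets dispatched under these (assumed) cost forecasts.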
(We could, of course, come up with a convoluted policy to keep coal miners employed, similar to how the 1977 amendments to the Clean Air Act were set up to penalize low-sulphur Western coal so as to keep Eastern mines open, but I’m hopeful that we can avoid that kind of perversity this time. If you see laws or regulations that punish natural gas use in electric power, though, you’ll know my hopes have gone unfulfilled.)
Entertaining story of how the Applebee’s restaurant chain is trying to improve asset utilization by adding late-night karaoke and G-rated cut-loose partying to its traditional family-friendly but competitively besieged regular-hours dining experience. The impetus for this innovation appears to have been franchisees who went to their privately-held franchisor and asked for permission to experiment, which was enthusiastically granted.
My favorite part of this story is the superhero-like switch from regular-hours Applebee’s to creature of the night “bee’s” with the “Apple” part of the electric sign turned off, the lights switched from green to white, and the space cleared for dancing and DJs. My second-favorite part is the image of some naive soul coming in for a snack after 10:00 PM–“oh, look honey, Applebee’s is still open; let’s get some pie”–and being confronted with hard-partying suburbanites belting out Foreigner tunes and dancing drunkenly. They’re going to have some delicate marketing and branding issues going forward, but it’s hard not to cheer for the entrepreneurial spirit displayed. After all, there are suburban archipelagoes of shopping centers where the only late-night place to go out will be “bee’s.”
I must admit that Andrew Hacker has been a byword for fatuousness to me for quite some time. His latest, however, is especially meretricious because it plays directly into what so many people desperately want to believe–that ignorance of algebra is A-OK, so we shouldn’t bother trying to teach it. Apparently, algebra drives people to drop out of high school and fail competency tests, and these are Bad Things we can avoid by no longer teaching or testing it. (Next up–placing the thermometer in an ice bucket to cure your fever.)
Hacker’s most superficially compelling argument is that almost no workers use algebra in their day-to-day work, including a large number of people working in technical fields. (In a parody of Deweyism, he does want students to learn “citizen statistics” and “quantitative literacy” so they can understand how the Consumer Price Index is put together, although how you can understand a weighted average without understanding algebra is beyond me.) In case someone brings this job-relevance argument up at a party, here are some quick responses:
1. Jobs don’t require algebra largely because there aren’t very many people who are good at it. If no one could read, jobs wouldn’t require literacy, either, but there would be a significant productivity loss in many cases.
2. In the novel and movie The Caine Mutiny, the intellectual (though evil) character points out that in the Navy “everything was designed by geniuses to be run by idiots.” We could equally say that in the modern world everything is designed by people who know algebra to be used by people who don’t. The fewer people who know algebra, the more unnecessarily elitist that world will be.
a) It’s almost impossible to understand financial relationships without being able to manipulate algebraic formulae. You can plug and chug different numerical values into a calculator or spreadsheet formula, but you’ll have no idea about the sensitivity of results nor will you be able to check or modify the formulae.
b) Probability and statistics are doable, in a very limited and rote way, using prepared templates, for people who don’t know algebra. Anything beyond that, though, is impossible to figure out if you can’t manipulate the symbols properly. Would you trust someone to run a regression or interpret its results if they didn’t know in their bones what a coefficient represented, much less a log-linear relationship?
c) Some functional relationships can be grasped by drawing graphs without algebraic manipulation. If you need more than two dimensions or want to prove which way the curves have to move when you change assumptions, though, you’re going to have to be able to do some algebra.
d) Lots of non-STEM jobs involve spreadsheet formulas of various kinds. Media planning, job-cost estimation, tax planning, budgeting, etc. are common sorts of work for “office” employees. The people who use these spreadsheets probably could get by without algebra, but anyone who wants to write one is going to be at a big disadvantage without it.
3. It indeed would be a lot easier for people to learn algebra if it were contextualized better by being tied to applied topics in other subjects. But that approach would result in algebra becoming a more important step along the way to passing science, economics, and other courses rather than a less important part. Moreover, some of it requires rote memorization just as does learning the multiplication tables or irregular verbs in a foreign language. It will not always be fun, but some necessary things just aren’t fun.
4. Last I checked, very few jobs require writing or reading Elizabethan English, knowing about Reconstruction, drawing or painting, worrying about environmental issues, knowing the parts of a frog or flower, or understanding why potassium burns when placed in water. I bet I could reduce dropouts and improve test performance by removing those topics from the curriculum and who’d know the difference? Voc-ed uber alles!
5. Hacker’s earlier work questioned the cost/benefit ratio of college. If my hypothesis about the higher education market is correct, then Hacker’s suggestion to stop teaching algebra in high school would fuel the very phenomenon he decries.
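As for the parenthetical above about understanding the Consumer Price Index: a price index just is a weighted average of price relatives, with the weights summing to one. A sketch with an invented four-item basket:

```python
# A toy price index: a weighted average of price relatives.
# Weights and price changes are invented for illustration.
basket = [
    # (category, budget weight, price relative to the base year)
    ("housing", 0.40, 1.03),
    ("food",    0.25, 1.08),
    ("energy",  0.15, 1.20),
    ("other",   0.20, 1.02),
]
assert abs(sum(w for _, w, _ in basket) - 1.0) < 1e-9   # weights sum to 1

index = sum(w * rel for _, w, rel in basket)
print(f"index vs. base year: {index:.3f}")   # prices up 6.6% overall
```

Seeing why a 20% spike in a 15%-weight category moves the index by only three points requires exactly the symbol-pushing Hacker wants to drop.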
(Note 1: Anyone who will be offended by light-hearted discussions of the Dark Knight Rises in light of the shootings at the Aurora premier should skip to another part of the Internet. I have no intention of giving murderous attention-seekers the power to hijack all media, but I am aware that not all will agree.)
(Note 2: MINOR SPOILER ALERT)
I’ve just seen the Dark Knight Rises, which was a pretty good movie–not as good as the previous film in the trilogy, unsurprisingly, but exciting and even moving at times, with lots of little moments for each minor character to reveal his true nature and seem like a unique individual.
One minor problem: Batman is supposed to be a master strategist and tactician. He’s chosen to go underground to pursue and confront his enemy, Bane, about whom he has considerable intelligence, including Bane’s background, training, experience, and physical prowess (including his main point of weakness–a mask over his mouth that keeps Bane from suffering intense pain). He can see Bane standing in front of him, a very large individual of obvious strength and questionable agility. Batman is wearing a utility belt filled with grenades, throwing blades, sleep darts, cable launchers, and bolas. He is standing in a large cavern and is capable of operating vertically by shooting lines up to the ceiling and using built-in powered winches. In short, he is in a perfect position to remain outside Bane’s striking distance while hitting him with a variety of entangling, injuring, and even killing weapons.
Out of this cornucopia of options, what does Batman choose? Of course, a head-on bull rush, followed by a slugfest and wrestling contest. That’s the combat equivalent of Neiman-Marcus starting a price war with Wal-Mart. Macho is one thing, unbelievably stupid is another. (It’s true that in the real world, people make stupid mistakes, but in fiction we want Aristotelian probability, not literalness. And if someone does go the literal route, the character’s stupidity should at least be noted by others in the story.)
The fundamental writing problem here was actually reflected way back in the Knightfall comic book series that introduced Bane–his supposed awesomeness as a combatant simply doesn’t match his capabilities. (At least in the comic book, they gave him a device that injected a super-steroid called Venom straight into his head when he needed to pump himself up for extra fighting fury. It still wasn’t enough to make him seem that tough for any foe with speed, agility, and distance weapons, but it made for a striking visual when his veins would bulge out in expansion mode.) So the Nolans gave themselves a tough writing challenge the moment they decided to use the Bane character–another example of a particular strategy causing tough execution problems.
By now, you may be getting sick of reading articles and blog posts about the crisis in higher education. This post is different. It proposes an explanation of why students have been willing to pay more and more for undergraduate and professional degrees at the same time that these degrees are becoming both less scarce and more dumbed down. And that explanation rests on a simple and plausible economic hypothesis.
Freek’s latest post on confirmation bias notes that intellectual commitments can bias which research findings one believes. The tone of the post is that we would all be better off if such biases didn’t exist, but there is definitely a tradeoff here. Greater objectivity tends to go with lower intensity of interest in a subject. (Disinterested and uninterested are correlated, for those old-timers who remember when those words had different definitions.) That’s why you often find that those with strong views on controversial topics–including those with minority or even widely ridiculed opinions–often know more about the topic, the evidence, and the arguments pro and con than “objective” people who can’t be bothered to dig into the matter. Other than partisanship, the only thing that will get people interested enough to seriously assess competing claims is a personal stake in the truth of the matter. (And in all cases, Feynman’s admonition that the easiest person to fool is yourself should be borne in mind.)
Historians of science of all stripes, from romanticists like Paul de Kruif (author of the classic The Microbe Hunters) to sophisticated evolutionists like David Hull in Science as a Process, have reported that intellectual partisanship motivates a great deal of path-breaking research. “I’ll show him!” has spawned a lot of clever experiments. Burning curiosity and bland objectivity are hard to combine.
But how can such partisanship ever lead to intellectual progress? Partisans have committed to high-profile public bets on one or another side of a controversy; their long-term career and immediate emotional payoffs depend not directly on the truth, but on whether or not they “win” in the court of relevant opinion. The key to having science advance is for qualified non-partisan spectators of these disputes to be able to act as independent judges to sort out which ideas are better.
Ideally, these adjacent skilled observers would have some skin in the game by virtue of having to bet their own research programs on what they think the truth is. If they choose to believe the wrong side of a dispute, their future research will fail, to their own detriment. That’s the critical form of incentive compatibility for making scientific judgments objective, well-described in Michael Polanyi’s “Republic of Science” article. If, for most observers, decisions about what to believe are closely connected to their own future productivity and scientific reputation, then the partisanship of theory advocates is mostly a positive, motivating exhaustive search for the strengths and weaknesses of the various competing theories. Self-interested observers will sort out the disputes as best they can, properly internalizing the social gains from propounding the truth.
The problem for this system comes when 1) the only scientific interest in a dispute lies among the partisans themselves, or 2) observers’ control over money, public policy, or status flows directly from choosing to believe one side or another regardless of the truth of their findings. Then, if a false consensus forms, the only way for it to come unstuck is for new researchers to benefit purely from the novelty of their revisionist findings–i.e., enough boredom and disquiet with the consensus sets in that some people are willing to entertain new ideas.
For an economist studying business strategy, an interesting puzzle is why businesspeople, analysts, and regulators often don’t seem to perceive the fungibility of payments. Especially in dealing with bargaining issues, a persistent “optical illusion” causes them to fetishize particular transaction components without recognizing that the share of total gain accruing to a party is the sum of these components, regardless of the mix. Proponents of the “value-based” approach to strategy, which stresses unrestricted bargaining and the core solution concept, ought to be particularly exercised about this behavior, but even the less hard-edged V-P-C framework finds it difficult to accommodate.
- There’s been some noise lately about U.S. telecom providers cutting back on the subsidies they offer users who buy smartphones. None of the articles address the question of whether the telecom firms can thereby force some combination of a) Apple and Samsung cutting their wholesale prices and b) end users coughing up more dough for (smartphone + service). The possibility that competition among wireless providers fixes the share of surplus that they can collect, so that cutting the phone subsidy will also require them to cut their monthly service rates, is never raised explicitly. There is a pervasive confusion between the form of payments and the total size of payments.
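A minimal sketch of the point (contract length and prices invented): two pricing mixes with identical totals are the same deal, however the line items are labeled:

```python
# Fungibility of payments: the customer's burden is the sum over the
# contract, regardless of how it is split between handset and service.

def total_paid(phone_price, monthly_fee, months=24):
    return phone_price + monthly_fee * months

subsidized   = total_paid(199, 85.00)    # cheap phone, pricier service
unsubsidized = total_paid(649, 66.25)    # full-price phone, cheaper service
assert subsidized == unsubsidized        # $2,239 either way
```

If wireless competition pins down that total, “cutting the subsidy” just relabels the components; only a change in the competitive equilibrium (or in Apple’s and Samsung’s wholesale prices) changes who pays what.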