The Chen and Pearl paper has been around for a while in working paper form and recently came out in the Real World Economics Review, also available here from the authors with much clearer typesetting.

The additional textbooks I discuss below are: Amemiya (1985), Kmenta (1986), Davidson and MacKinnon (1993), Gujarati (1999), Hayashi (2000), Wooldridge (2002), Davidson and MacKinnon (2004), Dielman (2005), and Cameron and Trivedi (2005).

**The Issue: Causality in regression models.**

A scientist is attempting to understand the relationship between, say, health and smoking. Let y denote some measure of health and let x denote a measure of smoking intensity, say, number of cigarettes smoked per day. A simple model for health supposes the two outcomes are related by,

y = α + βx + u.

In short, Chen and Pearl consider these issues: how do econometrics textbooks explain what the parameter β means in this model, are they consistent in that interpretation, and generally how well are issues of causality addressed?

That simple-looking equation is much trickier than it appears, as first formally discussed in the econometrics literature by Trygve Haavelmo during the Second World War. For recent discussions, see for example Heckman (2005, 2008), Heckman and Pinto (2013), or blog discussions such as on Pearl’s blog or Andrew Gelman’s blog (note comments from Pearl and from Guido Imbens). First suppose we *define* the random variable u as the difference between y and its conditional expectation:

u = y − E[y|x],

then it is easy to show that the error term u must be mean-independent of x. In econometric jargon, we obtain exogeneity by definition. In this interpretation, the parameter β is implicitly defined through,

E[y|x] = α + βx,

that is, β is by definition the gradient of E[y|x] with respect to x. In the smoking and health example, β is by definition how much health changes on average as we consider a person who smokes one more cigarette per day (specifically *without* the caveat, “other things being equal”).
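As a small illustration (all numbers made up), the following sketch constructs u exactly as defined above, as the deviation of y from a linear conditional expectation, and verifies that the resulting error term is uncorrelated with x by construction:

```python
import numpy as np

# Hypothetical data with a linear conditional expectation E[y|x] = 1 + 2x.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Least-squares estimates of alpha and beta.
xc, yc = x - x.mean(), y - y.mean()
beta = (xc * yc).sum() / (xc ** 2).sum()
alpha = y.mean() - beta * x.mean()

# Define u as the deviation of y from the fitted conditional expectation.
u = y - (alpha + beta * x)
print(np.corrcoef(x, u)[0, 1])  # ~0: "exogeneity by definition"
```

Nothing causal is being claimed here; the orthogonality of u and x is a mechanical consequence of how u was defined.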

This interpretation of the model is merely “agnostic” or “predictive.” An insurance agency, for example, might be interested in estimating β under this interpretation: the answer might help them understand how their payouts will vary if they accept customers who smoke more. But econometricians and other scientists are only rarely interested in such a predictive relationship. Instead, we want to know the causal effect of smoking on health, and the predictive regression generally does not recover that causal effect. Suppose for example we lived in a universe in which a given person’s health is unaffected by their smoking, but also that behaviors and characteristics which lead to low health also tend to lead to more smoking. Then we would tend to estimate negative values for β even though by assumption (in whatever universe we’re discussing) smoking does not affect any person’s health.
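The confounded universe described above is easy to simulate. In this sketch (all numbers hypothetical), smoking has a causal effect of exactly zero on health, yet the predictive regression slope is sharply negative:

```python
import numpy as np

# Smoking has ZERO causal effect on health, but an unobserved trait z
# both increases smoking and lowers health.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                     # unobserved behaviors/traits
x = 10 + 2.0 * z + rng.normal(size=n)      # cigarettes per day, rising in z
y = 50 - 3.0 * z + rng.normal(size=n)      # health, falling in z; x is absent

# Slope of the predictive regression of y on x.
xc, yc = x - x.mean(), y - y.mean()
beta_hat = (xc * yc).sum() / (xc ** 2).sum()
print(beta_hat)  # ≈ -1.2: negative, despite a true causal effect of zero
```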

For this reason econometricians rarely interpret the error term as simply the deviation between the outcome and its conditional expectation. Rather, in a structural interpretation of the equation, β takes a causal interpretation and u is interpreted as summarizing all causes of y other than x. It is well-known that any of: (1) “reverse” causation, (2) omitted variables correlated with the regressors, or (3) measurement error in the regressors, leads to correlation between u and x, which in turn means that the parameter β is no longer the derivative of E[y|x] with respect to x. We would like to know how a randomly selected person’s health would change if we could intervene and exogenously change their smoking; the problem is that the correlation between smoking and health calculated from observational data does not generally answer that question.
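As a concrete illustration of point (3), classical measurement error in the regressor biases the OLS slope toward zero. A sketch with made-up numbers:

```python
import numpy as np

# True causal model: y = 2x + u, but we only observe x* = x + noise.
rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
x_obs = x + rng.normal(size=n)     # classical (independent) measurement error

xc, yc = x_obs - x_obs.mean(), y - y.mean()
beta_hat = (xc * yc).sum() / (xc ** 2).sum()
# plim of beta_hat = 2 * var(x) / (var(x) + var(error)) = 2 * (1/2) = 1
print(beta_hat)  # ≈ 1: attenuated from the true causal effect of 2
```

The estimated slope is biased toward zero by exactly the signal-to-total-variance ratio, the textbook attenuation result.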

**Textbook discussion of the issue. **

The seemingly straightforward issue is not straightforward at all, and exactly what we mean by “causal,” even in the context of simple regression models such as above, is a subject of ongoing multidisciplinary research. Nonetheless, since inferring causal relations from observational data is the defining characteristic of econometric analysis, it seems very reasonable to require that econometrics textbooks should contain lucid discussions of causal relationships and, in so doing, define parameters clearly and unambiguously. Disturbingly, Chen and Pearl find that six popular econometrics textbooks fail, to a greater or lesser extent, to do so.

Chen and Pearl evaluate texts on 10 criteria, which amount to: does the textbook provide at least as much information about causal interpretation as this post does very briefly above, is the text consistent in those interpretations, and does the text provide the equivalent of Pearl’s “do(x)” operator to define causal effects? Other than the “do(x)” criterion, which I don’t think is fair because Pearl’s concept has not caught on in the econometrics literature and (even if it ought to catch on) should therefore not (yet?) appear in current econometrics textbooks, the criteria seem very fair to me. Pity the poor student who attempts to understand how to interpret a structural econometric model after reading this startling passage in Kennedy, for example:

Using the dictionary meaning of causality, it is impossible to test for causality. Granger developed a special definition of causality which econometricians use in place of the dictionary definition: strictly speaking, econometricians should say “Granger-cause” in place of “cause,” but usually they do not. A variable x is said to Granger-cause y if prediction of the current value of y is enhanced by using past values of x.

This is the only passage in the book in which the word “causality” is used, and the claims in that passage are not correct, in no small part because so-called Granger causality is not a causal concept. Although in my view that passage is by far the worst discussion in the six texts discussed, Chen and Pearl show persuasively that each of the discussed textbooks is at times at least vague in its discussion of causal relations. On the other hand, Chen and Pearl are perhaps somewhat uncharitable in some of their discussion. For example, they make much of this passage from Greene,

[In the model] does β measure the value of a college education (assuming the rest of the regression model is correctly specified)? The answer is no if the typical individual who chooses to go to college would have relatively high earnings whether or not he or she went to college…

but in context this appears to be a typo: the passage is rescued if “the OLS estimate of” is inserted in front of β, the passage makes no sense if that or an equivalent edit is not made, and Greene in many, many other places clearly differentiates between mere correlations and causal parameters. Chen and Pearl, however, are not satisfied with an answer Greene gave them in a personal communication as to the meaning of a structural parameter:

In a personal correspondence (2012), Greene wrote, “The precise definition of effect of what on what is subject to interpretation and some ambiguity depending on the setting. I find that model coefficients are usually not the answer I seek, but instead are part of the correct answer. I’m not sure how to answer your query about exactly, precisely carved in stone, what β should be.”

I tentatively side with Greene here, although Chen and Pearl do not specify exactly what question Greene was asked. In structural models, the structural parameters are not necessarily causal effects in and of themselves; rather, they are assumed to be invariant with respect to some well-specified class of disturbances. For example, the deep parameters characterizing Harold Zurcher’s replacement of bus engines are not themselves causal effects, but given estimates of those parameters, the model can answer meaningful causal questions. Exactly what a structural coefficient means is model-dependent.

**Some results from other textbooks.**

Without going into nearly as much detail as Chen and Pearl, I took a look through some other econometrics textbooks to check to see how they discuss, or do not discuss, causality. Specifically, I looked to see whether the regression parameters are anywhere incorrectly defined as gradients of the conditional expectation of the dependent variable, and I tried to find explicit discussions of causal interpretation of estimated models. The texts surveyed below vary widely in level and vintage, including everything from introductory undergraduate to advanced graduate texts, from 1985 through 2005.

**Amemiya (1985), Advanced Econometrics.**

This textbook is now old, indeed ancient, by academic standards, and is relatively technically demanding. It opens, on page 1, by dubiously asserting that the goal of econometrics is to estimate parameters which define the joint distribution of a set of random variables. As far as I can tell, the word “causal” does not appear anywhere, nor are there examples of predictive vs causal interpretation of parameters. Any notions of causality are implicit and framed in purely statistical terms. However, the text does not incorrectly define β as the gradient of E[y|x].

**Kmenta (1986), Elements of Econometrics**

Does not incorrectly define β as the gradient of E[y|x].

There is a fairly long, yet confusing discussion of causality at the start of the chapter on simultaneous systems.

Although the concepts of causality and exogeneity are not identical, it is nevertheless possible to conclude that if a variable Y is–in some sense–caused by a variable X, Y cannot be considered exogenous in a system in which X also occurs. A widely discussed definition of causality has been proposed by Granger.

This is the textbook that I learned undergraduate econometrics from. I don’t remember how I thought of causality in econometric models at the time (possibly because I really didn’t like econometrics as an undergraduate). But it’s hard to see how a student could make much headway in understanding causality from that passage. Causality is first introduced “in some sense,” deliberately avoiding a definition. There follows the incorrect claim that if one variable causes another, they cannot both be treated as exogenous in a system: that is simply not true, since nothing in regression models precludes causal relationships between exogenous variables (as a trivial example, the square of an exogenous covariate is routinely used to capture nonlinear relationships, and the relationship between a variable and its square is deterministic and monocausal). And then the notion of Granger-causality is introduced as the only formally defined causal concept in econometrics.

**Davidson and MacKinnon (1993), Estimation and Inference in Econometrics**

The parameters of the linear regression model are defined in Chapter 1 very abstractly as the set of real numbers defining the subspace spanned by the column vectors of the regressors. β is never incorrectly defined as the gradient of E[y|x]. Simultaneity and omitted variable bias are discussed in purely statistical, as opposed to causal, terms in Chapter 7.

Discusses causality explicitly in section 18.2, “Exogeneity and causality.” The clearest passage is,

But we have not yet discussed the conditions under which one can validly treat a variable as explanatory. This includes the use of such variables as regressors in least squares estimation and as instruments in instrumental variables or GMM estimation. For conditional inference to be valid, the explanatory variables must be predetermined or exogenous in one or other of a variety of senses to be defined below.

which is not very clear at all: the authors intend, I think, the first sentence to mean, “But we have not yet discussed the conditions under which one can treat the coefficient on a variable as reflecting a causal effect.” The matter is then further muddied as later in this subsection the concept of Granger causality is introduced, without clearly differentiating between so-called Granger-causality and causality.

There is an implicit discussion of causality when estimation of supply and demand functions is introduced as an issue to motivate instrumental variable estimation: if we remember from theory that the slopes of these functions are indeed causal effects, then the discussion amounts to asserting that OLS does not recover causal effects in this context.

**Gujarati (1999), Essentials of Econometrics, second edition.**

Does not incorrectly define β as the gradient of E[y|x].

Implicitly defines regression parameters as causal effects (without using the word “causal”) on page 7. On page 8, correctly defines the error term as unobserved causes of the dependent variable, and notes,

Before proceeding further, a warning regarding causation is in order…. Does regression imply causation? Not necessarily. As Kendall and Stuart note, “A statistical relationship, however strong and however suggestive, can never establish causal connection: our ideas of causation must come from outside statistics, ultimately from some theory or other.”

A variant of this warning is repeated on page 124, although the text then somewhat oddly proceeds to give uses for regression analysis which do not include the estimation of causal effects.

Gives examples of omitted variables bias and simultaneity bias which implicitly define the structural parameters as causal effects, and refers again to these parameters when introducing instrumental variables, a topic not pursued in this introductory-level text.

**Hayashi (2000), Econometrics.**

Defines regression parameters as causal effects (without using the word “causal”) on page 4, but also claims on the same page that an econometric model is a “set of joint distributions satisfying a set of assumptions,” which leaves it unclear whether the author intends regression parameters to reflect causal effects or parameters defining statistical distributions.

Introduces the issue of endogeneity noting that, “The most important assumption made for the OLS [sic] is the orthogonality between the error term and the regressors. Without it, the OLS estimator is not even consistent.” Much like Davidson and MacKinnon (1993), differentiates between causation and mere correlation using estimation of the slopes of supply and demand curves as an example, albeit without using any variant of the word, “cause.”

**Wooldridge (2002), Econometric Analysis of Cross-Section and Panel Data.**

Chen and Pearl discuss “baby” Wooldridge, the undergrad text. Does Papa Wooldridge fare better?

The opening passage of the text, Section 1.1 of the Introduction, begins,

The goal of most empirical studies in economics and other social sciences is to determine whether a change in one variable, say w, causes a change in another variable, say y…. Because economic variables are properly interpreted as random variables, we should use ideas from probability to formalize the sense in which change in w causes a change in y. The notion of ceteris paribus… is the crux of establishing a causal relationship. Simply finding that two variables are correlated is rarely enough….

Goes on to define regression parameters as partial derivatives of conditional expectations, although not of E[y|x] but (in our notation) of E[y|x, u], the expectation conditioning on the unobservables as well.

Includes the first, to the best of my knowledge, lengthy discussion of the counterfactuals/treatment effects literature (Chapter 18), and links the preceding discussion of regression models to the treatment effects literature.

**Davidson and MacKinnon (2004), Econometric Theory and Methods.**
We can make a fixed-effects-type observation here, as we have another text from James and Russell, about a decade later than the 1993 text discussed above. How do the 1993 and 2004 books differ? The introductory passage on page 1 introduces regression parameters and implies their definition depends on how the error term is defined, although at this point exactly what β means is deliberately left vague; its interpretation is “quite arbitrary,” the authors correctly note. After introducing the equivalent of the model y = α + βx + u, the text states (in our notation),

At this stage we should note that, as long as we say nothing about the unobserved quantity u, [the equation] does not tell us anything. In fact, we can allow u to be quite arbitrary, since for any given [value] the model… can always be made to be true by defining u suitably.

A similar passage on page 313 notes that, when a regressor is measured with error, OLS estimation gives the desired result if the error term is defined as simply the difference between the observed outcome and its expectation with respect to the observed regressor, but “in most cases” in econometrics that definition does not allow us to estimate the parameters we wish to estimate.

More or less the same discussion of supply and demand as in the 1993 text can again be interpreted as an implicit discussion of causality.

**Dielman (2005), Applied Regression Analysis, 4th ed.**

Incorrectly defines β as the slope of E[y|x] on page 75, although in the context of a model explicitly described as a “descriptive regression.” Does not immediately clarify, however, when a regression model should be interpreted as merely descriptive.

Discusses “causal” versus “extrapolative” regression models in the narrow context of time series modeling on page 112, but does not make it clear what the intended difference between these concepts is, nor is it clear why this discussion is limited to time series models. Claims that the issue with causal models is, “causal models require the identification of variables that are related to the dependent variable in a causal manner. Then data must be gathered on these explanatory variables to use the model.” This makes it seem that simple correlations can be used to infer causal relations so long as we can observe all the relevant variables. However, also notes on page 118 that “A common mistake made when using regression analysis is to assume that a strong fit (a high R²) of a regression of y on x automatically means ‘x causes y.’” There is then a brief discussion of endogeneity through simultaneity and through omitted variables, which is quite clear, particularly for an introductory text.

**Cameron and Trivedi (2005), Microeconometrics: Methods and Applications. **

A few sentences into the introduction on page 1, notes that,

A key distinction in econometrics is between essentially descriptive models and data summaries at various levels of statistical sophistication and models that go beyond mere associations and attempt to estimate causal parameters. The classic definitions of causality in econometrics derive from the Cowles Commission simultaneous equations model that draw sharp distinctions between exogenous and endogenous variables, and between structural and reduced form parameters. Although reduced form models are useful for some purposes, knowledge of structural or causal parameters is essential for policy analysis.

This focus on causal parameters is maintained throughout. Chapter 2 is titled “Causal and noncausal models”; it provides a quite high-level formal discussion of causality in the context of classical simultaneous equations models and introduces topics in causal modeling which will be covered through the remainder of the book, including the Rubin Causal Model and a variety of methods researchers use to identify causal parameters. Given this emphasis, it is unsurprising that regression parameters are not incorrectly defined as the gradient of E[y|x]. Discusses counterfactual modeling at length in Chapter 25, “Treatment Evaluation,” linking the methods in this literature to previous discussions of single-equation regression, matching, instrumental variables, and regression discontinuity designs.

**Remarks.**

The additional textbooks briefly surveyed suffer to a greater or lesser extent from the same weak discussions of causality as the texts surveyed by Chen and Pearl, with the exceptions of Wooldridge (2002) and particularly Cameron and Trivedi (2005), which I think would fail only Chen and Pearl’s criterion that the equivalent of the “do(x)” concept should be included (and arguably, an equivalent is included).

There is something of a puzzle here in that the oral tradition in applied econometrics heavily emphasizes causation, but it would seem that relatively few textbooks explicitly discuss the matter. In journal articles, seminars, and economics classrooms, there is consensus that the goal of econometric analysis is almost always to estimate a model which can answer causal questions. Overcoming the various serious challenges that arise in making such attempts is the core of most papers in applied econometrics, and how successful a paper is in achieving that goal is the target of sharp-eyed readers and referees. What explains the discrepancy between how economists think about causation and what appears in most econometrics textbooks?

First, econometrics textbooks tend to be authored by theoretical econometricians, who tend to be situated much closer to the interface between statistics and econometrics than applied researchers. Since statisticians do not tend to think in terms of causality, perhaps some of that statistical tradition makes its way over to econometrics textbooks.

Second, statistical concepts which *in the context of applied econometrics* refer to causal concepts are nonetheless presented as statistical concepts in econometrics textbooks, but it is understood that the underlying objects of inference are still causal. A “biased estimate of β” is a purely statistical concept, but if a referee or seminar attendee were to use that phrase they almost certainly mean, “the estimate you present is not a good estimate of the causal effect in which we are interested.” Similarly, a remark like, “your data doesn’t credibly identify β” appears to be a claim about a purely statistical matter, but the person making that claim almost certainly means, “the causal parameter we would like to estimate is hopelessly confounded, given the data we have and the model you’ve developed.” Further to this point, I note that way back in the old-timey days of the 1990s, I took a sequence of econometrics courses from MacKinnon and Davidson based on their 1993 textbook. Even though this text does not include a good discussion of causality using that term, and it is notably lacking in applied examples, it was always very clear to me (and, I think, my classmates) that we were ultimately interested in estimating models which allow us to make causal inferences, as opposed to merely characterizing the joint distribution of some set of variables.

Third, the language of counterfactuals in which the literature on causation is currently being developed is a relatively recent development. As noted above, Wooldridge (2002) is, to the best of my knowledge, the first econometrics textbook to include an extended discussion written in this language. What amount to the same concepts were previously, as in the examples in the previous point, discussed using language borrowed from statistics. The slightly more recent text by Cameron and Trivedi (2005) is substantially more oriented towards causal modeling than any of the other texts, and also includes lengthy discussion of the recent literature on modeling heterogeneous causal effects. My impression from reading Chen and Pearl and flipping through the texts above is that the textbooks tend to be getting better over time in terms of discussing causation, presumably in part because these ideas are permeating the applied econometrics literature. Notably, the oldest textbooks discussed above (Amemiya 1985 and Kmenta 1986) present the vaguest discussions of causal concepts.

The oral tradition in economics is not well-reflected in current, or particularly in outdated, textbooks. Chen and Pearl do those of us who teach or study econometrics a service in highlighting this problem, and hopefully discussion in future textbooks will continue to improve.


Over on Worthwhile Canadian Initiative, Nick Rowe made an attempt a few days ago to explain what is wrong with an elaborated version of Steve Keen’s argument, published in the Real World Economics Review (paper). The elaborated version includes an alleged proof of an assertion in the original paper. This post points out the conceptual and mathematical errors in that “proof.” These are errors in high-school-level mathematics and elementary microeconomics.

Looking back at what I wrote about Keen’s argument in 2002, I see I pitched it at too high a level. If you can follow the argument below, you don’t need to read that piece to see for yourself that Steve Keen is just plain wrong. So I am going to attempt to write this post in such a way that anyone with a reasonable grasp of introductory calculus can follow along, even if you’ve never studied economics. I also think both Nick’s blog post and my previous piece err in possibly leaving the reader with the impression that Keen’s argument would be correct if the competitive model were unrealistic or failed empirically, but that’s not the issue. Again, Keen claims that his results follow from textbook assumptions, and that everyone but him has the *math* wrong.

A “competitive” firm in economic theory is one which takes prices as given, ignoring the effect of its own output on price. This is an *assumption*, not a result. Keen notes, correctly, that this assumption is false when there are a finite number of firms. Suppose demand is given by P(Q), where P is price and Q is the total output of all firms. Consider any one firm, which without loss of generality I will call firm 1 (the same as firm i in Keen’s paper), let q1 denote that firm’s output, and let R denote the total output of the rest of the firms, which in general depends on q1. Then we have Q = q1 + R, and, as Keen says, price must fall as q1 increases if we hold R constant, since P(Q) is by assumption decreasing in its argument.

Along with Keen, suppose firm 1 does not take price as given. Rather, firm 1 acts to maximize its own profits taking into account that it will fetch a lower price for each incremental unit it produces, holding constant the output of all other firms. If firm 1 produces q1 units, its revenues will be P(q1 + R)q1, and its profits will then be

π1 = P(q1 + R)q1 − C(q1),     (1)

where C(q1) is the cost of producing q1 units. What value of q1 maximizes firm 1’s profits? To find that, we find how much profits change as output changes, and find the maximum by setting that derivative to zero:

dπ1/dq1 = P'(q1 + R)(1 + dR/dq1)q1 + P(q1 + R) − C'(q1) = 0.

If we hold other firms’ outputs constant, as Keen claims to do, then dR/dq1 = 0 and the expression simplifies to

P'(Q)q1 + P(Q) = C'(q1),     (2)

which is the textbook solution. “Marginal revenue” here means “how much does revenue change when q1 increases by one unit?” Note that the left-hand side is firm 1’s marginal revenue and the right-hand side is firm 1’s marginal cost, so the firm equates the two to maximize profits.
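As a sanity check on equation (2), we can maximize firm 1’s profit by brute force under assumed primitives (linear demand P(Q) = 100 − Q, constant marginal cost of 10, rivals’ output fixed at R = 30; all numbers hypothetical) and confirm the maximizer satisfies the textbook first-order condition:

```python
import numpy as np

# Hypothetical primitives: P(Q) = 100 - Q, C(q) = 10 q, rivals fixed at R = 30.
R = 30.0

def profit(q1):
    Q = q1 + R
    return (100 - Q) * q1 - 10 * q1

# Brute-force maximization of firm 1's profit over a fine grid.
grid = np.linspace(0.0, 70.0, 700_001)
q1_star = grid[np.argmax(profit(grid))]

# Equation (2): P'(Q) q1 + P(Q) = C'(q1)  =>  -q1 + (100 - q1 - R) = 10.
q1_foc = (100 - R - 10) / 2
print(q1_star, q1_foc)  # both 30.0
```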

Steve Keen claims that that bit of math is wrong. He claims (page 62):

However, the individual firm’s profit is a function, not only of its own output, but of that of all other firms in the industry. This is true regardless of whether the firm reacts strategically to what other firms do, and regardless of whether it can control what other firms do. The objectively true profit maximum is therefore given by the zero of the total differential: the differential of the firm’s profit with respect to total industry output.

Let’s consider that claim. Yes, firm 1’s profits in equation (1) depend on firm 1’s own output and on the output of all other firms, R. No, that does not imply that we solve firm 1’s profit maximization problem by taking the derivative of equation (1) with respect to total output. And, no, the term “total derivative” does not mean “derivative with respect to a total.” This conceptual confusion then leads Keen to incoherent math: he takes the derivative of firm 1’s profits with respect to, in the notation here, Q = q1 + R (equation 0.4). That derivative isn’t defined, because firm 1’s profits don’t depend solely on the sum of its own output and the output of all other firms.

The math Keen proceeds to do treats total output, Q, as if it’s a parameter that affects all firms’ outputs. Instead of Q we could use some other symbol to denote this variable to highlight that it’s not really total output, but I will stick with Q. Keen treats each firm’s output as depending on this parameter Q and on the outputs of all other firms, so we could write

q1 = q1(Q, q2, …, qn),

and likewise for all other firms’ outputs, to clarify what’s being assumed. Keen then asks what value of this parameter Q maximizes firm 1’s profits. Notice this problem has nothing to do with the problem we’re supposed to be considering: how does firm 1 set its own output to maximize its own profits?

The way Keen has set this up, as the parameter Q changes, a firm’s output changes for two reasons: there is a direct effect of Q on each firm’s output, and there is an indirect effect operating through the effect of Q on other firms’ outputs. Keen takes the derivative of firm 1’s profits with respect to this parameter Q. He claims to treat firms as atomistic, that is, to assume they ignore the effect of their own outputs on other firms’ outputs, by setting the derivatives of all firms’ outputs with respect to all the other firms’ outputs to zero. But he sets the derivatives of all firms’ outputs with respect to the parameter Q to one. Since firm 1 is for some reason choosing this parameter Q, to increase its own output by one unit, it increases Q by one unit. When firm 1 increases Q by one unit, all other firms also increase their output by one unit. Keen claims repeatedly and explicitly that he assumes other firms do not respond to changes in firm 1’s output, but the math he actually does assumes otherwise.

Getting back to the problem Keen for some reason considers: How should firm 1 set Q to maximize its own profits? Take the derivative of firm 1’s profits (1) with respect to the parameter Q and set it to zero to find

dπ1/dQ = P'(q1 + R)[dq1/dQ + dR/dQ]q1 + P(q1 + R)(dq1/dQ) − C'(q1)(dq1/dQ) = 0.     (3)

Keen assumes that all firms, including firm 1, increase their output by one unit when Q increases by one unit. Then trivially dq1/dQ = 1, and since there are (n − 1) firms other than firm 1 and they all increase their output by one unit too, dR/dQ = n − 1. The term in square brackets is then equal to (n − 1) + 1 = n, and the equation above simplifies to

P'(Q)nq1 + P(Q) = C'(q1).     (4)

That is Keen’s major result, equation (0.9). It differs from the textbook result, equation (2), in that the number of firms, n, appears in the first term. That is, again, because as Q increases, all other firms’ outputs increase at the same rate in the problem Keen solves. Firm 1 then must take into account that, as it increases output, price will fall much more rapidly when all other firms respond by increasing their output than when all other firms’ outputs are fixed. Keen does not solve firm 1’s problem taking all other firms’ outputs as given.

Keen insists that, if we do the math correctly, profit-maximizing firms do not equate marginal revenue and marginal cost. But equation (4), which is, again, Keen’s solution, says that the firm sets Q to equate marginal revenue (the left-hand side) with marginal cost (the right-hand side). Keen appears to think that marginal revenue is defined as the expression “P'(Q)q1 + P(Q),” so whenever marginal revenue cannot be expressed in exactly that way, it’s not marginal revenue. All of the claims about marginal revenue not equalling marginal cost follow from that basic conceptual error. Generally, any optimization problem that can be expressed as maximizing f(x) − g(x) with respect to x has the property that f'(x) = g'(x) at an internal solution (assuming differentiability, etc., which Keen does), so marginal revenue equalling marginal cost is a very general condition. Keen thinks he’s arguing against the “neoclassical dogma” that equates marginal revenues and costs, but he’s actually arguing that the sum rule of differentiation doesn’t hold.

We can also see that Keen implicitly assumes all firms react to changes in firm 1’s output by increasing their own output by the same amount by noting that that assumption is the same as an old-school approach to strategic interaction among firms called “conjectural variations” (Keen implies later in the paper, starting on page 74, that he invented this approach; it’s actually not just textbook, it’s outdated textbook, as it’s an approach which has been eclipsed). A “conjectural variation” of 1.0 means here that firm 1 assumes that all other firms will react to a change in q1 by changing their own outputs exactly as q1 changes: if firm 1 increases its output by one unit, it expects each other firm to also increase its output by one unit in response. So if q1 goes up by one unit, the output of the other (n − 1) firms, R, changes by (n − 1) units. Consider again the first-order condition that led to equation (2), but set dR/dq1 = n − 1 instead of zero to find

P(Q) + nq1P'(Q) = c'(q1),

which is exactly the same as equation (4), which, again, is the same as Keen’s equation 0.9.

Assuming conjectural variations of one is almost but not quite the same as simply assuming that firms collude. If firms collude, firm 1 would set its own output to maximize industry profits rather than its own profits, which entails setting industry marginal revenue, rather than firm 1’s own marginal revenue, equal to firm 1’s marginal cost. One sufficient condition for Keen’s problem to be exactly the same as assuming collusion is that we restrict attention to outcomes in which all firms produce the same amount. Call that amount q. Then firm 1’s profits can be expressed

P(nq)q − c(q),

and differentiating with respect to q gives

P'(Q)Q + P(Q) = c'(q),

because total output Q is equal to nq. P'(Q)Q+P is industry marginal revenue, so this is exactly the same as simply finding the collusive outcome. Another way to see this is to note that if all firms produce the same output and have the same costs, then total profit is just n times the profit of any given firm, so maximizing any given firm’s profits is just maximizing (1/n) times total profits, so the solutions must be identical. This is just a clumsy way of solving the Econ 101 monopolist’s problem.
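The “n times any firm’s profit” argument is easy to check numerically. A minimal sketch with hypothetical demand and cost curves (none of these functional forms or numbers are from Keen’s paper):

```python
# Hypothetical functional forms: inverse demand P(Q) = 10 - Q,
# cost c(q) = 2q, and n = 5 identical firms.
n = 5

def firm1_profit(q):
    # Keen's problem: all firms move together, so Q = n*q and firm 1
    # earns P(n*q)*q - c(q).
    Q = n * q
    return (10 - Q) * q - 2 * q

def industry_profit(q):
    # Collusion: with identical firms, total profit is n times firm 1's.
    return n * firm1_profit(q)

grid = [i / 1000 for i in range(0, 2001)]      # candidate outputs per firm
q_keen = max(grid, key=firm1_profit)           # solves Keen's problem
q_collusive = max(grid, key=industry_profit)   # solves the collusive problem
```

With these numbers both problems are maximized at q = 0.8, i.e., industry output Q = 4, which is exactly the monopoly output for this demand and cost configuration.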

Steve Keen’s arguments are simply wrong. They cannot be rescued by any appeal to realism or empirical evidence, because he is arguing about math, not empirical implications, and he simply has the math wrong. It’s no surprise his paper was rejected at every reputable economics journal to which he sent it. And I am hardly the first person to point out that Keen seems to misunderstand very simple issues in basic mathematics and microeconomics. For example, here’s David Stern, writing in *Ecological Economics*—hardly a bastion of mainstream thought—about the versions of these arguments Keen puts forth to laymen in his book *Debunking Economics* (ungated .pdf),

However, despite containing much useful material the book is seriously flawed. Almost all the new criticisms of economics put forward by the author are wrong. While the author claims to know mathematics better than most economists, the mathematics in these arguments is incorrect. Some of these errors are glaring and will be apparent to anyone trained in basic calculus; others are more subtle and may not be picked up by people who have not taken advanced economics courses.

Steve Keen is offering just plain wrong arguments about very basic versions of very basic models taught to second-year undergraduates. I hope that people who take these arguments seriously attempt to reproduce Keen’s results, so they can demonstrate for themselves that Keen is wrong. If you can’t do basic calculus, consider this: it is either the case that Keen has made basic errors in basic math, or it is the case that hundreds of thousands of economists and mathematicians over many generations have all made basic errors in basic math. Which seems more likely?

I’ll close by noting that the professional literature on strategic interactions among firms, which falls under the field of economics called Industrial Organization, is highly technical and empirical. Mainstream economics, which has not been reasonably pigeon-holed as “neoclassical economics” for many decades, investigates the behavior of firms in uncertain, dynamic environments, typically using Bayesian game theory, often with a focus on when and how firms will be able to maintain implicit collusion (that is, keep prices high to make more money, at the expense of consumers). Over the last two decades researchers have developed structural microeconometric methods to deeply integrate these models with extensive empirical evidence on firms’ behavior (here’s a recent non-technical survey of this literature by Einav and Levin). Steve Keen does not mention any of this literature when he attacks mainstream economics. He limits attention to basic theory found in introductory textbooks, and his analysis of those models is just plain wrong.

Tagged: Debunking Economics, microeconomics, Steve Keen

Yesterday Milner wrote a piece titled “Why ‘efficient markets’ are merely wishful thinking.” The first half consists of an argument that austerity measures in Europe have been harmful. Then it goes off the rails: there’s a jarring segue, starting with the claim that austerity measures somehow show “how hard it can be to kill off a bad idea in the world of economics,” into an interview with Orrell centred on the EMH and Orrell’s belief that economists believe markets are perfect, which “provides a defence for stratospheric CEO salaries and widening income inequalities.”

It took me three reads to figure out how Milner thinks the first and second half of his article fit together: Milner isn’t aware that market “efficiency” in the sense of the EMH is not the same as “efficiency” in the more common Pareto sense (which itself doesn’t imply laissez faire policy, but I digress). Milner proceeds to repeat Orrell’s brutal misunderstanding of the EMH as implying “the only time natural market efficiency is seriously threatened is when heavy-handed governments meddle in the process.” Which is apparently something economists think because “that’s the perfect theory for the 1 per cent”—recall Orrell thinks academic economics is “a giant global conspiracy” to enrich autocrats.

Is it true that financial economics takes the EMH as sacrosanct, never challenging it empirically due to its beauty and/or its alleged ability to further the interests of rich people? Of course not. Finance folks have spent the last four decades writing thousands of empirical papers studying aspects of efficient markets hypotheses—note the plural; neither Orrell nor Milner seems to be aware that the hypothesis comes in flavours other than its strong form. No one takes the strong form as empirically viable (“Since there are surely positive information and trading costs, the extreme version of the market efficiency hypothesis is surely false,” Eugene frickin’ Fama, 1991). Extensive empirical evidence in mainstream journals documents when and how prices violate the weak form of the EMH, see for example Lin and Brooks (2010). On the other hand, it is empirically well-documented that the weak version is a good enough approximation that investors can’t exploit violations to systematically beat the market, and I am not aware of anyone, including serious critics of mainstream economics, who doubts that.

On Milner’s repetition of Orrell’s claim that economists ignore income inequality: There is a vast literature in mainstream economics on the causes and consequences of income inequality. Economists do not ignore income inequality, or labor market outcomes more generally—we actually have a whole field obscurely called “labor economics” focused on such phenomena!

Milner thinks it’s hard to kill a bad idea in the world of economics. He should consider how hard it is to kill a bad idea—the idea that economists think markets are invariably perfect—in the world of ill-informed economics journalism.

Tagged: efficient markets hypothesis

reg y x, robust.

Everyone knows that the usual OLS standard errors are generally “wrong,” that robust standard errors are “usually” bigger than OLS standard errors, and it often “doesn’t matter much” whether one uses robust standard errors. It is whispered that there may be mysterious circumstances in which robust standard errors are smaller than OLS standard errors. Textbook discussions typically present the nasty matrix expressions for the robust covariance matrix estimate, but do not discuss in detail when robust standard errors matter or in what circumstances robust standard errors will be smaller than OLS standard errors. This post attempts a simple explanation of robust standard errors and circumstances in which they will tend to be much bigger or smaller than OLS standard errors.

**Expressions for OLS and robust standard errors.**

Consider the univariate linear model

y_i − ȳ = β(x_i − x̄) + u_i,

where y_i is the dependent variable, x_i is a covariate, u_i is the error term, and β is the parameter over which we would like to make inferences. I’ve omitted a constant by expressing the model in deviations from sample means, denoted with overbars. Assume u_i is mean independent of x_i and serially uncorrelated, but allow heteroskedasticity: E[u_i² | x_i] = σ_i². Let β̂ denote the OLS estimate of β.

If we erroneously assume the error is homoskedastic, we estimate the variance of the OLS estimate β̂ with

s² / Σ(x_i − x̄)²,

where s² = Σû_i² / (n − 2) and the û_i are the OLS residuals. I will refer to the square root of this estimate throughout as the “OLS standard error.” When the errors are heteroskedastic, s² converges to the mean of the σ_i²; denote that mean σ̄². However, the true sampling variance of β̂ can easily be shown to be

Σ(x_i − x̄)² σ_i² / [Σ(x_i − x̄)²]².

Robust standard errors are based on estimates of this expression in which the σ_i² are replaced with squared OLS residuals, or sometimes slightly more complicated expressions designed to perform better in small samples, see for example Imbens and Kolesár (2012).
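These formulas are simple to compute directly. Here is a minimal sketch in Python (the function name and the tiny example dataset are mine, not from the post), computing the OLS slope, the conventional standard error, and the HC0 robust standard error:

```python
import numpy as np

def ols_and_robust_se(x, y):
    """Slope estimate with conventional (homoskedastic) and HC0 robust
    standard errors, for the univariate model in deviations from means."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()   # deviations absorb the constant
    sxx = np.sum(xd ** 2)
    beta = np.sum(xd * yd) / sxx          # OLS slope
    u = yd - beta * xd                    # OLS residuals
    s2 = np.sum(u ** 2) / (len(x) - 2)    # homoskedastic error-variance estimate
    v_ols = s2 / sxx                      # conventional variance of the slope
    v_rob = np.sum(xd ** 2 * u ** 2) / sxx ** 2   # HC0 robust variance
    return beta, np.sqrt(v_ols), np.sqrt(v_rob)

beta, se_ols, se_rob = ols_and_robust_se([0, 1, 2], [-1, 0, 2])
```

Even on this three-observation toy dataset the two standard errors differ; here the robust one happens to be the smaller of the two, because the residuals are smallest where x is far from its mean.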

**When do robust standard errors differ from OLS standard errors?**

Compare the expressions above to see that OLS and robust standard errors are (asymptotically) identical in the special case in which the error variances σ_i² and the squared deviations (x_i − x̄)² are uncorrelated, in which case

Σ(x_i − x̄)² σ_i² / [Σ(x_i − x̄)²]² = σ̄² / Σ(x_i − x̄)².

If, on the other hand, σ_i² and (x_i − x̄)² are positively correlated, then OLS standard errors are too small and robust standard errors will tend to be larger than OLS standard errors. And if σ_i² and (x_i − x̄)² are negatively correlated, then OLS standard errors are too big and robust standard errors will tend to be smaller than OLS standard errors. These cases are illustrated in the graphs: in the left panel, the variance of the error terms increases with the distance between x_i and its mean x̄, whereas in the right panel observations are most dispersed around the regression line when x_i is near its mean.

The graphs have been constructed such that the unconditional variance of the error terms and the variance of x are the same in each graph. But by inspection we can guess that our estimate of the slope is much less precise if the data look like the left panel than the right panel: perform a thought experiment to see that lots of regression lines fit the data in the left panel quite well, but the data in the right panel do a better job pinning down the slope. There is more information about the relationship between x and y in the data in the right panel even though the variance of x and the unconditional variance of the error term are identical.

We see that heteroskedasticity doesn’t matter *per se*; what matters is the relationship between the variance of the error term and the covariates—if the errors are heteroskedastic but the error variance is unrelated to x, we can safely ignore the heteroskedasticity. To see why this is so, recall that in the homoskedastic case the variance of the OLS estimate is inversely proportional to Σ(x_i − x̄)². If we add one more observation for which x_i happens to equal x̄, the variance of our estimate doesn’t change—there is no information in that observation about the relationship between x and y. As the draw of x_i moves farther from its mean, the variance of the OLS estimate falls more and more, because such draws, in the homoskedastic case, are more and more informative.

Now consider the case in which the variance of the error term increases with the distance between x_i and x̄, as in the left panel of the graph above. When we get one more observation, the amount of information it contains increases with that distance, for the same reasons as in the homoskedastic case, but this effect is blunted by the higher variance of the error. The amount of information contained in a draw in which x_i is far from its mean is lower than the OLS variance estimate “thinks” there is, so to speak, because the OLS variance estimate ignores the fact that such draws are more highly dispersed around the regression line. The OLS standard errors in this case are too small.

If on the other hand the variance of the error term decreases with the distance between x_i and x̄, then observations of x_i far from its mean both contain more information for the usual reason in the homoskedastic case *and* are less dispersed around the regression line, as in the right panel of the graph above. These observations are even more highly informative than the OLS variance estimate “thinks” they are, and the OLS standard errors will tend to be too *large*. In this case, robust standard errors will tend to be *smaller* than OLS standard errors.

**Summarizing.**

The upshot is this: if you have heteroskedasticity but the variance of your errors is independent of the covariates, you can safely ignore it, and if you calculate robust standard errors anyway they will be very similar to OLS standard errors. However, if the variance of your error terms tends to be higher when x_i is far from its mean, OLS standard errors will tend to be biased down, and robust standard errors will tend to be larger than OLS standard errors. In the opposite case, in which the variance of the error terms tends to be lower when x_i is far from its mean, OLS standard errors will tend to be too large, and robust standard errors will tend to be smaller than OLS standard errors. With real data it’s commonly but not always going to be the case that the variance of the error will be higher when x_i is far from its mean, explaining the result that robust standard errors are typically larger than OLS standard errors in economic applications.
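Both tendencies are easy to verify by simulation. A minimal sketch (the two error-variance patterns below are my own illustrative choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
xd = x - x.mean()
sxx = np.sum(xd ** 2)

def both_se(u):
    """OLS and HC0 robust standard errors for the slope in y = x + u."""
    y = x + u
    yd = y - y.mean()
    beta = np.sum(xd * yd) / sxx
    resid = yd - beta * xd
    se_ols = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
    se_rob = np.sqrt(np.sum(xd ** 2 * resid ** 2) / sxx ** 2)
    return se_ols, se_rob

# error variance increasing in the distance of x from its mean:
se_ols_up, se_rob_up = both_se(np.abs(x) * rng.normal(size=n))
# error variance decreasing in the distance of x from its mean:
se_ols_dn, se_rob_dn = both_se(rng.normal(size=n) / (1 + x ** 2))
```

In the first case the robust standard error comes out larger than the OLS standard error; in the second it comes out smaller, exactly as the discussion above predicts.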

Tagged: econometrics, robust standard error, statistics

Click here to download a copy (debunk.pdf).

Unfortunately, the link to Keen’s paper on the first page is broken. I attempted to get the paper from Keen’s site, but it’s now behind a paywall! I think the paper was called “A 75th Anniversary Gift for Sraffa,” but I failed to locate a copy.

Tagged: economics

In a recently released NBER working paper, “Behavioral hazard in health insurance,” Katherine Baicker, Sendhil Mullainathan, and Joshua Schwartzstein consider behavioral biases that lead people to (specifically, and with loss of generality) underutilize health care. How should we think about designing health insurance in the presence of such biases?

We have solid evidence that changing the copayment (the amount you pay out of pocket) affects use of care, so the design of health insurance plans matters for both our finances and our health. For example, the graph shows results from the RAND health insurance experiment, in which people were randomly assigned various levels of health insurance. People assigned to pay high prices for care used less care. In Canada, patients face a copayment of zero for “necessary” care, which in the standard model implies we get way too much health care—lots of treatments for which costs exceed benefits. We’re at the level of care associated with a coinsurance rate of zero in the graph, and the standard model tells us that even small out-of-pocket payments from patients would greatly reduce demand for treatments. Further, we should expect those forgone treatments to have very little net benefit, so we might greatly reduce costs with little consequence for our health.

The standard model helps us to explain overuse of expensive care with low health benefits. However, it is difficult to reconcile with evidence that people often underutilize certain treatments: treatments with minimal side effects, low prices, and large health benefits. For example, Choudhry *et al* (2011) show that eliminating a roughly $20 copayment heart attack patients made for statins, beta blockers, and other drugs substantially increased adherence. The standard model requires us to infer that patients who would take the drugs at a price of zero but not at $20 either receive less than $20 worth of health benefits from the drugs or experience severe side effects which greatly reduce net benefit. Neither of these hypotheses sits well with the clinical evidence on efficacy and side effects.

Baicker *et al* consider behavioral models to help understand such outcomes. They start with a simple rational choice model as a point of departure. There is one illness, with severity s which varies across people. Everyone pays an insurance premium (or tax) M, and people who choose to receive treatment must also pay a copayment p. The treatment leads to an increase in health worth b(s), with b increasing in s. A person with income y who receives treatment gets utility u(y − M − p) + b(s), and a person who does not receive treatment gets u(y − M). In this simple setup a person will choose to receive treatment if b(s) > u(y − M) − u(y − M − p), that is, if the health benefits are worth more to them than the copayment they must make. Since people are rational and have full information in this model, anything that makes price deviate from marginal cost then causes inefficiency *ex post*.

Optimal insurance contracts in this environment involve over-utilization when people are rational and risk-averse. The copayment that maximizes social welfare trades off the cost of providing treatment, the benefit of reduced financial risk (which depends on the curvature of the utility function), and the elasticity of demand for care. More elastic demand implies more moral hazard, and more moral hazard means copayments should be higher. For example, if the price of a visit to an emergency room rises from $50 to $100 and almost no one is deterred from emergency care, then moral hazard is not a big issue and insurance mostly reduces risk, which means in turn that we should heavily insure emergency care. As the authors emphasize, policy makers in this world only need to know the elasticity of demand and the degree of risk aversion to design optimal insurance systems; they do not need to know how effective care is (the schedule b(s)) because rational, fully-informed agents make their decisions on the basis of health benefits.

The result that the elasticity of demand determines optimal insurance leads to some strange conclusions. For example, demand for beta blockers appears to be about as elastic as demand for cold remedies, even though beta blockers are “essential” and cold remedies are not (to put it mildly). A policy maker should then set similar copayments for cold remedies and beta blockers.

But suppose people make systematic errors. They choose treatment if b(s) + ε exceeds the utility cost of the copayment, where ε can represent a variety of “internalities,” that is, behavioral biases, including present bias, inattention, and false beliefs (systematic over- or underestimation of efficacy). Here, b(s) is the “experienced utility” of treatment whereas b(s) + ε is the “decision utility” of treatment. Conventionally, these coincide: if you choose A over B, you are better off with A. Here, when you choose A over B, you might be better off with B.

In effect, the paper considers what happens when we allow for the possibility that demand does not coincide with marginal benefits, and much of the analysis is similar to standard analysis of activities with positive externalities, for example, vaccines against communicable diseases. Subsidizing such a vaccine such that price falls below marginal cost can be sound policy; similarly, we may want insurance to decrease the price of some treatments below cost, even if everyone is risk neutral. The graph illustrates the outcome with behavioral underutilization: suppose first that price is set to equal marginal cost. The blue line is the demand curve, so the outcome is Q treatments. However, marginal benefits do not coincide with demand; marginal benefits are given by the green line. Setting the price to zero through full insurance increases treatments to Q’. In the standard model, we would conclude that moral hazard leads to a welfare loss equal to the area shaded green. In the behavioral model, we instead conclude that setting the price to zero increases welfare by an amount equal to the area of the blue triangle.
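The welfare comparison in the graph can be made concrete with a toy numeric example (the linear demand curve, the constant behavioral wedge, and all numbers below are my own illustrative assumptions, not from the paper):

```python
# Hypothetical linear setup: decision (demand) value of the Q-th treatment
# is 10 - Q; people undervalue care by a constant wedge eps = -4, so the
# true marginal benefit curve lies above demand; marginal cost is c = 6.
decision_value = lambda Q: 10.0 - Q
eps = -4.0
marginal_benefit = lambda Q: decision_value(Q) - eps
c = 6.0

Q_at_cost = 4.0     # treatments demanded when p = c (solve 10 - Q = 6)
Q_at_zero = 10.0    # treatments demanded when p = 0 (solve 10 - Q = 0)

def integral(f, a, b, n=10_000):
    """Midpoint-rule integral; exact for these linear curves."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# welfare change from eliminating the copayment: integrate (benefit - cost)
# over the extra treatments, under each model's notion of benefit
w_standard = integral(lambda Q: decision_value(Q) - c, Q_at_cost, Q_at_zero)
w_behavioral = integral(lambda Q: marginal_benefit(Q) - c, Q_at_cost, Q_at_zero)
```

With these numbers the standard model scores eliminating the copayment as a welfare loss of 18, while the behavioral model scores the same policy as a welfare gain of 6: same data, opposite policy conclusions.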

In the behavioral model, the optimal copayment again balances moral hazard against financial risk protection, but the optimality condition acquires a second term on the right-hand side: a correction involving the behavioral wedge ε evaluated at s*, the illness severity of the patient who is just indifferent to treatment (see page 18 for details). The standard model is the special case in which ε = 0, so that the second term on the right-hand side disappears. Optimal insurance now depends on more than just the elasticity of demand and the value of financial risk reduction. Treatments with larger behavioral distortions (more negative values of ε) should have lower copayments, holding the elasticity and the value of risk reduction constant. Cold medication and beta blockers need not have the same copayment. Even if everyone were risk neutral, so that insurance provided no value through risk reduction, it would still be optimal to provide insurance, because insurance can correct the behavioral issues leading to inefficiently low levels of care. If behavioral issues are severe enough, it may even be optimal to force people to pay more than marginal cost, or to subsidize rather than charge for treatment.

The authors present an empirical illustration of how dramatically these effects can change standard results. Again consider the heart attack patients studied by Choudhry *et al* (2011). The standard model forces us to conclude that eliminating the copayment for heart attack drugs leads to extra costs of about $106 per patient and extra health benefits worth about $26 per patient. The incremental care provided when copayments are eliminated costs more than it’s worth; moral hazard reduces welfare by about $106 – $26 = $80 per patient. The standard model tells us to conclude that eliminating copayments is bad policy. The behavioral model, conversely, implies that the incremental care is worth roughly $3,000 per patient, not $26. According to the behavioral model, eliminating copayments is a very good policy.

What do these results imply for health care in Canada? One immediate implication is that frequently-proposed small copayments for necessary care may not be good policy. Usually, a large demand response to small copayments would be considered evidence that Canadians consume lots of care they don’t really need, that is, that moral hazard is prevalent. But we should also consider the possibility that people mistakenly forego high net benefit treatments due to behavioral bias. If we were to introduce copayments, we should do so selectively: charge people only for types of care with low health benefits or for which patients (or physicians) tend to overestimate health benefits.

Tagged: Behavioral economics, health economics, health insurance

**I. The issues.**

Following work such as Wilkinson and Pickett’s *The Spirit Level*, the notion that income inequality causes low health has become popular. For example, Paul Krugman recently noted in a blog post titled “Inequality Kills,”

We have lots of evidence that low socioeconomic status leads to higher mortality — even if you correct for things like availability of health insurance. Some of the effects may come through self-destructive behavior, some through simple increased stress; think about what it feels like in 21st-century America to be a worker without even a high school degree. In any case […] what we’re looking at is a clear demonstration of the fact that high inequality isn’t just unfair, it kills.

Income inequality and poor population health are correlated across countries, lending support to the idea that inequality does indeed kill. For example, the graph to the right, from *The Spirit Level*, shows a scatterplot of Gini coefficients against an index of health and social problems: more inequality is correlated with more problems. But such graphs, as we will see, are hard to interpret, and we cannot conclude from the type of correlation they display that inequality *per se* causes poor health.

Consider the ambiguity in Krugman’s argument above: is it *inequality*, as in the title, that leads to poor health, or is it *low socioeconomic status*, as in the body? These are clearly related mechanisms, but they are different mechanisms.

Suppose societies A and B have identical income distributions up to the 90th percentile, but A’s distribution in the top decile is more “stretched out,” that is, the relatively rich are richer still in society A. If low personal income causes low health, all else equal the bottom 90% of people in A and B will have the same health. If health is socially determined in the sense that relative deprivation matters in addition to absolute deprivation, then the bottom 90% in society A will experience worse health than in B because in society A the bottom 90% are relatively worse off compared to B. And if more income dispersion causes lower health for everyone, then the richest 10% in society A may *also* experience lower health than in B. For both policy and scientific reasons, it’s important that we discover whether a person’s health is determined by his income alone, or by both his income and the incomes of the other people in his society.

**II. Conceptualizing the relationship between income and health.**

The literature formalizes these issues as three paths from the distribution of income to a person’s health. First, a person’s income may cause that person’s health (the absolute income hypothesis). Health is only socially determined through this mechanism in the sense that every person’s income is socially determined, there is no further social effect holding individual income constant.

Second, a person’s income relative to other people in her reference group may cause her health (the relative income hypothesis). Finally, the dispersion of income in the society in which the person lives may cause her health, holding her income constant (the income inequality hypothesis). These mechanisms can be expressed:

- Absolute income hypothesis: h_i = f(y_i)
- Relative income hypothesis: h_i = g(y_i − y_R)
- Income inequality hypothesis: h_i = k(y_i, V(y))

where i indexes people, h_i is a measure of health, y_i is income, f, g, and k are unknown functions, y_R is the income of a reference person (such as the median or mean person’s income), and V(y) is the variance or other measure of dispersion of y across people. All three mechanisms may occur at the same time; they are not exclusive.

**III. Pragmatic problems with the idea that relative income matters.**

The relative income and the income inequality hypotheses are less plausible on their face than the absolute income hypothesis: it is easy to think of reasons why your income causes your health (even in the presence of “free” health care), but it is harder to think of reasons why my income causes your health, as in the relative income and income inequality hypotheses. Angus Deaton skeptically refers to the relative and inequality hypotheses as “action at a distance.”

Perhaps Deaton is overly skeptical, as animal studies and other evidence do lend support to the idea that low social position causes physiological changes which lead to poor health (e.g., the Whitehall studies; see Marmot et al 2001). More inequality may cause people low in the hierarchy to experience negative emotions such as stress and shame, which may directly cause low health and indirectly cause low health through behaviors such as substance abuse. However, we face a number of problems attempting to operationalize this notion, and in theory anything goes even if we assume this mechanism exists. Deaton, for example, asks us to consider these variants on the relative income hypothesis:

1. Your health depends on your rank in the social hierarchy.
2. Your health depends on the difference between your income and the richest person’s income.
3. Your health depends on the difference between your income and the poorest person’s income.

These all seem reasonable ways of modeling the notion that the social hierarchy affects health. Now consider the implications of a policy which reduces inequality without changing the ordering of incomes across people or changing mean income. Under 1, there is no effect at all on health, as we have not changed anyone’s rank in the hierarchy. Under 2, average health goes up, because the distance between the richest person’s income and a given person’s income falls. And under 3, average health goes down, as the distance between the poorest person’s income and others’ incomes falls.

Another pragmatic problem is determining appropriate reference groups. Do you compare yourself to other people in your town? Your country? Your occupation, or your age, or your ethnicity, or your friends, or some combination of all of these and many other characteristics? In theory, this is easy—models assume there are groups 1 through G and each agent i is assigned a group g(i). In practice, reference groups are nebulous, and we will generally get different statistical answers depending on how we define reference groups.

**IV. Aggregate data and the concavity effect.**

Many studies attempt to use aggregate data to get at the effect of inequality on health, yielding results such as those displayed in the scatterplot of health and Gini coefficients above. Discovering that countries with more inequality tend to have lower population health is often interpreted as evidence of social causation of health operating through stress, social cohesion, or other psychological consequences of position in the social hierarchy. However, that conclusion does not follow.

One reason we’ll observe inequality and low health move together even if only the absolute income hypothesis holds is called the “concavity effect.” Suppose that the effect of an extra dollar on health is positive but lower than the effect of the previous dollar, that is, that health is a concave function of individual income, as in the graph to the right. Then, holding mean income constant, increasing the dispersion of income in a society mechanically decreases average health. Intuitively, if we take a dollar from a rich person and give it to a poor person, average health goes up if an additional dollar increases a poor person’s health more than a rich person’s health. The concavity effect implies that studies of aggregate data cannot help us disentangle the absolute, relative, and inequality hypotheses.

The concavity effect is sometimes referred to as a statistical artifact because it generates correlation between population health and income inequality that only operates through the absolute income effect. However, it is important to note that this is the effect we have the most evidence on, the evidence mostly agrees, and the evidence tells us that redistribution, so long as it does not destroy too much average wealth, will increase average health. Put another way, *we do not have to believe that inequality per se causes stress or other mental or physical health issues to conclude that reducing poverty will increase population health.*
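The concavity effect is pure arithmetic, as a toy calculation shows (the square-root health function and the income numbers are my own illustrative assumptions):

```python
import math

# hypothetical concave relationship between own income and own health
health = lambda income: math.sqrt(income)

# two societies with the same mean income (50,000) but different dispersion
equal_society = [40_000, 60_000]
unequal_society = [20_000, 80_000]

average_health = lambda society: sum(health(y) for y in society) / len(society)
# the mean-preserving spread mechanically lowers average health,
# even though each person's health depends only on their own income
```

Average health is lower in the more unequal society despite identical mean income and the complete absence of any social mechanism, which is exactly Jensen’s inequality at work.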

**V. Evidence from disaggregated data.**

With data on individuals we can shed some light on the relationship between income inequality and health, holding personal income fixed. Many papers estimate models similar to, or special cases of, specifications such as,

h_ij = f(y_ij) + X_ij′β + γ·ȳ_j + δ·V_j + ε_ij,     (*)

where h_ij is the health of person i in country, region, or other reference group j, X_ij is a vector of individual and contextual characteristics, ȳ_j is mean income within the reference group, V_j is the variance or other measure of income dispersion in i’s reference group, f(y_ij) is some function of income, β, γ, and δ are parameters to be estimated, and ε_ij is an error term representing other causes of health. Sometimes, f is assumed to be linear, which means that curvature in the individual-level relationship may appear as a social effect. Usually, it is a quadratic or step function, and rarely no structure is imposed and the model is estimated using semiparametric methods (as in Jones and Wildman 2008). These papers typically use large, individual-level cross-sectional or repeated cross-sectional datasets with countries or regions within countries treated as reference groups; infrequently panels are used or reference groups are defined more narrowly, such as age-region cells.
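The warning about assuming a linear income term can be demonstrated directly: if individual health is a concave function of own income only, a linear-in-income regression can still assign a negative coefficient to group-level inequality. A minimal sketch (the square-root health function and all numbers are my own illustrative assumptions):

```python
import math
import numpy as np

# Health depends only on OWN income, concavely: no social effect by construction.
incomes = [49_000, 51_000,    # low-dispersion group,  within-group variance 1e6
           25_000, 75_000]    # high-dispersion group, within-group variance 625e6
dispersion = [1e6, 1e6, 625e6, 625e6]   # group income variance attached to each person
health = [math.sqrt(y) for y in incomes]

# Misspecified model: health = a + b*income + c*dispersion, linear in own income.
X = np.column_stack([np.ones(len(incomes)), incomes, dispersion])
a, b, c = np.linalg.lstsq(X, np.array(health), rcond=None)[0]
# c comes out negative: "inequality" matters in the regression purely because
# the true individual-level income-health relationship is concave
```

The fitted coefficient on the dispersion term is negative even though, by construction, group-level inequality has no causal effect on anyone’s health: the linear income term cannot absorb the curvature, and the group-level variable picks it up.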

The evidence from estimating such models provides at best weak support for the relative and inequality hypotheses. As opposed to results from aggregate models, which robustly find that higher inequality is associated with lower population health *without* controlling for absolute individual income, the signs of the estimated coefficients on inequality measures are very roughly equally split between negative and positive, and they are commonly statistically and substantively insignificant. These results lead some authors to draw conclusions such as “evidence favouring a negative correlation between income inequality and life expectancy has disappeared” (Mackenbach 2002) and “there seems to be little support for the idea that income inequality is a major, generalizable determinant of population health differences within or between rich countries” (Lynch et al 2004), whereas “the absolute income hypothesis… is still the most likely to explain the frequently observed strong association between population health and income inequality levels” (Wagstaff and van Doorslaer 2000).

**VI. Where is the literature headed?**

I’ll close by noting some of the remaining difficulties with this literature, challenges to be overcome in future research.

As we’ve seen, the literature to date largely attempts to estimate partial associations between health, personal income, and aspects of the distribution of income. Even ignoring the ambiguities and problems discussed above, we cannot interpret the resulting estimates as plausibly reflecting causal effects.

At the individual level it is very likely that health causes income as well as income causing health. The income–health gradient in part reflects the disadvantages unhealthy people face in the labor market: health and income are simultaneously determined. Further, countless personal and contextual factors may cause both health and income, so models such as those estimated in the literature typically suffer from both simultaneity bias and omitted variables bias (for example, many studies fail even to condition on education, which is an important cause of both health and income). I expect to see more efforts to pin down the effect of individual income on individual health, and to tie such efforts to the burgeoning literature examining health over the life cycle, particularly the long-term effects of childhood development (e.g., Cunha and Heckman 2007). There is some evidence that some of the correlation between health and absolute income is attributable to what is, in this framing, "reverse" causation from health to income (e.g., Boyce and Oswald 2011, Case and Paxson 2011). It's difficult to see how we can credibly estimate the effect of unequal societies on health without making further progress on the effect of a person's income on her health.
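The simultaneity point can be made with a toy simulation (purely illustrative numbers): generate a world in which health shifts income through a labour-market advantage, while income has no causal effect on health, and note that the regression slope of health on income is positive anyway.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Illustrative structural assumption: health raises income (labour-market
# advantage), while income has NO causal effect on health.
health = rng.normal(size=n)
income = 10.0 + 0.8 * health + rng.normal(size=n)

# OLS slope from regressing health on income: Cov(h, y) / Var(y).
C = np.cov(health, income)
slope = C[0, 1] / C[1, 1]
print(slope)  # positive, despite zero causal effect of income on health
```

The estimated slope converges to 0.8/(0.8² + 1) ≈ 0.49 here, entirely generated by the reverse channel.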

Omitted variables at the reference group (usually, regional) level are also a problem. In equation (*) above, the only reference-group-level variables are the mean and dispersion of income, so reference-group-level causes of health that are correlated with the distribution of income may generate partial correlations between income distribution and health even if the income distribution does not cause health. Deaton and Lubotsky (2003), for example, show that controlling for the proportion of black people at the regional level removes the association between inequality and mortality across U.S. cities. Which other demographic, policy, or institutional differences across regions cause both inequality and low health?
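This omitted-group-variable problem is easy to exhibit in a sketch (all numbers hypothetical): let a group-level confounder raise income dispersion and lower health, with no true effect of dispersion itself. The short regression then assigns a spurious negative coefficient to dispersion, which disappears once the confounder is controlled.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 100, 100
g = np.repeat(np.arange(n_groups), n_per)

# Hypothetical group-level confounder z (think: a demographic or policy
# variable) that raises income dispersion AND lowers health; by construction,
# dispersion itself has no effect on health.
z = rng.normal(size=n_groups)
disp_g = np.clip(1.0 + 0.5 * z + 0.3 * rng.normal(size=n_groups), 0.1, None)
income = rng.normal(10.0, disp_g[g])
health = 0.3 * income - 1.0 * z[g] + rng.normal(size=g.size)

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(g.size)
b_short = ols(np.column_stack([ones, income, disp_g[g]]), health)
b_long = ols(np.column_stack([ones, income, disp_g[g], z[g]]), health)
print(b_short[2], b_long[2])  # dispersion: spuriously negative, then near zero
```

The short regression loads z's negative health effect onto the correlated dispersion measure; adding z[g] as a control, in the spirit of Deaton and Lubotsky's race composition variable, removes it.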

A related issue for future research is opening the black box and figuring out exactly how income inequality affects health. For example, Drabo (2010) argues that his results imply that more unequal incomes reduce demand for environmental quality, lower environmental quality causes lower health, and after netting out this mechanism there is no further effect of inequality on health. More unequal incomes may lead to changes in a variety of prices, access to various goods and services, the type and quality of various public programs, and changes in various notions of social capital. Which regional characteristics mediate the effect of income inequality on health? Is there an additional effect of inequality *per se* on health after holding constant personal income and all of the social causes of health which may themselves result from more inequality? At the moment, we simply don’t know.

We have much yet to learn about the effects of the distribution of income on health, and even about the simpler question of the effect of individual income on health.

Tagged: econometrics, health, income inequality

No, I don’t know either.

Adamatzky, A. (2012) The World’s Colonisation and Trade Routes Formation as Imitated by Slime Mould, arXiv:1209.3958.

The plasmodium of Physarum polycephalum is renowned for spanning sources of nutrients with networks of protoplasmic tubes. The networks transport nutrients and metabolites across the plasmodium’s body. To imitate a hypothetical colonisation of the world and formation of major transportation routes we cut continents from agar plates arranged in Petri dishes or on the surface of a three-dimensional globe, represent positions of selected metropolitan areas with oat flakes and inoculate the plasmodium in one of the metropolitan areas. The plasmodium propagates towards the sources of nutrients, spans them with its network of protoplasmic tubes and even crosses bare substrate between the continents. From the laboratory experiments we derive weighted Physarum graphs, analyse their structure, compare them with the basic proximity graphs and generalised graphs derived from the Silk Road and the Asia Highway networks.


I won’t go into a long critique, but currently nature and nature’s services – cleansing, filtering water, creating the atmosphere, taking carbon out of the air, putting oxygen back in, preventing erosion, pollinating flowering plants – perform dozens of services to keep the planet happening.

But economists call this an ‘externality.’ What that means is “We don’t give a shit.” It’s not economic. Because they’re so impressed with humans, human productivity and human creativity at the heart of this economic system. Well, you can’t have an economy if you don’t have nature and nature’s services, but economics ignores that. And that’s an unbelievably egregious error.

(emphasis added). I agree someone’s made an “unbelievably egregious error,” and repeated it countless times to countless people.

David Suzuki owes the community of economists, and his audiences, an apology and an unequivocal retraction.

Tagged: David Suzuki, externality

For those interested in what economists actually think about the environment, and what an “externality” actually is, see for example “How do economists really think about the environment?” Or open any Economics 101 textbook.

Tagged: David Suzuki, externality