Goodman # 3, by Lucas Goodman<br /><br /><b>The Effect of the Affordable Care Act Medicaid Expansion on Migration</b> (November 30, 2016)<br /><br />I have a new paper out in JPAM: "<a href="http://onlinelibrary.wiley.com/doi/10.1002/pam.21952/abstract">The Effect of the Affordable Care Act Medicaid Expansion on Migration</a>."<br /><br />Here's the abstract:<br /><blockquote class="tr_bq">The expansion of Medicaid to low-income nondisabled adults is a key component of the Affordable Care Act's strategy to increase health insurance coverage, but many states have chosen not to take up the expansion. As a result, for many low-income adults, there has been stark variation across states in access to Medicaid since the expansions took effect in 2014. This study investigates whether individuals migrate in order to gain access to these benefits. Using an empirical model in the spirit of a difference-in-differences, this study finds that migration from non-expansion states to expansion states did not increase in 2014 relative to migration in the reverse direction. The estimates are sufficiently precise to rule out a migration effect that would meaningfully affect the number of enrollees in expansion states, which suggests that Medicaid expansion decisions do not impose a meaningful fiscal externality on other states. </blockquote>This paper gets at the heart of a classic topic in economics: the optimal division of roles between federal, state, and local governments -- known as fiscal federalism. On the one hand, assigning greater responsibility to the state or local level can help better align policy with local preferences. On the other hand, when one locality can exert an externality on another locality, decentralization can create inefficiency. 
Migration--especially migration in response to state-level means-tested benefits--can be a major source of externalities in this context: if a cut in means-tested welfare benefits in one state leads to migration of beneficiaries from that state to another, states might tend to engage in a “race to the bottom” which would not be optimal when viewed nationally. <br /><br /><a name='more'></a><br />The 2014 Medicaid expansion in the ACA is an unusually rich setting in which to study “welfare migration” (also known as “welfare magnetism”). From a methodological perspective, the expansion of Medicaid to roughly half the country but not the other half creates very large variation in access to health care -- much larger variation than is typically studied in this literature. From an immediate policy perspective, migration responses were often cited by non-expanding states as a reason why they should not expand. Even if the state expenditures on newly eligible beneficiaries were small (due to the 90% long-run federal match), policymakers often argued that an influx of Medicaid-eligible individuals would cause expenses on <em>other </em>programs, such as education, to grow in excess of the associated growth of the tax base. Is the feared migration response evident in the data?<br /><br />To get at this question, I use <a href="https://usa.ipums.org/usa/">public use data</a> from the American Community Survey (ACS) through 2014. The large sample size of the ACS allows me to examine a subgroup of low-income individuals -- in particular, those whose reported income places them below the cutoff for Medicaid eligibility in most expansion states (138% of poverty). This data set isn't perfect, of course. I'd prefer it if ACS interviews were performed all at once -- ideally toward the end of the year -- rather than on a rolling basis. Furthermore, income can be endogenous to migration decisions. 
Read the paper for how I handle these issues.<br /><br />Given this sample, I perform a difference-in-differences analysis, with a subtle twist. One dimension of the difference-in-differences is time, with 2014 as the "post" period. The other dimension is direction of migration flow: migration <i>from </i>non-expansion states <i>to </i>expansion states, versus migration in the opposite direction. In other words, I examine whether non-expansion-to-expansion migration increases in 2014, relative to the change in expansion-to-non-expansion migration in 2014. Here's the twist: In this difference-in-differences, non-expansion-to-expansion migration plays the role of the treated group, and expansion-to-non-expansion migration plays the role of the control group. However, both directions are plausibly "treated." Most obviously, the Medicaid expansion could increase non-expansion-to-expansion migration as people migrate in order to gain eligibility. But the expansion could also reduce migration in the opposite direction; e.g., individuals who would otherwise have migrated from an expansion state to a non-expansion state decide not to for fear of losing coverage. Both of these effects push the estimates I get in the same direction. So, this empirical strategy is a test for whether at least one of these effects exists.<br /><br />Spoiler: neither effect appears to exist.<br /><br />Figure 2 from the paper, reproduced below, shows the results in graph form. The blue dotted line shows the migration rate from expansion states to non-expansion states, expressed as the percent of migrants relative to the size of the initial subgroup population. The green solid line shows the migration rate from non-expansion states to expansion states -- this is the flow we would expect to see grow in 2014 (relative to the opposite flow) if Medicaid caused migration. 
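In code terms, the comparison boils down to a simple two-by-two difference-in-differences. Here is a toy sketch (Python; the migration rates below are invented for illustration and are not the paper's estimates):

```python
# Toy two-directional difference-in-differences. The "treated" flow is
# migration toward expansion states; the reverse flow serves as the control.
# All rates here are hypothetical, in percent of the subgroup population.
rates = {
    "non_expansion_to_expansion": {"pre": 0.55, "2014": 0.56},  # treated flow
    "expansion_to_non_expansion": {"pre": 0.50, "2014": 0.51},  # control flow
}

treated = rates["non_expansion_to_expansion"]
control = rates["expansion_to_non_expansion"]

# A positive estimate would indicate that migration toward expansion states
# rose differentially in 2014; zero is consistent with no welfare migration.
did = (treated["2014"] - treated["pre"]) - (control["2014"] - control["pre"])
print(abs(did) < 1e-9)  # True: both flows rose by the same amount
```

The actual estimation uses individual-level ACS data with covariates, but the estimand has this same structure.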
Since about 2008 (when I start my estimation sample), the trends in migration in these two directions were fairly parallel, which is reassuring. And, in 2014, these parallel trends appear to continue. Thus, visually at least, this strategy finds no effect of the expansion on migration. The regression estimates, which you can read about in the paper, confirm this null result. In fact, I show that the null estimate is sufficiently precise to rule out a 2% migration-induced increase in the Medicaid eligible population in expansion states, even under very aggressive assumptions. This suggests that the fiscal externality from expanding Medicaid is quite small. Additionally, in the paper, I restrict the sample to individuals who live close to the border of an expansion state and a non-expansion state; the estimates get noisier but remain consistent with a null effect.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-5gUxIpvVN7g/WD35fySiQQI/AAAAAAAABDM/tj3Q6vCToNET9UfF3bNfkPi_tPziIswDACLcB/s1600/maintreat_1_0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="464" src="https://1.bp.blogspot.com/-5gUxIpvVN7g/WD35fySiQQI/AAAAAAAABDM/tj3Q6vCToNET9UfF3bNfkPi_tPziIswDACLcB/s640/maintreat_1_0.png" width="640" /></a></div><br /><br />So, why aren't people migrating? There are several possible explanations. Most obviously, moving might be quite costly relative to the perceived benefit of Medicaid. This would be consistent with the work of <a href="http://www.nber.org/papers/w21308">Finkelstein, Hendren, and Luttmer (2015)</a>, based on the Oregon Health Insurance Experiment. It could also be that individuals don't expect to stay on Medicaid for an extended period of time, and thus the cumulative value of Medicaid coverage is fairly small. 
<br /><br />Another explanation could be that the time horizon of the 2014 ACS is too short; it might take time for individuals to learn about expanded coverage in other states, and it may take time for individuals to actually move. In the paper, I punt on this, since the 2014 ACS was the latest data available.<br /><br />Fortunately, the 2015 ACS has recently been released. The following figure is a version of Figure 2, extended to 2015. (Some states expanded Medicaid between 2014 and 2015. This figure drops those states, and disregards moves made into those states, so the graph through 2014 will look a bit different from Figure 2.) This figure shows that, if anything, non-expansion-to-expansion migration <i>fell </i>somewhat in 2015 relative to migration in the opposite direction -- an effect with the opposite sign from what one would expect if Medicaid expansions caused a delayed migration effect in 2015. Note that I have spent significantly less time analyzing the 2015 ACS than I spent on the analyses in the actual paper, so take this result with a grain of salt. 
Nevertheless, the 2015 ACS appears to confirm the results obtained using the 2014 ACS: the ACA Medicaid expansions did not seem to induce migration.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-UlUvYjr5_CY/WD3_AZASQjI/AAAAAAAABDs/lgC9pdKJKOoudARww8QOgJ07WeDPIZCdwCLcB/s1600/results_with_2015.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="464" src="https://3.bp.blogspot.com/-UlUvYjr5_CY/WD3_AZASQjI/AAAAAAAABDs/lgC9pdKJKOoudARww8QOgJ07WeDPIZCdwCLcB/s640/results_with_2015.png" width="640" /></a></div><i>Full disclosure: Some of this post is lifted from an <a href="http://www.appam.org/jpam-featured-article-the-effect-of-the-affordable-care-act-medicaid-expansion-on-migration/">earlier post</a> I made on JPAM's website.</i><br /><br /><b>Is the new Health Inequality paper in JAMA driven by the exclusion of zero earners? Probably not.</b> (April 12, 2016)<br /><br />A <a href="http://jama.jamanetwork.com/article.aspx?articleid=2513561">new paper</a> in JAMA by Chetty, et al., is getting a lot of press, and rightly so. Using tax and Social Security data, the authors are able to calculate life expectancy, separately by income level and geography. One particularly interesting set of results is the geographic distribution of life expectancy of low earners.<a name='more'></a><div><br /></div><div>One shortcoming, however, is that the authors cannot calculate life expectancy for zero earners. The Social Security records (which they use to calculate mortality rates) do not reliably report the deaths of zero earners, since the Social Security records cover only deaths of U.S. residents, and many zero earners are not U.S. residents (e.g., they earned income in the U.S. 
once, giving them a Social Security Number or Tax Identification Number, but they no longer live --- or never lived --- in the U.S.). To address this problem, the authors simply drop all zero earners.</div><div><br /></div><div>The concern, then, is that the composition of the group of "low earners" might vary systematically with local policies or labor market conditions that encourage marginally attached individuals to earn positive income. And, these differences in composition could be driving geographical differences in the life expectancy of low earners. To take a specific example, suppose that a geographic area has a particularly effective job training policy to help displaced workers find re-employment (perhaps not the most empirically relevant example, but it works for didactic purposes). Then, the set of individuals in the lower quartile of the positive earnings distribution might have worse unobservables, and thus worse life expectancy. This source of geographic variation in life expectancy for the working poor would be far less interesting than the geographic variation due to factors orthogonal to composition (e.g., health policy and health habits), which the authors are intending to uncover.<br /><div><br />To explore this in a quick-and-dirty way, I used the 2005-2014 American Community Survey to examine the geographic correlation between (1) the fraction of adults (40 to 61) with positive family income and (2) life expectancy of the lowest quartile in a given commuting zone (using the data that the authors provide at healthinequality.org). 
If the authors' results were driven by this composition bias, we might expect this correlation to be negative: more workers means the bottom end of the distribution of workers might have worse unobservables and thus worse life expectancy.<br /><br />To be a little bit more precise: I constructed a crosswalk from (1) the public use microdata area (PUMA) available in the ACS public use files from <a href="https://usa.ipums.org/usa/">IPUMS </a>to (2) commuting zones, with some help from the county-to-commuting-zone crosswalk that the authors provide on healthinequality.org, as well as the PUMA-to-county crosswalks constructed using the tools at the <a href="http://mcdc.missouri.edu/">Missouri Census Data Center</a>. This allows me to assign individuals from the 2005-2014 ACS to a given commuting zone (probabilistically in some cases when PUMAs don't map cleanly into commuting zones).<br /><br />Then, I regress a dummy for having positive family earned income in the last 12 months (family being a subset of household) on (1) race/ethnicity dummies (black, asian, hispanic), (2) year dummies, and (3) a set of commuting zone fixed effects. I save the commuting zone fixed effects and consider that my variable of interest.<br /><br />Then, I regress the 1st-quartile, race-adjusted life expectancy (averaged for men and women) on those commuting zone fixed effects. I get a positive coefficient, significant at the 1% level (though a more thorough analysis would cluster the standard errors in some way --- the best way to do this is not immediately obvious, since commuting zones don't map cleanly into states). 
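To make the two-step procedure concrete, here is a minimal sketch of the same idea on synthetic data (Python with numpy; the sample sizes, coefficients, and data-generating process are all made up for illustration, and I omit the race dummies and the probabilistic PUMA assignment):

```python
# A hedged sketch of the two-step fixed-effects procedure on synthetic data.
# Everything here (sizes, effects, noise) is hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_cz, n_per_cz, n_years = 50, 400, 3

# True commuting-zone propensities to have positive family income
cz_fe_true = rng.normal(0.7, 0.1, n_cz)

cz = np.repeat(np.arange(n_cz), n_per_cz)   # individual -> commuting zone
year = rng.integers(0, n_years, cz.size)    # survey year
year_effect = np.array([0.0, 0.02, -0.01])
p = np.clip(cz_fe_true[cz] + year_effect[year], 0, 1)
pos_income = rng.binomial(1, p)             # the positive-income dummy

# First stage: regress the dummy on year dummies plus a full set of
# commuting-zone dummies (no constant, so the CZ coefficients are the FEs)
X = np.column_stack(
    [(year == t).astype(float) for t in range(1, n_years)]
    + [(cz == j).astype(float) for j in range(n_cz)]
)
coef, *_ = np.linalg.lstsq(X, pos_income, rcond=None)
cz_fe_hat = coef[n_years - 1:]              # estimated CZ fixed effects

# Second stage: regress CZ-level life expectancy on the estimated fixed effects
life_exp = 70 + 8 * cz_fe_true + rng.normal(0, 0.3, n_cz)
Z = np.column_stack([np.ones(n_cz), cz_fe_hat])
slope = np.linalg.lstsq(Z, life_exp, rcond=None)[0][1]
print(round(slope, 2))  # positive; attenuated slightly toward zero by first-stage noise
```

Note the second-stage slope is attenuated by sampling error in the estimated fixed effects, which is one more reason a thorough version of this exercise would need more care with inference.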
One simple story for this positive correlation is that places with strong labor demand (so a higher proportion of workers) have better outcomes, and this correlation is large enough to overcome any negative composition effects.<br /><br />In any case, this is the <i>opposite </i>of the sign that one would expect if the JAMA result were driven entirely by composition effects. This leads me to conclude that the exclusion of zero earners is NOT driving the results, and serves as evidence that the geographic variation described in the JAMA paper is a true effect.<br /><br /></div></div><br /><br /><b>What's most wrong with the corporate tax?</b> (February 1, 2016)<br /><br />The corporate tax is the tax that economists most love to hate. It discourages capital formation, repels profitable investment, and causes untold dead weight loss in the form of tax lawyers' labor. And while the U.S. is generally a low-tax country, its (statutory) corporate tax rate is anomalously the highest in the developed world. Thus, unsurprisingly, corporate tax reform is perennially at the forefront of the agenda for tax economists in Washington, DC.<br /><a name='more'></a><br /><br />One solution, of course, would be to get rid of the corporate tax entirely (and maybe throw in some anti-abuse rules to prevent corporations from becoming vehicles to accumulate investments tax-free). While the corporate income tax raises far less revenue than the payroll or individual income tax, it still raises about 10% of total federal revenue, and it does so in a highly progressive way, since most of the <a href="http://www.taxpolicycenter.org/uploadedpdf/412651-tax-model-corporate-tax-incidence.pdf">incidence</a> of the corporate tax falls on capital owners. 
It would be quite difficult to replace that revenue in as progressive a manner as the corporate tax.<br /><br />Of course, short of repealing the corporate tax (or cutting the tax rate dramatically), there are many parameters of the corporate tax code that could be reformed --- even in a revenue-neutral way. The biggest challenge is that there are just <i>so many </i>problems caused by the corporate tax, and they all interact with each other in complicated ways. It is hard to fix one problem without exacerbating another. To illustrate, here is a non-exhaustive list of problems and distortions caused by the corporate tax:<br /><br /><br /><ol><li><b>Neoclassical effect on capital formation</b>: In the simplest model of investment, corporations invest until the marginal dollar of investment will earn some required rate of return <b>r</b>. The presence of taxes raises that required rate of return (since the owners of the corporation don't receive the entire pre-tax return), and so corporations invest less. One way to fix this is to move toward a consumption-type corporate tax, in which corporations can immediately expense (deduct) all investment, rather than slowly over the course of many years. For a marginal investment, the value of the immediate deduction would precisely offset the present value of the future stream of taxes the corporation would pay on the returns of that investment --- leading to an effective marginal tax rate of zero. But, such a change would reduce tax revenue; to recover that revenue, the statutory rate would need to be <i>increased</i>.</li><li><b>Shifting the location of discrete investment</b>: Of course, not all production occurs via a well-behaved, concave production function; one full factory can probably produce more than 4 times what 1/4 of a factory can. 
These types of investment projects will typically be profitable (even net of the opportunity cost of capital) at whatever tax rate we're thinking about, so the amount of the investment won't be heavily influenced by the tax rate. Yet, the location decision might be highly responsive to taxes --- in particular, to the <i>average </i>tax rate that applies to the project in question. For highly profitable projects, the average tax rate is reasonably well approximated by the statutory tax rate (unless some lower tax rate applies to the marginal dollar of profit, which is true in the case of manufacturing industries in the United States). So the way to address this issue is to lower the top rate --- perhaps making up for the revenue loss by broadening the base in ways that actually increase the effective <i>marginal</i> tax rate. Already, we can see how the goals of corporate tax reform conflict with each other.</li><li><b>Profit shifting</b>: This is a similar phenomenon to (2), but they should be kept separate in our heads. "Profit-shifting" involves the shifting of profits on paper, but not in any meaningful economic sense, to a lower-tax country. It's actually not immediately obvious that this is a "problem." Suppose that all companies could costlessly shift exactly half their profits (and cannot shift a cent more) to zero-tax Bermuda, such that they actually face a 17.5% U.S. corporate tax rate. While this causes a revenue loss to the U.S., it just functions as a corporate tax cut, and we should evaluate it in the same way that we would an explicit change in rates. In my view, the difference between reality and my absurdly simplified example is that (a) profit shifting is not actually costless and (b) some firms can shift profits more easily than others. Problem (a) creates dead weight loss, equal to the amount of effort spent to shift profits. 
Problem (b) creates horizontal inequity, and can distort the allocation of resources if profit-shifting causes a change in effective tax rates on the margin for some firms (and not others). The intuition from point (2) still holds: a lower average tax rate on profitable discrete investments --- e.g., patents or other intangible property --- reduces the incentive to profit-shift. Of course, there are lots of other tax parameters that are designed to affect profit-shifting --- most of which are beyond my pay-grade. But changing many of these parameters would affect the average and marginal tax rates facing <i>real </i>investment too --- and it's not obvious that we want to do that.</li><li><b>Miscellaneous other distortions: </b></li><ol><li>Debt vs. equity: Interest payments by corporations are generally deductible against corporate tax, while dividends are not. (Dividends are taxed at the individual level less heavily than interest payments, but the former effect dominates.) As a result, debt-financed investment is more tax-favored than equity-financed investment. One way to address this would be to eliminate (or reduce) the deductibility of corporate interest expense. But if debt is the marginal source of investment funds for a corporation, this would have the effect of increasing the effective marginal tax rate on investment --- exacerbating problem (1).</li><li>Corporate vs. non-corporate: The corporate tax applies only to C corporations, not to S corporations, partnerships (including LLCs that choose to be taxed as such), nor sole proprietorships; the "pass-through" tax regime that applies to these other forms is generally more favorable. This could lead businesses to choose a different entity form. As with profit shifting, this creates dead weight loss if (a) businesses change organizational forms due to tax policy and this has a real cost, or if (b) resources shift to pass-through firms (since pass-through status may be correlated with industry, size, etc.). 
</li><li>Et cetera: payout policy (dividends versus share repurchases), forms of merger/acquisition (e.g., purchase of stock or assets, with stock or cash). </li></ol></ol><div>In most discussions of corporate tax reform, I fail to see a discussion of <i>which </i>of these goals a given reform is most designed to address, and how it would affect the other goals. I also haven't seen a good discussion weighing the importance of each of these elements (though I may not be looking in the right place). Without some sense of which problems we should be trying hardest to solve, it's very hard to come up with a solution that makes sense.<br /><br />Here's one very nice exception to this: <a href="http://www.econstor.eu/bitstream/10419/105154/1/cesifo_wp5101.pdf">this paper</a> by Peter Sorensen, evaluating the tradeoff between reducing the debt / equity distortion and reducing the marginal cost of capital. </div><br /><br /><b>Some Simple Economics of PrEP</b> (December 7, 2015)<br /><br />There's a new strategy in the fight against HIV, especially among men who have sex with men (MSM): a drug called Truvada, also referred to as PrEP (for pre-exposure prophylaxis). Early reports suggest that taking Truvada can nearly eliminate the risk of an HIV-negative individual acquiring HIV. PrEP has been somewhat controversial, since one might reasonably conjecture that reducing the risk of unprotected sex will increase the prevalence of unprotected sex.<br /><br /><a name='more'></a><br /><br />This is, essentially, the same argument that wearing a bicycle helmet will cause you to <a href="http://www.nytimes.com/2001/07/29/business/a-bicycling-mystery-head-injuries-piling-up.html?pagewanted=all">bike more recklessly</a>. 
While this response may be true, wearing a bicycle helmet is weakly better for a rational rider (abstracting away from the cost of the helmet, unpleasantness of wearing it, etc.) based on a revealed preference argument. "Ride with the same prudence I would without a helmet" is in the choice set of the helmet-wearer; if he chooses something else, that something else must be at least as good as riding carefully.<br /><br />With the issue of PrEP, though, this analysis is incomplete because of externalities. If an individual engages in riskier sexual behavior, he puts others at risk as well. The HIV externality of PrEP could be negative only if the patient responded to PrEP by increasing his practice of risky sexual behaviors so much that he is <i>more </i>likely to spread HIV to others. This is unlikely, given that PrEP seems to be very effective.<br /><br />But of course, HIV is not the only sexually-transmitted infection (STI). PrEP does not protect against gonorrhea, chlamydia, or syphilis. Thus, an increase in unprotected sex, triggered by an increase in PrEP usage, <a href="http://www.towleroad.com/2015/11/cdc-sti-rates/">could increase the prevalence of these STIs</a>. <br /><br />So, what can we say about the social optimality of PrEP usage?<br /><br />Let's consider a simple model where a representative, atomistic man is choosing <b>x</b>, the amount of unprotected sex to have. (To keep things simple, I'm not distinguishing between abstinence and "safe" sex, and I will model <b>x</b> as continuous.) He has utility over unprotected sex <b>u(x)</b> which is well-behaved, concave, and increasing over some region, but need not be increasing everywhere. He incurs cost <b>c<sup>H</sup></b> every time he is diagnosed with HIV and <b>c<sup>S</sup></b> every time he is diagnosed with some other STI. (This is to avoid having to deal with exponential distributions.) 
Assume that each unit of unprotected sex is associated with a probability <b>p<sup>H</sup></b> of an HIV diagnosis and <b>p<sup>S</sup></b> of a diagnosis of some other STI. Furthermore, suppose <b>p<sup>H</sup></b> and <b>p<sup>S</sup></b> depend on the share of individuals with HIV or other STIs, <b>θ<sup>H</sup></b> and <b>θ<sup>S</sup></b> respectively. Lastly, suppose that for each agent, <b>p<sup>H</sup></b> is also increasing (in a differentiable way) in some parameter <b>t<sub>i</sub></b>; I will interpret PrEP as a reduction in <b>t<sub>i</sub></b>. Thus, his utility is:<br /><br /><b>u(x) - x[p<sup>H</sup>(θ<sup>H</sup>,t<sub>i</sub>)c<sup>H</sup> + p<sup>S</sup>(θ<sup>S</sup>)c<sup>S</sup>]</b><br /><br />What is the effect of a reduction in <b>t<sub>i</sub></b> --- that is, a reduction in this person's risk of acquiring HIV --- on welfare? By the envelope theorem, it's just <b>x*c<sup>H</sup>[dp<sup>H</sup>/dt<sub>i</sub>]</b>, where <b>x*</b> is the (privately) optimal choice of <b>x</b>. This expression is unambiguously positive (assuming <b>x*</b> is positive). This is a special case of the bike-helmets-are-good result, in the case of a small change and well-behaved utility: whatever adjustments are made by the agent have no first-order effect on utility and only the direct effect matters.<br /><br />Now, suppose that <b>t<sub>i</sub></b> is actually just <b>t</b> --- that is, what happens when everyone's <b>t</b> falls? The envelope theorem still holds, with the important modification that <b>θ<sup>H</sup></b> and <b>θ<sup>S</sup></b> now change when <b>t</b> changes. 
The effect of a decrease in t is:<br /><br /><b>x*[c<sup>H</sup>(dp<sup>H</sup>/dt + dp<sup>H</sup>/dθ<sup>H</sup> * dθ<sup>H</sup>/dt) + c<sup>S</sup> dp<sup>S</sup>/dθ<sup>S</sup> * dθ<sup>S</sup>/dt]</b><br /><br />The first term is positive, representing the reduced risk of HIV acquisition: both from the direct effect of PrEP and from the herd immunity effect of a reduced HIV share in the community (I am assuming, not deducing, the sign of this herd immunity effect). The second term is negative. <b>dθ<sup>S</sup>/dt</b> is how the prevalence of other STIs changes when HIV transmission becomes riskier: it is probably negative (so a decrease in <b>t</b> would increase other STIs), and <b>dp<sup>S</sup>/dθ<sup>S</sup></b> is positive.<br /><br />So, in the end, we have two offsetting forces: the benefits of the reduction in HIV transmission risk versus the cost of increasing other STIs. The sign is ambiguous, and the "helmets-are-good" result no longer holds unambiguously in the presence of these externalities.<br /><br />What can we say about the magnitudes of these effects? My guess is that <b>c<sup>H</sup> >> c<sup>S</sup></b>, but <b>dθ<sup>S</sup>/dt</b> and <b>dp<sup>S</sup>/dθ<sup>S</sup></b> could potentially be quite large. It is not clear which term wins the horse race.<br /><br /><b>More on Miller & Sanjurjo</b> (October 27, 2015)<br /><br /><b>Edit: Jonathan Miller was nice enough to explain my error in interpreting their claim. 
See below.</b> <br /><br />I wrote last week about Miller & Sanjurjo (2015), a working paper which shows how taking unweighted averages of ratios of conditional proportions of success (conditional on previous success) can lead to a biased estimate of the true conditional probability. I then claimed that this result does <i>not </i>extend meaningfully to the context that they're trying to extend it to: the "hot hand" in basketball, particularly <a href="http://wexler.free.fr/library/files/gilovich%20%281985%29%20the%20hot%20hand%20in%20basketball.%20on%20the%20misperception%20of%20random%20sequences.pdf">Gilovich, et al. (1985).</a><br /><br />Various people smarter than me, notably <a href="http://andrewgelman.com/2015/10/18/explaining-to-gilovich-about-the-hot-hand/">Andrew Gelman</a>, disagree. They think that the Miller & Sanjurjo critique matters even for the sample sizes considered by Gilovich et al.<br /><br /><a name='more'></a><br /><br />This question is easily answerable with some Monte Carlo simulations. In particular, I'll set my sample size to 248, which is the minimum number of shots recorded by any player in the Gilovich study (see their Table 1). For the sake of brevity, I set the true probability equal to 1/2. 
Basically, I repeat the 248-shot trial a large number (10,000) of times and take the unweighted mean of the empirical conditional successes across trials.<br /><br />The Stata code is pretty simple:<br /><blockquote>clear<br />set more off<br />set seed 12304<br />global obsmax = 248<br /><br />tempfile tofill<br /><br />forval j = 1/10000 {<br /> clear<br /> qui set obs $obsmax<br /> gen heads = runiform()>0.5<br /> gen success = heads==1 & heads[_n-1]==1 // the numerator<br /> gen elig = heads==1 & _n!=$obsmax // the denominator<br /> <br /> collapse (sum) success elig<br /> gen trial = `j'<br /> gen ratio = success/elig<br /> if `j'>1 {<br /> append using `tofill'<br /> }<br /> qui save `tofill', replace<br /> }<br /><br />mean ratio</blockquote> We're interested in the mean of the ratio, taken across trials. I get 0.4974, with a standard error of 0.0005 (which accounts for the fact that 10,000 is less than infinity), relative to the true conditional probability of 0.5. So, the bias is of the order of 0.0025 (or 0.0035 at the bottom of the 95% confidence interval), not 0.02 as Gelman hypothesized. In other words, the Miller & Sanjurjo effect is at most a second-order concern for Gilovich et al.<br /><br />Or am I missing something?<br /><br />P.S., as a sanity check, when I run this code with $obsmax = 4, I get a mean ratio of about 0.4, as predicted by Miller & Sanjurjo.<br /><br /><b>Edit: Yes, I was missing something! Miller & Sanjurjo's claim with respect to Gilovich et al. was not about "Study 2" of Gilovich (reported in Table 1), but "Study 4" (reported in Table 4). Study 2 involved actual shots by 76ers players during a large number of games, but there are reasons to doubt those results even without the Miller & Sanjurjo sampling bias. "Study 4" is a much purer study, involving Cornell basketball players as part of a controlled experiment. 
In this study, the typical sample size was 100, not 250.</b><br /><br /><b>Furthermore, Gilovich et al. were examining P(hit | 3 misses) relative to P(hit | 3 hits). Conditioning on <i>three </i>hits (or misses), rather than just one, exacerbates the Miller & Sanjurjo bias. When I edit the code above to use a sample size of 100, and condition on three hits instead of one, I get an average probability of 0.461, so the bias is on the order of 0.04 (or 0.08, if we're comparing it to P(hit | 3 misses)), which is certainly nontrivial.</b><br /><br /><b>Many thanks to Jonathan Miller for <a href="http://andrewgelman.com/2015/10/18/explaining-to-gilovich-about-the-hot-hand/#comment-249189">pointing this out</a> to me.</b><br /><br /><b>Are coin flips memoryless?</b> (October 20, 2015)<br /><br />There's a <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627354">working paper</a> going around by Miller and Sanjurjo, cited in a <i>New York Times </i><a href="http://www.nytimes.com/2015/10/18/sunday-review/gamblers-scientists-and-the-mysterious-hot-hand.html?_r=0">article</a>, that seems to be arguing the impossible: that, in a sequence of flips of a fair coin, the probability of flipping heads is <i>smaller </i>than 1/2 if the previous flip was heads.<br /><br />The working paper argues that this is relevant to the "hot hand" debate. E.g., is a basketball player more likely to hit his next shot if he hit his previous shot? 
The seminal paper in this literature, <a href="http://wexler.free.fr/library/files/gilovich%20(1985)%20the%20hot%20hand%20in%20basketball.%20on%20the%20misperception%20of%20random%20sequences.pdf">Gilovich, Vallone, and Tversky (1985)</a>, found that the conditional probability of success given previous success was close to the unconditional probability of success, concluding that each shot was roughly independent. But if the laws of probability as we know them are wrong, and independence would somehow imply a <i>decline </i>in the conditional probability of success given previous success, then a finding of conditional probability equal to unconditional would actually be evidence <i>in favor </i>of the hot hand hypothesis.<br /><br />This claim, for lack of a better word, appears to be wrong.<br /><br /><b>Edit: See my <a href="http://goodman-number-3.blogspot.com/2015/10/more-on-miller-sanjurjo.html">most recent entry</a> for why I was misunderstanding Miller & Sanjurjo's claim with respect to the Gilovich, et al. study. Basically, I was looking at the wrong part of the Gilovich paper! My exposition of the Miller & Sanjurjo result is still valid, though.</b> <br /><br /><a name='more'></a><br /><br />The thought experiment that Miller and Sanjurjo are considering is as follows. Flip a fair coin <b>S </b>times, where <b>S </b>is relatively small (e.g., four). Calculate the empirical conditional probability of heads, given that the previous flip was heads. Call this empirical conditional probability <b>P<sub>t</sub></b>. Then, repeat this trial (i.e., all <b>S</b> flips) a large number <b>N </b>times. Compute the average of <b>P</b><sub><b>t</b></sub> across each <i>trial</i>. They show that this average of <b>P</b><sub><b>t</b></sub> is less than the unconditional probability of heads.<br /><br />What's going on here? Basically, it's all about how they weight this average.<br /><br />Suppose we were to weight each "eligible" flip equally, across all trials. 
(By "eligible," I mean that the previous flip was heads.) In other words, you define your denominator as the number of eligible flips across all <b>N*S </b>total flips, and you define your numerator as the number of heads that were realized in those eligible flips. That ratio will converge in probability to the conditional probability of heads as <b>N*S </b>goes to infinity, which remains equal to the unconditional probability because the flips are independent. Everything still works.<br /><br />When we weight by <i>trial</i>, by contrast, we're not weighting each eligible flip equally. Suppose <b>S = 4</b>. Say that <b>Trial A </b>has four heads, and <b>Trial B</b> has one heads (and suppose this heads was one of the first three flips). The empirical conditional probability for <b>Trial A</b> is one. The conditional probability for <b>Trial B </b>is zero. The average across these two trials is 0.5. But the average across eligible flips is larger. <b>Trial A </b>has three eligible flips (all except for the first), and <b>Trial B </b>has one eligible flip. In these eligible flips, <b>Trial A</b> had three successes and <b>Trial B </b>had zero. The flip-weighted average is 3/4, larger than the trial-weighted average of 1/2.<br /><br />More generally, when there are fewer eligible flips in a trial (i.e. fewer heads), those eligible flips are overweighted in a trial-weighted average relative to a flip-weighted average. And since the number of eligible flips will be correlated positively with the probability of heads given previous heads --- they're both closely related to the number of heads in the trial! -- the trial-weighted average is biased downward relative to the flip-weighted average. Since the flip-weighted average consistently estimates the unconditional probability, the trial-weighted average will underestimate it. That's all that's going on here.<br /><br /><b>[Edit: Most of my interpretation below is at best incomplete, at worst wrong. 
<a href="http://goodman-number-3.blogspot.com/2015/10/more-on-miller-sanjurjo.html">See my more recent entry</a>.]</b><br /><br />So, what should we make of the claim that "hot-hand" studies of the form of Gilovich, Vallone, and Tversky (1985) (GVT) have been misinterpreted? If GVT weighted each <i>game </i>(or half, or quarter) equally, then Miller and Sanjurjo would have a point. But as far as I can tell, GVT weight each "eligible shot" equally (see their Table 1). So their conclusions are unaffected by Miller and Sanjurjo's claims; the grand interpretation of this working paper is wrong.<br /><br />That said, if there are empirical settings in which researchers erroneously weight by trial instead of by flip/shot/whatever, then Miller and Sanjurjo have made an important contribution. Can we think of such empirical settings?<br /><br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-80019117336874640312015-09-28T06:47:00.001-07:002015-09-28T06:47:31.955-07:00"Would a significant increase in the top income tax rate substantially alter income inequality?"Here's a <a href="http://www.brookings.edu/~/media/research/files/papers/2015/09/28-taxes-inequality/would-top-income-tax-alter-income-inequality.pdf">Brookings piece</a> by Gale, Kearney, and Orszag --- with some research assistance from yours truly --- which tries to perform the following accounting exercise: If we increased tax rates on the wealthy, and there were no behavioral effects, how much would the after-tax Gini decrease? The answer is "not very much at all."<br /><br /><blockquote class="tr_bq">Under current tax provisions, the after-tax Gini coefficient is .574. This compares to a Gini of .610 calculated over pre-tax income. Raising the top income tax rate to 45 percent reduces the Gini coefficient only from .575 to .573. 
Raising it to 50 percent brings the Gini to .571.</blockquote>Some explicit redistribution from the rich to the bottom 20% reduces inequality a bit further, but still not much.<br /><br />Why? Mostly, because we were considering changes just to the top bracket, which doesn't start until taxable income of $464,850 (for married filing jointly), which corresponds to even higher <i>gross </i>income. Changing the top bracket affects only the very top --- the top 0.5 percent or so --- while 90/10 inequality would be untouched.Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-35816469178669418722015-09-18T13:20:00.000-07:002015-09-18T13:20:12.356-07:00On Borjas (2015), The Wage Impact of the Marielitos: A ReappraisalInfluential labor economist George Borjas is out with a new <a href="http://www.hks.harvard.edu/fs/gborjas/publications/working%20papers/Mariel2015.pdf">working paper</a> revisiting the famous <a href="http://davidcard.berkeley.edu/papers/mariel-impact.pdf">Card (1990)</a> result on the Mariel Boatlift. The Boatlift was a huge, plausibly exogenous immigration shock felt by Miami in 1980. Card had originally found that the Miami labor market seemed to absorb the immigrants without an impact on native wages. Borjas' working paper challenges that result.<br /><br /><a name='more'></a><br />In brief, Borjas argues that Card should have focused on a narrower subset of the Miami labor market: specifically, that of high school dropouts, since most of the new immigrants were high school dropouts and would be competing in that market. When Borjas does this, he finds <i>huge </i>negative effects -- on the order of 30% -- on native wages in Miami shortly after the boatlift.<br /><br />I've spent some time today playing around with the data that Borjas used. (Full disclosure: I haven't yet been able to replicate his regression results exactly, but I'm within a few hundredths.) 
Borjas' central challenge is inference. First, the sample size of individuals is tiny, as Borjas acknowledges. Using the March CPS, he's looking at the wages of non-Hispanic men of a certain age, who are high school dropouts, in the Miami-Hialeah metropolitan area. In each year, there are something like 20 people who meet those criteria, and similar numbers in the placebo cities. At the end of the day, though, that just introduces measurement error in the dependent variable, which we know how to deal with.<br /><br />The bigger challenge is that he has one treatment city and a generally small number of control cities. In fact, in his regression, he doesn't even see the point of clustering his standard errors since the number of clusters is so low. The reported "robust"-to-heteroskedasticity standard errors are close to meaningless, obviously.<br /><br />He spends most of his inference effort in producing a distribution of placebo estimates, and seeing where Miami's post-Boatlift change in wages falls in that distribution. The more sophisticated way to do that is via the <a href="http://www.hks.harvard.edu/fs/aabadie/ccsp.pdf">synthetic controls method</a>, which Borjas does.<br /><br />I spent some time today looking at the simpler test, where the placebo estimate is just an uncontrolled pre-post change. In particular, a given placebo estimate is the change in some unaffected city <b>j </b>from years <b>(t,t+1,t+2)</b> to <b>(t+4,t+5,...,t+9)</b>. That is, the placebo treatment occurs at <b>t+3</b>, the pre-period is three years before that, and the post-period is six years after that. With this distribution, he plots the following graph, showing that the Mariel pre-post change in Miami is at the far left tail of the distribution. 
In particular, about 0.8% of the mass of the distribution is to the left of the Mariel effect.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ifSiMq_Pb1g/VfxtFXIbkII/AAAAAAAAAQM/MtrEagX8Rvg/s1600/Borjas%2BFigure%2B3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="http://1.bp.blogspot.com/-ifSiMq_Pb1g/VfxtFXIbkII/AAAAAAAAAQM/MtrEagX8Rvg/s400/Borjas%2BFigure%2B3.png" width="400" /></a></div><br />My replication looks pretty similar. (Borjas does some sort of weighting that I didn't do, so they're not exactly the same.)<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-KiUZLzRa1Lw/VfxxBQnbIKI/AAAAAAAAAQw/UVEjpy65g1Y/s1600/sixyear.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="http://4.bp.blogspot.com/-KiUZLzRa1Lw/VfxxBQnbIKI/AAAAAAAAAQw/UVEjpy65g1Y/s400/sixyear.png" width="400" /></a></div><br /><br />I do have one objection to this procedure, however. He chooses a six-year "post" window because that is what makes the pre-post change in Miami look as bad as possible. Given that choice, we should do the same thing for our placebo estimates: for a given city <b>j</b> and treatment date <b>t+3</b>, we should choose the <i>worst </i>treatment effect. 
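To make this concrete, the worst-window placebo procedure can be sketched in a few lines of Python. Everything here is simulated and illustrative: the panel, the city and year counts, and the "Miami" estimate of -0.15 are made up, whereas Borjas uses actual March CPS wages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the wage panel: mean log wage of high school
# dropouts in 40 placebo cities over 30 years. (All numbers here are
# made up; Borjas uses March CPS wages for actual metro areas.)
n_cities, n_years = 40, 30
wages = rng.normal(0.0, 0.05, size=(n_cities, n_years))

def worst_placebo(series, t):
    """Pre-period is years (t, t+1, t+2); placebo treatment at t+3.
    Return the worst (most negative) pre-post change over post-windows
    of length 2 through 7 starting at t+3."""
    pre = series[t:t + 3].mean()
    return min(series[t + 3:t + 3 + L].mean() - pre for L in range(2, 8))

# Placebo distribution across all cities and feasible treatment dates
# (t + 9 must stay inside the sample).
placebos = [worst_placebo(wages[c], t)
            for c in range(n_cities)
            for t in range(n_years - 10)]

# Hypothetical "Miami" estimate; the share of placebo mass to its left
# plays the role of a one-sided p-value.
miami_effect = -0.15
p_left = np.mean([p < miami_effect for p in placebos])
print(f"{len(placebos)} placebo estimates; share below Miami: {p_left:.4f}")
```

On the real panel, `p_left` is the number I report below: the share of the placebo distribution lying to the left of the Mariel estimate.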
I replicated this exercise with this change (letting the post-period be anywhere from 2 to 7 years long, whichever is worst).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-b28xLYzHo_A/VfxxKEymleI/AAAAAAAAAQ4/Ikglz95ECu4/s1600/mintreat.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="http://3.bp.blogspot.com/-b28xLYzHo_A/VfxxKEymleI/AAAAAAAAAQ4/Ikglz95ECu4/s400/mintreat.png" width="400" /></a></div>We can see that the mass of the distribution moves left a bit. Now, I find that 2.44% of the mass of the distribution is to the left of the Mariel effect --- meaning that we'd still reject the null of no effect in a two-sided test at a 5% significance level, barely.<br /><br />So, in sum, my objection doesn't overturn Borjas' conclusion. But I do wonder what happens when you repeat this exercise with the synthetic control placebos.<br /><br /><br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-36638086471482177852015-09-09T14:40:00.000-07:002015-12-15T13:19:40.666-08:00Is Tax Avoidance Socially Costly?Yesterday, I was reading <a href="http://www.nber.org/papers/w21516">Gorry, Hassett, Hubbard, and Mathur (2015)</a> in this week's NBER release. Their paper is about how taxes affect the structure of executive compensation (e.g., between cash, stock grants, and stock options). 
This motivated me to try to think carefully about the extent to which tax avoidance is socially costly.<br /><br /><a name='more'></a><br /><br /><br />The classical answer, most notably introduced by <a href="http://www.mitpressjournals.org/doi/abs/10.1162/003465399558391#.VfCaMRHBzRY">Feldstein (1999)</a>, is "yes." Specifically, Feldstein (1999) argued that the response of taxable income to tax changes --- melding in tax avoidance as well as real labor responses --- is a sufficient statistic for welfare calculations.<br /><br />The basic, textbook story is the following. Suppose agents choose total income <b>y</b> and avoidance activities <b>A </b>(which reduce taxable income); they face a local linear tax rate <b>t</b>. They pay <b>(y-A)t - I(t)</b> in taxes, where <b>I(t)</b> is virtual income (virtual income is the intercept of the local budget constraint extended to the y axis --- in the case of a flat tax, it's zero). They have increasing utility over consumption <b>c</b>, which equals <b>y(1-t)+tA+I(t)</b>, and decreasing utility over <b>A</b> and <b>y</b>. Furthermore, suppose the utility function is continuously differentiable. Facing the tax rate <b>t</b>, they maximize with respect to <b>A</b> and <b>y</b>.<br /><br />With this set-up, suppose that the government is considering a small increase in the tax rate <b>Δt</b>, which will increase revenue (per taxpayer, say) by <b>ΔR</b>. Let's define <b>ΔM</b> as the "static" or "mechanical" increase in revenue, which comes from applying the new tax rate to the existing <b>y</b> and <b>A</b> (i.e., disallowing behavioral responses). Then, as an identity, we have <b>ΔR = </b><b>ΔM + </b><b>ΔB</b>, where <b>ΔB</b> represents the change in revenue due to taxpayers' reoptimization. Typically, we think of <b>ΔB</b> as being negative for a tax increase; i.e., <b>ΔM></b><b>ΔR</b>.<br /><br />The upshot will be that the marginal dead weight loss (DWL) is <b>(</b>-<b>ΔB)</b>. 
To see this, suppose that the government is considering two proposals. The first is to increase revenue by <b>ΔR</b> by increasing tax rates by <b>Δt</b>, as mentioned above. The second is to increase revenue by <b>ΔM</b> via a lump sum, nondistortive tax.<br /><br />Under the first proposal, the government loses (-<b>ΔB)</b> relative to the second proposal. And, critically, the taxpayers are indifferent between the two proposals. Thus, the distortionary nature of the taxation has reduced total welfare by (-<b>ΔB)</b>, multiplied by however we want to weight a dollar in the hands of the government (which I'll normalize to one).<br /><br />Why are taxpayers indifferent? This relies heavily on the envelope theorem. Let <b>V(t)</b> be the indirect utility function; i.e., <b>V(t) = max<sub>y,A</sub>u(y(1-t)+tA+I(t),A,y)</b>. By the envelope theorem, the <i>total</i><b style="font-style: italic;"> </b>derivative of <b>V(t)</b> is equal to the <i>partial</i> derivative of <b>u(.,.,.)</b> with respect to <b>t</b>, evaluated at the optimal choice of <b>y</b> and <b>A</b>. Because individuals are, and remain, near the optimal choice of <b>y</b> and <b>A</b>, their reoptimization decisions have no first-order effects on utility. If this reoptimization comes from increasing <b>A</b>, then the benefit of the tax reduction offsets the marginal difficulty of increasing <b>A</b>.<br /><br />So, when we increase distortionary taxes by <b>Δt</b>--raising <b>ΔR</b> of revenue---we get <b>ΔV=</b><b>V'(t)</b><b>Δt</b><b>=u<sub>c</sub>*(-(y-A)+I'(t))</b><b>Δt</b>; this is just equal to the (opposite of the) marginal utility of consumption times the increase in mechanical tax revenue: <b>(y-A)</b><b>Δt - I'(t)</b><b>Δt = </b><b>ΔM</b>. If, instead, we were to simply confiscate <b>ΔM</b> via a lump-sum tax, the utility loss would be the same (to first order). 
Therefore, agents are indifferent between raising <b>ΔR</b> via distortionary taxes and <b>ΔM</b> via lump-sum taxes, and <b>(-</b><b>ΔB)</b> is the DWL.<br /><br />But, what if the tax reform were designed in such a way that taxpayers could costlessly avoid the tax increase? In the absurd extreme case, imagine that the top tax rate increases on paper from 39.6% to 45%, but taxpayers can check a box to have the 39.6% rate continue to apply to them. In this case <b>ΔM</b> is large --- equal to 5.4% times all income above the bottom of the highest bracket. And because avoidance is costless, <b>ΔB = -</b><b>ΔM</b>! The arguments before would say that this tax change has created a huge DWL. But that's nonsense; nothing has happened in reality.<br /><br />What's going on here is that the utility function is no longer continuously differentiable in <b>A</b>, so the envelope theorem no longer applies. This means that the taxpayer would clearly <i>not </i>be indifferent between a <b>ΔM</b> lump-sum tax (<b>ΔM </b>is large and positive!) and a "distortionary" tax raising <b>ΔR = 0</b>. The logic breaks down. In reality, the DWL is zero because the tax reform was meaningless.<br /><br />This brings me back to Gorry, et al. In their Appendix A, they claim that the (-<b>ΔB)</b> associated with deferred compensation represents the DWL. In their context, this (-<b>ΔB)</b> comes solely from the ability to shift taxation of income from high-tax periods to low-tax periods. This type of avoidance might be akin to "checking a box for a lower tax rate" --- basically costless. So the interpretation of (-<b>ΔB)</b> as the DWL probably overstates the distortionary nature of taxation. On the other hand, if this avoidance activity entails real costs in a nice, continuously differentiable way (e.g., by exposing an executive to incrementally more risk), then Gorry, et al.'s analysis is probably more appropriate. 
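The indifference argument above is easy to check numerically. Here is a minimal Python sketch under assumed functional forms (quadratic costs of earning and of avoidance, a flat tax so that I(t)=0, and arbitrary parameters of my own choosing); it verifies that the taxpayer's first-order utility loss from a small tax increase equals the mechanical revenue gain ΔM, leaving (-ΔB) as the welfare cost:

```python
# Sanity check on the envelope-theorem argument, using assumed quadratic
# costs (these functional forms are mine, not Feldstein's or Gorry et al.'s):
# u = c - y^2/(2a) - A^2/(2b), with a flat tax so I(t) = 0 and
# c = y(1-t) + t*A.
a, b = 1.0, 0.3

def optimum(t):
    y = a * (1 - t)  # FOC for income: (1 - t) = y/a
    A = b * t        # FOC for avoidance: t = A/b
    return y, A

def utility(t, y, A):
    c = y * (1 - t) + t * A
    return c - y**2 / (2 * a) - A**2 / (2 * b)

def revenue(t):
    y, A = optimum(t)
    return t * (y - A)

t, dt = 0.3, 1e-4
y0, A0 = optimum(t)

dM = (y0 - A0) * dt                # mechanical revenue change
dR = revenue(t + dt) - revenue(t)  # actual revenue change
dB = dR - dM                       # behavioral component (negative here)

y1, A1 = optimum(t + dt)
dV = utility(t + dt, y1, A1) - utility(t, y0, A0)

# Envelope theorem: the taxpayer's loss dV matches -dM to first order,
# so the deadweight loss of the reform is -dB.
print(f"dV = {dV:.8f}, -dM = {-dM:.8f}, DWL = -dB = {-dB:.8f}")
```

With this smooth avoidance cost, dV tracks -dM up to second-order terms and dB is negative; replacing the quadratic cost with the costless check-the-box avoidance described above puts a kink in the problem, and the equality fails.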
I'll have to think more carefully about this in their context to say anything more conclusive.<br /><br /><br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-81548541125598588392015-09-08T11:46:00.004-07:002015-09-08T11:46:57.015-07:00A Toy Model of Repatriation of Foreign Earnings of U.S. Corporations (or, How Congress Keeps Shooting Itself in the Foot)Frequently, we hear reports out of Washington that, while "tax reform is dead", the parts of the corporate tax involving foreign earnings are so self-evidently horrible that we might see a small-scale reform to this part of the tax code.<br /><br />This stylized fact seems largely true: the <a href="http://www.bloomberg.com/news/articles/2015-03-04/u-s-companies-are-stashing-2-1-trillion-overseas-to-avoid-taxes">most recent estimates</a> suggest that U.S. corporations are holding over $2 trillion in "profits" overseas; these profits, if repatriated, would be subject to a tax equal to the difference between the U.S. corporate rate (35%) and whatever was paid initially to the foreign country [omitting some details]. Members of Congress would love to see this cash brought home, even if the benefits only accrue to shareholders and executives, as the recent literature has suggested. (Of course, the profits <a href="http://www.wsj.com/articles/SB10001424127887323301104578255663224471212">need not actually be "held" overseas</a>; we just mean that some controlled foreign corporation has yet to pay its U.S. parent corporation a big fat dividend of those profits.)<br /><br /><br /><a name='more'></a><br />With this in mind, I went looking for a simple model of business investment taking into account the basic features of how the U.S. taxes foreign profits of U.S. corporations, and I couldn't find one --- if you know of one, send it my way! 
(I did seem to recall seeing something of this sort in <a href="http://www.amazon.com/Taxes-Business-Strategy-5th-Edition/dp/0132752670">this textbook</a> back when I worked for Bob Pozen.)<br /><br />So, I wrote down as simple a model as I could imagine. There are two periods. In the initial period, corporations are endowed with domestic capital K<sub>D0</sub> and foreign capital K<sub>F0</sub>. In this initial period, the corporation chooses its dividends D<sub>0</sub> and tomorrow's allocation of capital (K<sub>D1</sub> and K<sub>F1</sub>). If repatriations (K<sub>F0</sub>-K<sub>F1</sub>) are positive, then the corporation owes a repatriation tax τ<sup>R</sup> applied to that amount (that's saying that <i>all </i>of the K<sub>F0</sub> represents "profits" held overseas). In the second period, the corporation uses its capital (and no other factor) to produce at home and abroad. Let Q<sub>D</sub>(.) and Q<sub>F</sub>(.) denote the net-of-depreciation production function; consider these as having already subtracted out current corporate income taxes. The corporation then distributes all of its earnings and assets as a dividend D<sub>1</sub>, after paying any applicable repatriation tax, again at rate τ<sup>R</sup>. The objective function is just D<sub>0</sub>+βD<sub>1</sub>, where β=1/(1+ρ).<br /><br />The solution to this model is quite simple. K<sub>D1</sub> is always pinned down by the condition that ρ=Q<sub>D</sub>'(K<sub>D1</sub>). This is just saying that large U.S. firms have essentially an infinite supply of financing available at a constant rate.<br /><br />More interestingly, the repatriation decision is governed entirely by the initial stock of foreign capital, K<sub>F0</sub>. First, if the marginal product of foreign capital evaluated at the level of <i>initial</i> foreign capital--- that is, Q<sub>F</sub>'(K<sub>F0</sub>)---is smaller than ρ, the firm will adjust its foreign capital downward by repatriating until the marginal product is equal to ρ. 
Second, if this quantity is larger than ρ/(1-τ<sup>R</sup>), then the firm will adjust its foreign capital upward by <i>expatriating</i> until the marginal product equals ρ/(1-τ<sup>R</sup>). Third, if Q<sub>F</sub>'(K<sub>F0</sub>) is between ρ and ρ/(1-τ<sup>R</sup>), the firm will stand pat.<br /><br />Let me restate the first result in a different way: So long as Q<sub>F</sub>'(K<sub>F0</sub>) is less than ρ, the firm repatriates. Furthermore, it is easy to see that, in this case, the repatriation decision is completely unaffected by the presence of the repatriation tax! Suppose a firm repatriates an extra dollar. It gets an extra 1-τ<sup>R</sup> in domestic capital, which will create (1-τ<sup>R</sup>)(1+Q<sub>D</sub>'(K<sub>D1</sub>)) in output tomorrow. It loses (1+Q<sub>F</sub>'(K<sub>F1</sub>)) in foreign output tomorrow, which will be worth (1-τ<sup>R</sup>)(1+Q<sub>F</sub>'(K<sub>F1</sub>)) after repatriation tax. The first order condition (assuming positive repatriation) calls for these to be equalized --- which causes the repatriation tax to cancel! I should stress that this is not a new result; I'm pretty sure I saw a similar result in the textbook I linked above.<br /><br />There is an analogy to be made here to the equivalence of IRA and Roth IRA, with constant tax rates. With a repatriation tax, keeping foreign capital foreign represents "pre-tax dollars" being invested in an IRA (where the tax in question is the repatriation tax). Repatriating foreign capital is like using after-tax dollars to fund a Roth IRA. At the same rate of return, the two choices are identical.<br /><br />Thus, in this toy model, the effect of the repatriation tax is solely to reduce <i>expatriation</i>, not to reduce repatriation. (When a corporation is considering expatriating capital, it is comparing the domestic return to the foreign return, net of the repatriation tax to be paid in the next period on the foreign profits. 
In this context, the repatriation tax tilts the playing field in favor of domestic capital.)<br /><br />So, there's obviously something else going on that must explain the $2 trillion in overseas profits, basically sitting idle. The most likely answer is that firms do not expect τ<sup>R</sup> to remain constant between now and tomorrow. They probably anticipate some probability of a repatriation tax holiday; because everything is linear, this is equivalent to a certain partial reduction in tomorrow's τ<sup>R</sup>.<br /><br />To explore this, I parameterized my model with a Cobb-Douglas production function that is identical at home and abroad, but with a higher (current) tax rate in the U.S. In the baseline, I specified τ<sup>R</sup> such that the repatriation tax would represent the difference between foreign tax liability and domestic tax liability. I found that even a small chance of tax holiday tomorrow causes the phenomenon of "holding profits overseas" despite better returns available in the U.S. In particular, I set ρ=0.1; a 15% chance of a repatriation tax holiday takes the foreign return threshold down from 0.1 (the repatriation tax-free benchmark) to 0.063. In other words, my toy firms are willing to forego a 50%+ increase in marginal product in order to gamble that they will be able to repatriate their profits tax-free tomorrow.<br /><br />Put in this light, we can see how truly terrible the 2004 repatriation tax holiday was. Perhaps more concerning, all the current foreign tax reform proposals involve some sort of "transition relief" to existing foreign profits. Even if these reforms are sensible when viewed as a package, the continuing <i>discussion </i>of these plans is making the distortions to capital allocation worse. 
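For concreteness, here is a back-of-the-envelope Python sketch of the marginal repatriation condition with a holiday probability. The repatriation tax rate tau = 0.19 is an assumed value (the Cobb-Douglas parameterization above isn't fully spelled out); with it, a 15 percent holiday chance takes the threshold from ρ = 0.1 down to roughly the 0.063 figure reported above:

```python
# A firm is indifferent between repatriating a marginal dollar now,
# receiving (1 - tau)(1 + rho), and leaving it abroad to earn r and be
# taxed at tomorrow's expected rate. A holiday probability p lowers
# tomorrow's effective rate to (1 - p) * tau, since everything is linear.
# tau = 0.19 is an assumed rate, not taken from the post's calibration.

def foreign_return_threshold(rho, tau, p_holiday):
    tau_eff = (1 - p_holiday) * tau
    return (1 - tau) * (1 + rho) / (1 - tau_eff) - 1

rho, tau = 0.10, 0.19

no_holiday = foreign_return_threshold(rho, tau, 0.0)    # tax cancels: = rho
with_holiday = foreign_return_threshold(rho, tau, 0.15)

print(f"threshold with no holiday chance: {no_holiday:.3f}")
print(f"threshold with 15% holiday chance: {with_holiday:.3f}")
```

The first line reproduces the tax-neutrality result (with a constant tau, the threshold is just ρ); the second shows how even a modest holiday probability makes firms willing to sit on lower-yielding foreign capital.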
This is truly a "s--- or get off the pot" moment for Congress.<br /><br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-15972482048218767222015-08-30T05:17:00.000-07:002015-08-30T05:17:21.682-07:00A Simple Model Challenging Gruber & Saez-type Estimates of the Elasticity of Taxable IncomeOne of the most obvious ways to reduce income inequality is to increase the marginal tax rates on high earnings (and use the revenue to redistribute to the poor or to provide public services). But, of course, this comes at the cost of further distorting the decision-making of those facing the higher tax rates. The elasticity of taxable income (ETI) with respect to the net of tax rate, <b>(1-t)</b>, is a key parameter in quantifying these distortions. A higher ETI means that the welfare costs of increasing taxes are larger.<br /><br /><a name='more'></a><br /><br />The modern ETI literature essentially starts with <a href="http://www.jstor.org/stable/2138698">Feldstein (1995)</a>. Feldstein exploits the Tax Reform Act of 1986 (TRA86), which drastically reduced taxes on high earners but had smaller impacts on lower earners. Ultimately, this boils down to a difference-in-differences in a panel setting, where the first difference is pre-post, and the second difference is between high- and low-earners (who faced different tax changes). Feldstein estimated an incredibly large ETI---greater than 3 in some specifications, suggesting that distortions are huge and, in fact, we are on the "wrong" side of the Laffer Curve.<br /><br />Since Feldstein, the ETI literature (or, the literature which I am familiar with, at least) has focused on "fixing" Feldstein. First, they try to control for the fact that counterfactual trends are not parallel between high- and low-earners (both because of mean reversion and increasing income inequality, which push in opposite directions). 
Second, they explore the longer-run implications, and focus on broader income measures which are less apt to transitory manipulation. The benchmark estimates of this form come from <a href="http://piketty.pse.ens.fr/files/GruberSaez2002.pdf">Gruber and Saez (2002)</a>. They find an ETI of 0.4, which falls to an insignificant 0.1 when they look at broad income. From this, Gruber and Saez conclude that we can increase taxes on the wealthy with only modest impacts on efficiency.<br /><br />Along comes Raj Chetty, however, to <a href="http://www.rajchetty.com/chettyfiles/bounds_opt.pdf">show that this entire exercise is foolish</a>. His point is simple: Gruber and Saez are relying on individuals to actively respond, over the medium-term, to relatively small changes in the net-of-tax rate. For various reasons, we might not expect this to be the case. First, individuals might not be aware of the tax change. Second, individuals might not be able to costlessly adjust along the intensive margin. Furthermore, after putting some structure on the problem, Chetty can place upper and lower bounds on the "true" elasticity based on the observed elasticity by assuming simply that agents always locate themselves such that their lifetime utility is within 1% of the (frictionless) optimum. And the "true" elasticity --- the elasticity that governs long-run behavior --- is what really matters.<br /><br />The intuition is best given by the example on page 984 of <a href="http://www.rajchetty.com/chettyfiles/bounds_opt.pdf">Chetty</a>. The solid black line represents the <i>observed </i>response. The dotted line shows the hypothesized "true" demand curve (the slope of which is the elasticity, since everything is in logs). 
The top panel shows the steepest possible demand associated with that observed response, and the bottom panel shows the most shallow possible demand.<br /><br />Based on this framework, Chetty bounds the "true" elasticity given by Gruber and Saez's observed estimates as between 0.00 and 4.42 (see page 1000). <i>Oomph</i>.<br /><br />Below, I put forward a simple model that shows an example of the types of optimization frictions that Chetty describes.<br /><br /><b>My beef with the ETI literature</b><br /><br />Conceptually, we are interested in the following question: what would happen to taxable income in the long-run if the tax rate were exogenously increased? Therefore, I argue that the ideal explanatory variable would use the <i>perceived </i>tax rate, <b>tP_i</b>. This is not available in the data, so Gruber and Saez use the actual marginal tax rate faced by the agent. They take a single difference, before and after a tax reform. In particular, their model is the following, basically (where the x_i is meant to deal with mean reversion, etc):<br /><b> </b><br /><blockquote class="tr_bq"><b>Δ ln(y_i) = α + β Δ ln(1-t_i) + δ x_i + u_i</b></blockquote>Obviously, they have to instrument for <b>Δ ln(1-t_i) </b>to isolate the policy variation because the tax rate is endogenous; if you increase your earnings, you'll increase your tax rate because the tax schedule is progressive. To isolate the policy variation, they simply use <b>Δ ln(1-t*_i)</b>, where <b>t*_i</b> is the tax rate that would apply in the post period, assuming no change in earnings from pre to post. For example, taxes went up for income above $450,000 starting in 2013 [I'm omitting some caveats here; let's pretend inflation is zero]. Let's suppose the pre-period is 2012 and the post-period is 2013. For someone who earns $500,000 in 2012, <b>t*_i</b> is 35% in 2012 and 39.6% [again, omitting caveats] in 2013, regardless of what he actually earns in 2013. 
For someone who earns $400,000 in 2012, <b>t*_i</b> is 35% in both years, regardless of what she actually earns in 2013.<br /><br />The problem is the following: this approach assumes that a $450,100 earner in 2013 acts as if he faces a discontinuously higher tax rate in 2013 than a $449,900 earner in 2013. This is unrealistic for two reasons. First, income fluctuates randomly. Second, people don't know their own taxable income; while they might know their gross income (e.g., their official yearly salary), taxable income is far less salient, since it subtracts deductions and exemptions.<br /><br />Put another way: <i>the $449,900 earner has been "treated" by the tax hike</i>, because he will consider there to be some probability that his realization of 2013 taxable income will be above $450,000, both because of true fluctuations and because of non-salience of what "taxable income" really is. Critically, this probability is likely to be similar for the $449,900 earner and the $450,100 earner.<br /><br />In the mechanics of the regression, this means that <b>t_i</b> is discretely different for the $449,900 and $450,100 earners. But <b>tP_i</b> is basically the same for these two agents. If we were using <b>tP_i</b> instead of <b>t_i</b> in the regression, then we would estimate a larger ETI: intuitively, we would estimate a smaller first-stage coefficient and an identical reduced-form coefficient.<br /><br /><b>My simple model</b><br /><br />So, here's my model. Suppose that individuals exert effort <b>μ</b> (measured in dollars), and the realization of pre-tax income is <b>Z=μ+σX</b>, where <b>X</b> is a standard normal random variable (which is realized after <b>μ</b> is chosen). Furthermore, let's specify (static) utility as <b>θc − (1+1/γ)<sup>-1</sup> μ<sup>1+1/γ</sup></b>, where consumption <b>c = Z - T(Z)</b> (and <b>T(.)</b> is the tax schedule).
I'm considering <b>θ</b> to be heterogeneous across individuals, while <b>γ</b> is shared across the population.<br /><br />(Note that this utility function abstracts away from income effects in the choice of <b>μ</b>, which I argue is reasonable given that Gruber and Saez estimate an income effect near zero. I recognize the irony of using one Gruber and Saez result to attack another.)<br /><br />In the case where <b>σ=0</b>, the optimal choice of <b>μ</b> (which equals <b>Z</b> because I have eliminated all uncertainty in this special case) is given simply by <b>(θ(1-t))<sup>γ</sup></b>, where <b>t</b> is the marginal tax rate. Furthermore, it is straightforward to show that the ETI is equal to <b>γ</b>.<br /><br />In the case where <b>σ>0</b>, we need to consider an expected utility maximization problem. The linearity of utility in consumption means that this problem is trivial when the tax function is linear (i.e., if we have a flat tax). But when the tax function is non-linear (e.g., progressive), the optimization is non-trivial, though still tractable.<br /><br />In a simple case with two tax rates, the optimization problem turns out to have an elegant solution. The optimal choice of effort <b>μ</b> is the same as in the <b>σ=0</b> case, except that agents act as if they faced a convex combination of the two tax rates, where the weight on each tax rate is the ex-ante probability that they will end up facing the tax rate in question --- a probability which is a function only of <b>θ</b>.<br /><br />How does this relate to Gruber and Saez? Consider two individuals who straddle $450,000 in 2013, and assume that counterfactual trends are, in fact, parallel (i.e., abstract away from mean reversion and divergence in the income distribution). The Gruber and Saez model assumes that the guy on the low side acted in 2013 as if he were facing a 35% tax rate, while the guy on the high side acted in 2013 as if he were facing the 39.6% tax rate.
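To make the convex-combination result concrete, here's a minimal numerical sketch. The fixed-point solver and all parameter values (bracket rates, kink, σ, γ) are my own illustration, chosen to match the running example; the economics is just the two-bracket solution described above.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

T_LO, T_HI, KINK = 0.35, 0.396, 450_000.0  # stylized 2013 schedule
SIGMA, GAMMA = 70_000.0, 1.0               # illustrative parameters

def effective_rate(theta, iters=200):
    """Solve the fixed point: effort mu is optimal against an 'as-if'
    rate t_eff, which is the probability-weighted average of the two
    bracket rates, with the top-bracket probability evaluated at mu."""
    mu = theta * (1 - T_LO)  # initial guess (gamma = 1 case)
    for _ in range(iters):
        p_hi = 1.0 - norm_cdf((KINK - mu) / SIGMA)       # P(Z > kink)
        t_eff = (1 - p_hi) * T_LO + p_hi * T_HI          # convex combination
        mu = (theta * (1 - t_eff)) ** GAMMA              # re-optimize effort
    return t_eff

# Two agents whose frictionless (sigma = 0) incomes straddle the kink:
low = effective_rate(449_900 / (1 - T_LO))
high = effective_rate(450_100 / (1 - T_LO))
# Both act as if facing nearly the same rate, strictly between 35% and 39.6%.
```

Running this shows the point of the model: the as-if tax rates of the two agents are essentially identical, rather than jumping discretely at the kink the way the regression mechanics assume.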
If these individuals are instead behaving according to my model, they'll both tend to have similar values of <b>θ</b>, so they'll tend to be acting in 2013 as if they're facing approximately the same tax rate as each other.<br /><br />To explore how this might matter quantitatively, I ran the Gruber and Saez empirical model on a simulated dataset that behaved according to my model. (One empirically important caveat: to prevent mean reversion, I assume that <b>X_i,post</b> follows a normal distribution with variance one and mean equal to the realization of <b>X_i,pre</b>, and agents know this.) The inputs to this simulation are as follows:<br /><br /><ul><li>I let <b>θ</b> be distributed normally in the population, such that the mean value of <b>θ</b> would lead to an optimal choice of effort of $450,000 in the <b>σ=0</b> case, and I let <b>θ</b> have a standard deviation equal to one fourth its mean.</li><li>I let <b>σ</b> equal $70,000.</li><li>I let <b>γ</b> equal 1.</li></ul>When I run this, I get a tightly estimated coefficient of <b>0.62</b>, relative to the "true" elasticity of 1.<br /><br />What is my conclusion from this? My model doesn't explain anywhere close to the entire Chetty bound --- my model (with its wholly arbitrary inputs) says that the observed elasticity could be something like 60% of the truth, while the Gruber/Saez estimate is about 3% of Chetty's upper bound. But this simple model provides a relatively tractable example of the sorts of factors pushing Gruber/Saez-type estimates too close to zero.<br /><br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0tag:blogger.com,1999:blog-7169607771498045604.post-73535499339711193132015-08-12T05:03:00.001-07:002015-08-12T05:03:59.595-07:00Sallee, West, and Fan (2015): Do Consumers Recognize the Value of Fuel Economy?
Evidence from Used Car Prices and Gasoline Price Fluctuations. From this week's NBER release is a great paper showing how to isolate variation you care about while holding a lot of things constant. <a href="http://www.nber.org/papers/w21441">Abstract</a>:<br /><br /><blockquote class="tr_bq">Debate about the appropriate design of energy policy hinges critically on whether consumers might undervalue energy efficiency, due to myopia or some other manifestation of limited rationality. We contribute to this debate by measuring consumers' willingness to pay for fuel economy using a novel identification strategy and high quality microdata from wholesale used car auctions. We leverage differences in future fuel costs across otherwise identical vehicles that have different current mileage, and therefore different remaining lifetimes. By seeing how price differences across high and low mileage vehicles of different fuel economies change in response to shocks to the price of gasoline, we estimate the relationship between vehicle prices and future fuel costs. Our data suggest that used automobile prices move one for one with changes in present discounted future fuel costs, which implies that consumers fully value fuel economy. </blockquote><br />The most well-known market failure in the market for carbon-producing goods is the negative externality of pollution: consumers rationally do not internalize the harm that their carbon emissions will place on others. The Econ 101 solution for this is a Pigouvian carbon tax or, equivalently (in terms of efficiency), a cap-and-trade system.<br /><br />But some argue that consumer inattention causes a second market failure: consumers undervalue their own savings from energy efficiency. This failure would cause the level of carbon emissions to be too high even under the optimal Econ 101 Pigouvian tax.
As a result, there is a long literature (with which I'm not too familiar) that tries to estimate consumers' valuation of fuel efficiency.<br /><br />At first glance, the simplest way of answering this question in the context of automobiles would be cross-sectional: compare the sales prices of cars with varying fuel efficiency, while richly controlling for observable characteristics. Of course, the price of a given car is substantially determined by unobservable characteristics which are correlated with fuel economy, so this strategy is not credible.<br /><br />A slightly more sophisticated strategy would exploit changes in the price of gasoline, and compare the change in price for high-efficiency and low-efficiency vehicles. An increase in the price of gasoline should cause the price of a Hummer to fall by more than the price of a Camry. For this to measure consumer valuation of energy efficiency, there can't be anything else differentially occurring for high- and low-efficiency vehicles that is correlated with energy prices. But if more fuel-efficient models are introduced in response to a fuel price increase---increasing competition in that segment---we could see a fall in the price of high-efficiency vehicles that isn't caused by consumer valuation (see Langer and Miller (2013)).<br /><br />Enter Sallee, West, and Fan (2015). Their strategy makes use of more subtle variation. They use variation in the <i>odometer </i>readings, interacted with variation in fuel prices. Intuitively, fuel prices should matter less for car prices if the car has a shorter expected life---i.e., the change in the present discounted cost of fuel will be smaller if the life is shorter.
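As a back-of-the-envelope illustration of that intuition, here is a sketch of how the present discounted value of remaining fuel costs shrinks with the odometer reading. All inputs (total vehicle life, miles per year, discount rate, MPG) are my own hypothetical numbers, not the paper's:

```python
def pdv_fuel_cost(odometer, gas_price, mpg,
                  total_life_miles=200_000, miles_per_year=12_000, r=0.05):
    """PDV of remaining lifetime fuel costs. A higher odometer reading
    means fewer remaining miles, so gas-price shocks matter less."""
    remaining_years = max(total_life_miles - odometer, 0) / miles_per_year
    annual_cost = miles_per_year / mpg * gas_price
    # annuity factor over the (possibly fractional) remaining life
    return annual_cost * (1 - (1 + r) ** -remaining_years) / r

# A $1/gallon gas-price increase hits a low-mileage 15-MPG truck much
# harder than a high-mileage one -- the variation the paper exploits.
hit_low = pdv_fuel_cost(30_000, 3.50, 15) - pdv_fuel_cost(30_000, 2.50, 15)
hit_high = pdv_fuel_cost(150_000, 3.50, 15) - pdv_fuel_cost(150_000, 2.50, 15)
```

The gap between `hit_low` and `hit_high` is exactly why the mileage-price curve of a fuel-inefficient vehicle should respond more to gas prices at low odometer readings than at high ones.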
The beauty of this strategy is that you can look <i>within</i> vehicle-month cells.<br /><br />This is, in some sense, analogous to a triple difference.<br /><br />The first difference is within a single vehicle type sold in a given month; say, two 2007 Honda Civics sold in May 2013, where the only variation is the odometer reading. We can essentially estimate the slope of the mileage-price curve, which will presumably be negative (lower-mileage cars are more valuable).<br /><br />The second difference is across vehicle types: between 2007 Honda Civics and 2007 Ford F-150s. We compare the mileage-price curves of both types of vehicles. Holding all else constant, we'd expect the 2007 Ford F-150 to have a <i>flatter</i> mileage-price curve, since higher mileage means the expected life of the vehicle --- and thus the period during which its driver will "suffer" from its relative fuel inefficiency --- is shorter.<br /><br />Of course, not all else is constant: it's possible that trucks have better longevity, or vice versa, which would contaminate the mileage-price curve. So, enter the third difference: when fuel prices are higher and lower. Intuitively, <i>the extent to which the F-150 mileage-price curve is flatter than the Honda Civic mileage-price curve should be increasing in the fuel price</i>. In practice, this triple difference is estimated by using vehicle type × month fixed effects.<br /><br />What do they find? Putting aside the caveats that their fuel-cost variables are constructed with many assumptions (to which the results may not be fully robust), they find that an increase in the PDV of fuel costs---variation in which comes solely from variation in mileage---is passed through as a 1-to-1 reduction in the wholesale purchase price.<br /><br />This, to me at least, was a somewhat surprising result. Their calculation of the PDV of the fuel cost is non-trivial, and I highly doubt that consumers are literally making that calculation.
Instead, some combination of rules of thumb and other market forces sets the market price of these cars "correctly." It is a very interesting question why rules of thumb and market forces provide the "correct" price in this market, but not in other markets that suffer from similar complexities, e.g., health care, retirement savings, etc. This, to me, seems a central question at the intersection of neo-classical and behavioral economics, to which I don't think a satisfying answer has been provided.<br />Lucas Goodmanhttp://www.blogger.com/profile/06191966565206086301noreply@blogger.com0