I'm very saddened by news of the death of Charles Kingson.
Charley was a pillar of the NYC Bar, an important member of the NYU tax community, a serious thinker and writer, delightful company, and a greatly valued professional friend. I will miss him.
Thursday, February 28, 2019
Wednesday, February 27, 2019
NYU Tax Policy Colloquium, week 6: Ruud de Mooij on profit-shifting (semi-)elasticities
Yesterday at the Tax Policy Colloquium, Ruud de Mooij of the IMF presented his paper (coauthored by Sebastian Beer and Li Liu at the IMF), International Corporate Tax Avoidance: A Review of the Channels, Magnitudes, and Blind Spots.

The piece is a meta-study of profit-shifting by multinational corporations (MNCs). That is, it in effect combines all the prior studies, using them as data to be amalgamated towards seeking an overall empirical bottom line from research to date. The prior studies' weighting may be affected by the numbers of observations they had, the confidence intervals that they found, etc., but not by the meta-studier's (so to speak) view that, say, some were better studies than others.
Profit-shifting here means "the international reallocation of profits by an MNC in response to tax differences between countries, with an aim to minimize the global tax bill. Hence, we ignore reallocation of real capital in response to tax." In other words, the response being studied is formal or artificial profit-shifting, not actual changes in where substantial economic activity takes place. In practice, of course, and as the paper notes, real and artificial reallocations are richly intertwined with each other – e.g., an MNC may put a factory here rather than there not just due to statutory tax rates, but because of how it can or can't profit-shift, with respect either to this particular factory or anything else it might happen to be doing around the world.
The paper's headline finding is that the average semi-elasticity over the period from 1990 to 2015 was 0.98, but this reflects an upward trend from 0.6 in 1990 to 1.5 in 2015. But let's build in a bit more background about why and how the issue is of interest, before discussing what the headline empirical finding here actually means.
What is profit-shifting? – Reallocation of profits by reason of tax rate differences presupposes a prior allocation that would have prevailed otherwise. It's of course well-known that source issues regarding where profits arose often lack clear answers, even assuming that one has (controversially) decided on the question of what it means for a dollar of profit to have arisen here rather than there (e.g., when there is multi-jurisdictional productive integration or else a cross-border sale). The underlying conception one has in mind might involve both "true" profit-shifting relative to a correct standard, and tax rate-motivated interpretation of source rules that leave one room to place a dollar of profit here rather than there, even holding constant one's relative substantial economic decisions about what to do where.
Why does profit-shifting matter? – The two inputs into a given MNC's source-based income taxation in a given country are the rate and the base. Suppose we think of the latter as "true" source-based income (despite that concept's limitations, as noted above) as adjusted for profit-shifting. In a high-tax country, the adjustment will presumably tend to reflect net profit-shifting out of, rather than into, the jurisdiction.
Given that the MNC's tax liability is a product of the rate and the base, profit-shifting's effect on the latter does not automatically matter all by itself. Rather, one needs to push a bit further to establish why and how it might matter.
Suppose that a given country wants to impose a 15 percent effective tax rate on MNCs' true source-based income. Without more to the question, one might be indifferent about whether it got there by (a) measuring the MNCs' income accurately and directly imposing a 15% rate, or (b) allowing 40 percent of the profits to be out-shifted and imposing a 25% rate on the rest.
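To check the arithmetic in the two routes, a quick sketch (the numbers are the post's hypothetical, not figures from the paper):

```python
# Two routes to a 15% effective rate on "true" source-based income
true_income = 100.0

# (a) measure income accurately and tax it at 15%
direct = true_income * 0.15

# (b) let 40% of profits shift out, tax the remainder at 25%
shifted = true_income * (1 - 0.40) * 0.25

print(direct, shifted)  # both equal 15% of true income
```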
But why not just measure the income accurately and impose the rate one likes? Well, it's not as simple as that. For one thing, if monitoring profit-shifting is costly, it might make sense to up the nominal rate in lieu of spending the extra resources to get it exactly right.
But I think there are two main considerations here, in terms of why one might care about profit-shifting, with greater practical force than just that:
a) For optical reasons, both domestic and international, one may want to apply the same statutory rates to MNC income and that earned by purely local businesses. Yet one may want the MNCs to face lower effective rates, because they are more mobile. A relatively extreme example would be Ireland's quite rationally, from a unilateral national welfare standpoint, offering Apple a special tax deal. Allowing just the "right" amount of profit-shifting, but not "too much," may be a convenient (if not exactly first-best) way of getting there.
b) Despite its artificial character, profit-shifting generally isn't costless, so requiring companies to engage in it may create deadweight loss. (Then again, differential tax rates around the globe may also lead companies to incur deadweight loss, and profit-shifting opportunities can enable them to reduce that other DWL.) But even insofar as profit-shifting induces MNCs to incur extra DWL, a given jurisdiction may not care much about it, on the view that the individuals bearing it may be foreign (e.g., shareholders across global capital markets) rather than domestic.
Semi-elasticities vs. elasticities – Again, the paper finds an MNC profit-shifting semi-elasticity of 0.98 for the entire period from 1990 to 2015, but rising throughout the period so that by 2015 it stood at about 1.5. What does this mean in plainer English?
A semi-elasticity, in this particular context, is the percentage change in reported profits per 1 percentage-point change in the corporate tax rate. Thus, if the semi-elasticity is 1.5 throughout, then reported profits – holding the real location of investment constant – will decline by 1.5 percent whether the corporate rate rises from (say) 20% to 21%, 33% to 34%, or 66% to 67%.
Econometric research often offers elasticities, rather than semi-elasticities. Had the paper's measure been stated in terms of elasticities, rather than semi-elasticities, it would have offered a number for the change in reported profits, by reason of changes to profit-shifting, as the corporate tax rate changed by a percentage of its prior level.
If the corporate rate rises from 20% to 21%, it has increased by 5% of its prior level. Accordingly, here a semi-elasticity of 1.5 = an elasticity of 0.3 (i.e., as stated in terms of a 1% change in the prior corporate rate, as distinct from in the absolute corporate rate).
Likewise, a semi-elasticity of 1.5 as the corporate rate increases from 33% to 34% (i.e., by 3% of its prior level) equates to an elasticity here of 0.5.
And a semi-elasticity of 1.5 as the corporate rate increases from 66% to 67% (i.e., by about 1.5% of its prior level) equates to an elasticity here of 1.0. This would imply, all else equal, that 66% was the revenue-maximizing corporate rate.
In short, constant semi-elasticities would imply rising elasticities, since the same 1% increase in the absolute corporate rate becomes an ever-smaller relative increase as we get higher up.
The fact that the U.S. corporate rate just changed by so much – from 35% to 21%, counting the federal component only – adds interest to the question of whether one should think of the semi-elasticity as likely to be comparable at different absolute rate levels. Other changes to the U.S. corporate tax rate by reason of the 2017 tax act may also affect the answer to this question. Thus, suppose we think of the enactment of GILTI as meaning that U.S. MNCs are no longer looking at driving their global tax rate down from 35% to 0% when they profit-shift out of the U.S., but instead just from 21% to 13.125%. (This is based on the GILTI marginal rate of effectively 10.5%, increased for the foreign tax liability that would be needed to zero out one's GILTI liability, given that foreign taxes are only 80% creditable thereunder.) This might conceivably reduce both the elasticity and the semi-elasticity of profit-shifting for such MNCs for the post-2017 period, relative to pre-2018.
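The 13.125% figure can be reproduced from the two GILTI parameters just mentioned (a sketch, not the full statutory computation):

```python
# Foreign tax rate at which 80%-creditable foreign taxes fully offset
# the 10.5% effective GILTI rate: solve 0.8 * r = 0.105
gilti_effective_rate = 0.105
creditable_fraction = 0.80
breakeven_foreign_rate = gilti_effective_rate / creditable_fraction
print(f"{breakeven_foreign_rate:.3%}")  # 13.125%
```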
One also should presumably be cautious about assuming that the same semi-elasticities apply to very large, as opposed to much smaller, tax rate changes. For example, suppose that the semi-elasticity is simply 1.5 throughout. Then lowering the U.S. corporate tax rate from 35% to 21% would have caused about a 23 percent increase in the reported U.S. corporate tax base. As per the paper's Table 6 (on page 20), a 23 percent increase in the reported 2015 U.S. corporate tax base would have caused it to be more than 100 percent of the year's "true" corporate tax base. (This is basically 1.5 percent times 14, upped slightly by reason of compounding.)
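The parenthetical's compounding can be verified in a couple of lines (a sketch using the assumed 1.5 semi-elasticity):

```python
# 1.5% base growth per percentage point of rate cut, compounded over
# the 14-point cut from 35% to 21%
semi = 0.015
points_cut = 35 - 21
base_growth = (1 + semi) ** points_cut - 1
print(f"{base_growth:.1%}")  # about 23%
```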
This would have been by no means paradoxical or inconsistent. It would simply have meant that there was now, in the hypothetical 2015 to which 2018 law applied, net inward profit-shifting to the U.S., rather than net outbound from the U.S.
Nonetheless, I personally would have thought that a surprising result, given that there are still plenty of tax havens out there. On the other hand, it is true that some set of things constrains profit-shifting, or else reported profits would be zero everywhere apart from in the havens. But I don't think that either I personally, or we collectively, have a good handle on the cost functions and everything else that determine how much profit-shifting occurs. What exactly limits it? We may have some general ideas, but not very well-honed and specific ones.
Doing some simple arithmetic with the numbers in Table 6, had the 2015 U.S. reported corporate tax base been 23 percent greater, but the tax rate 14 percentage points lower, there would have been an overall revenue loss of about $80 billion. Although this ignores real (as opposed to mere profit-shifting) effects on where investment and economic activity are actually located, it helps one to see why it was so totally obvious (at least to the fair-minded) that the 2017 U.S. corporate rate cut would lose a ton of revenue, unless there was a wildly unrealistic level of inbound real responses.
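As a rough cross-check of the direction of that result (my own back-of-the-envelope, not the paper's Table 6 computation): revenue is rate times base, so a 23 percent larger base taxed at 21% rather than 35% retains only about three-quarters of the revenue.

```python
old_rate, new_rate = 0.35, 0.21
base_growth = 1.23  # from the 1.5 semi-elasticity applied to a 14-point cut

revenue_ratio = base_growth * new_rate / old_rate
print(f"revenue retained: {revenue_ratio:.0%}")  # about 74%
```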
Wednesday, February 20, 2019
Amazon's zero federal income tax bill last year
There's been a lot of press coverage recently of the fact that Amazon paid zero federal income taxes last year, despite earning $11 billion in profits (as per its financial statements). I've been too busy with teaching and other work to take a close look at this story, but I know enough about the general issue to have realized that it doesn't tell us anything definitive prior to considering how exactly Amazon got there.
Now, however, Matthew Yglesias has done me the favor of explaining how Amazon got its taxes down from over $2 billion (had it paid 21% on $11 billion) to zero.
If this had happened in 2017 or earlier, net operating losses (NOLs) might have been the lead suspect, at least before one actually examined the evidence. Amazon had big losses for years, and there's absolutely nothing wrong if a company (genuinely) loses $11 billion in Year 1, then earns $11 billion in Year 2, and pays no Year 2 tax by reason of the NOLs from Year 1. But ironically, the 2017 act effectively presumes that there would be something wrong here, as it limits NOL deductions to 80% of current year profits. This was presumably directed at optics, more than at substance, although note that the disallowed excess NOLs can be carried forward indefinitely.
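The 80% limitation works roughly as follows (a sketch; it abstracts from the actual ordering and transition rules):

```python
def nol_deduction(nol_carryforward, current_year_profit):
    """Post-2017 rule: NOL deduction capped at 80% of current-year profits."""
    return min(nol_carryforward, 0.80 * current_year_profit)

# A company with $11B of NOLs and $11B of Year 2 profits can shelter
# only $8.8B; the remaining $2.2B carries forward indefinitely.
deductible = nol_deduction(11e9, 11e9)
```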
Yglesias finds three main causes in the record for Amazon's zero tax liability. The first is research and development (R&D) credits, which he notes get wide academic support, because research may yield positive externalities.
The second is temporary expensing for investing in equipment, which he notes is more controversial than R&D credits, but also gets some support across party lines. But I'd add two points here. First, expensing makes more sense when one is doing more than the 2017 act did to limit the tax benefit from effectively pairing it with interest deductions, which can cause debt-financed investment to be better than exempt. Second, there may have been a temporary transition effect insofar as, in 2018, Amazon was combining expensing for new equipment with continued depreciation for past years' equipment. This overlap from the shift between regimes would be expected to diminish in future years as expensing remains in place. And if equipment expensing is indeed allowed to expire as currently scheduled, then in the first year after the expiration Amazon's tax liability, relative to reported profits, might be unusually high (all else equal), by reason of the opposite regime shift.
The third cause for Amazon's zero tax liability, despite its $11 billion in reported earnings, is that its surging profits, by driving up its stock price, increased its deductions for paying stock-based compensation to its executives. Yglesias explains why allowing the deduction may make sense from a corporate income measurement standpoint, leaving aside the corporate governance issues that may be associated with very high executive compensation, but I'd add one more thing. The $1 billion in stock-based compensation that he reports as having been deducted by Amazon presumably DID lead to significant tax liability on the executives' part - indeed, one would presume, at a tax rate that was generally well above Amazon's 21 percent corporate rate. So the paid-out $1 billion in profits was indeed taxed to someone, and perhaps at more than the U.S. corporate rate, unless there were tax planning machinations at the individual level.
Insofar as this $1 billion was taxed by the U.S. at the individual level at a rate above 21%, I'd view that as an entirely adequate substitute for Amazon's being directly taxed on the same value.
NYU Tax Policy Colloquium, week 5: Susan Morse's "Government-to-Robot Enforcement"
"I'm sorry, Dave, I'm afraid you can't deduct that."
2001's HAL, of course, wasn't a robot, if one's definition of the term requires creature-like embodiment. But he was enough like us to be capable of going mad. (For that matter, I once inadvertently gave an otherwise well-adjusted pet iguana a seemingly neurotic aversion to going into his water bowl when there were people around - he instinctually expected to be safe when he was in there, so was startled to find I'd just take him out of the cage anyway. I concluded that at least fairly intelligent animals - iguanas, surprisingly, qualify! - can develop neurotic aversions. But I digress.)
Yesterday's colloquium guest, Susan Morse, presented an interesting paper that addresses how the rise of automation and centralization in legal compliance may transform the character of enforcement. These days, lots of tax filing involves the use of software - for example, Turbo Tax, H&R Block, TaxAct, Tax Slayer, Liberty Tax, and proprietary products that, say, a leading accounting firm might deploy with respect to its clients.
These programs, though hardly on the more sentient side of AI (unless one agrees with David Chalmers about thermostats), might nonetheless have their own versions of "I'm sorry, Dave, I'm afraid you can't deduct that." An example that I've heard about, from a few years back, concerns Turbo Tax and how to allocate business vs. personal use regarding expenses that relate to a second home which one also rents out. By days of specific business versus personal use, or using a 365-day base? Apparently there are arguments for both approaches, but Turbo Tax either heavily steered people one way, or actually "refused" to do it the other way.
What's more, they might offer an opportunity for centralized enforcement - for example, for the IRS's directly collecting from Turbo Tax the taxes underpaid on Turbo Tax filings, at least where this reflected an error in the Turbo Tax program. The amount might be estimated, rather than calculated precisely as to each particular Turbo Tax-filed return.
In this scenario, if we assume that Turbo Tax isn't going to try to get the money from individual filers (and note its current practice of holding customers harmless for extra taxes paid by reason of its errors or oversights), then in effect it will add expected IRS payouts into its prices, making it somewhat like an insurer.
The paper's goal is not to advocate approaches of this kind, but rather to say that their increasing feasibility means we should think about them, and about the broader opportunities and challenges presented by rising automation and centralization, both in tax filing and elsewhere.
I myself have tended to see Turbo Tax, which I used until recently, as little more than a glorified calculator and form filler-outer. For example, it allows one to spare oneself the enormous nuisance of computing AMT liability (a real issue for New Yorkers pre-2017 act), or of having forgotten to include a small item until after one had already made computations based on adjusted gross income. So Intuit would really have had to screw up something basic, in order for Turbo Tax to have gotten my federal income tax liability wrong.
Nonetheless, especially if these programs become more HAL-like, but even just today when they offer a data and collection source, they can become important loci for federal enforcement and collection efforts. The paper notes, however, that there might be issues both of capture (Intuit manipulates government policymakers to favor its interests) and of reverse capture (Intuit, despite incentives to please customers that push the other way, decides on "I'm sorry, Dave" by reason of its relationship with the tax authorities).
Here's an example that occurs to me - although I suspect it's not actually true. New York State created certain charities, gifts to which qualify for an 85% credit against state income tax. Thus, if at the margin I can't deduct any further state income taxes on my federal return, giving a dollar to such a charity costs me only 15 cents after state tax, but leaves me 22 cents after federal tax, if a full $1 charitable deduction is permissible and my federal marginal rate is 37%.
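The arithmetic in that example, as a sketch (the 85% credit and 37% marginal rate are the post's figures):

```python
gift = 1.00
state_credit = 0.85 * gift    # 85% credit against state income tax
federal_saving = 0.37 * gift  # full charitable deduction at a 37% marginal rate

net_cost = gift - state_credit - federal_saving
print(f"net cost per dollar given: {net_cost:+.2f}")  # -0.22: 22 cents ahead
```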
The Treasury has taken the view that, under these circumstances, my permissible federal deduction would only be 15 cents (i.e., the contribution minus the value of the state credit) - even though claiming simple state charitable deductions is not thus treated. But the Treasury might be wrong - i.e., it depends on what the courts ultimately decide, if the issue is litigated. Suppose, however, that Turbo Tax, which has you list the names of the charities to which you have given deductible contributions, in effect said "I'm sorry, Dave" once you had typed in the requisite name. This would in effect be reverse capture (although I doubt that Turbo Tax actually works this way, and it wouldn't be hard for taxpayers to think of simple workarounds). It might impede taking the deductions, and then fighting the IRS in court if necessary, while using Turbo Tax.
One of the paper's important themes concerns the relationship between (1) finding someone (such as tax software providers) to serve an insurance function, and (2) being able to improve taxpayer incentives, at least in one particular dimension, by having no-fault penalties for underpayment of tax. (I wrote about this issue here.)
To illustrate the reasons for and problems with no-fault penalties, consider the following toy example: Suppose I can either owe $50 of tax with certainty, or else engage in a transaction the correct treatment of which is legally uncertain. If I do it and the issue is then resolved, there's a 50% chance that I'll owe zero, and a 50% chance that I'll owe $100. So my expected liability is $50 either way - assuming that the latter transaction will definitely be tested.
In reality, however, what we call the "audit lottery" means that I can do the transaction, report zero liability, and be highly likely never to have it examined. Suppose that the chance of its being examined was as high as 50%. Even under that, probably quite unrealistic, scenario, my expected tax liability, if I do the transaction, is only $25. 50% of the time it's never challenged, and 50% of the time when it's challenged I win.
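The audit-lottery arithmetic in the toy example, as a sketch:

```python
# Expected liability if the uncertain position is taken
p_audit = 0.50            # chance the position is ever examined
p_lose_if_audited = 0.50  # chance of losing if it is examined
liability_if_lose = 100

expected_liability = p_audit * p_lose_if_audited * liability_if_lose
print(expected_liability)  # 25.0, versus 50 if testing were certain
```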
This is actually a pervasive issue in tax planning, inducing taxpayers to favor taking uncertain and even highly aggressive positions because they might never be challenged. The key here is that one generally won't be penalized if one loses, if the position one took was sufficiently reasonable. A 50% chance of being correct would easily meet that threshold.
The solution to this incentive problem was stated long ago in work by Gary Becker concerning efficient crime deterrence. Suppose a crime one might commit has a $100 social cost. With certainty of detection, the Becker model advocates a $100 fine (leading, of course, to the notion that there are efficient crimes, e.g., one that I commit anyway because the benefit to me is $105). But then, Becker notes, there is the issue of uncertainty of detection. If there's only a 50% chance that I would be caught, then the penalty, from this standpoint, ought to be $200.
Ditto for the above tax planning / audit lottery example. Given the 50% chance of detection, it all comes out right (in terms of ex ante incentives, ignoring risk) if we say that I have to pay $200, rather than $100, in the case where I am audited and lose. This is a no-fault or strict liability penalty, ramping up the amount I owe in order to reverse out the incentive effects of the audit lottery.
But what about the fact that I apparently did nothing wrong, yet am being penalized? Surely it's not unreasonable for me to take a position that has a 50% chance of being correct. And we don't currently require that taxpayers flag all uncertain positions in their tax returns - partly because the IRS would never be able to check more than a small percentage anyway. While I'm not being sent to jail here, there is an issue of risk. But before turning to that, consider one more path to the same result: Steve Shavell's well-known work concerning negligence versus strict liability.
Shavell doesn't have a multiplier for uncertainty of detection in his simplest model (although I'm sure he deals with it thoroughly somewhere). But he notes that strict liability produces more efficient outcomes than negligence where only the party that would face the liability is making "activity level" choices. E.g., if drivers don't have to pay for the accidents they cause unless they're negligent, they'll drive too much, by reason of disregarding the cost of non-negligent accidents. (It's more complicated, of course, if, say, the pedestrians they would hit are also deciding on their own activity levels.)
Returning to tax uncertainty, the problem with a negligence standard for underpayment penalties is that it leads to an excessive "activity level" with respect to taking uncertain positions that might be wrong yet remain unaudited. Strict liability is therefore more efficient than negligence at this margin, unless we add to the picture the equivalent of activity-level-varying pedestrians. For example, we might say that the government's losing revenues from uncertainty plus < 100% audit rates gives it an incentive to try to reduce uncertainty. But I don't personally find that a very persuasive counter-argument in this setting.
Okay, on to the problems with strict liability tax penalties. Let's suppose in the above toy example that my chance of being meaningfully audited on this issue was only 5%. Then the optimal Becker-Shavell penalty is twenty times the under-payment, or $2,000. Add a few zeroes and, say, a $10,000 tax underpayment (as determined ex post) leaves me owing $200,000. Or, if the chance of a meaningful review was 1%, the short straw leaves me owing $1 million - even though we may feel I did nothing unreasonable. (Again, the disclosure option, while in special circumstances required under existing federal income tax law, can't go very far given the costliness of review - which is itself a further complicating factor for the analysis.)
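The Becker-style multiplier in these examples is just the underpayment divided by the detection probability (a sketch; the function name is mine):

```python
def strict_liability_penalty(underpayment, detection_prob):
    """Scale the penalty so expected liability matches certain detection."""
    return underpayment / detection_prob

strict_liability_penalty(100, 0.50)     # 200: the toy example
strict_liability_penalty(10_000, 0.05)  # ~200,000: the 5% audit-chance case
```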
If I am risk-averse, the burden this imposes on me may yield deadweight loss (other than insofar as we like its deterring me). From an ex post rather than ex ante perspective, it leads to particular outcomes that we may find unpalatable.
A further, but lesser, problem is that it may be hard to compute audit probabilities accurately. Note, however, that requiring negligence is equivalent to presuming a 100% audit chance, in cases where it would not be found. From that perspective, multipliers that are still "too low" but greater than 1.0 at least do something to improve incentives around uncertainty and the audit lottery. So the risk problem arguably weighs more heavily against strict liability than the difficulty of getting the multiplier just right.
This is where insurance comes in. The above problem goes away if taxpayers with uncertain positions can and do transfer the risk, for an actuarially fair price, to counterparties that can price and diversify it properly. But tax insurance is not widely available, and is hard to price. Hence the potential appeal of recruiting entities (such as Turbo Tax) that sit in a centralized position, if doing so doesn't create overly bad problems such as adverse selection or moral hazard. (Adverse selection is inevitably an issue, however, if not all taxpayers use entities that can be recruited to serve, in effect, as insurers.)
One further issue, in this regard, on which the paper touches is the feasibility of a system that would, say, charge Turbo Tax for user underpayments that reflected factual inaccuracies in the data that one entered. Can we even imagine a system in which, if I used Turbo Tax and left out a $10,000 cash payment that someone had made to me, it was liable?
The answer would seem to be no, but actually it's a bit more complicated. Consider car insurance. The insurer will typically pay for accident costs even if they're completely the driver's fault, reflecting wildly inappropriate behavior (such as driving drunk, running red lights, texting while driving, etc.). In other words, the insurer loses if the driver is negligent, even though negligence is under the driver's control or at least influence.
How is such insurance coverage feasible? Well, it certainly creates moral hazard, but there are ways of addressing it, such as literal coinsurance (e.g., from deductibles and copays), implicit coinsurance (e.g., from collateral psychic or other accident costs to the driver that aren't covered), and future years' insurance rates that will now presumably be higher. So it's feasible to have at least some car insurance for negligent drivers, despite the issue of moral hazard.
By extension, we could conceivably have a model in which Turbo Tax was liable, at least in part, even with respect to factual errors made by its customers, so long as analogous mechanisms sufficiently addressed moral hazard. But this still of course leaves the problem of mandatorily drafting software providers to serve as insurers by imposing no-fault collective liability, if strict liability doesn't apply to taxpayers who file without using such providers.
Returning to the paper, it doesn't purport to resolve any of these questions, but rather to begin laying out and addressing them. This particular piece will be appearing shortly in the University of Illinois Law Review, but I'll be looking forward to Morse's further work in the area.
2001's HAL, of course, wasn't a robot, if one's definition of the term requires creature-like embodiment. But he was enough like us to be capable of going mad. (For that matter, I once inadvertently gave an otherwise well-adjusted pet iguana a seemingly neurotic aversion to going into his water bowl when there were people around - he instinctually expected to be safe when he was in there, so was startled to find I'd just take him out of the cage anyway. I concluded that at least fairly intelligent animals - iguanas, surprisingly, qualify! - can develop neurotic aversions. But I digress.)
Yesterday's colloquium guest, Susan Morse, presented an interesting paper that addresses how the rise of automation and centralization in legal compliance may transform the character of enforcement. These days, lots of tax filing involves the use of software - for example, Turbo Tax, H&R Block, TaxAct, Tax Slayer, Liberty Tax, and proprietary products that, say, a leading accounting firm might deploy with respect to its clients.
These programs, though hardly on the more sentient side of AI (unless one agrees with David Chalmers about thermostats), might nonetheless have their own versions of "I'm sorry, Dave, I'm afraid you can't deduct that." An example that I've heard about, from a few years back, concerns Turbo Tax and how to allocate business vs. personal use regarding expenses that relate to a second home which one also rents out. By days of specific business versus personal use, or using a 365-day base? Apparently there are arguments for both approaches, but Turbo Tax either heavily steered people one way, or actually "refused" to do it the other way.
What's more, they might offer an opportunity for centralized enforcement - for example, for the IRS's directly collecting from Turbo Tax the taxes underpaid on Turbo Tax filings, at least where this reflected an error in the Turbo Tax program. The amount might be estimated, rather than calculated precisely as to each particular Turbo Tax-filed return.
In this scenario, if we assume that Turbo Tax isn't going to try to get the money from individual filers (and note its current practice of holding customers harmless for extra taxes paid by reason of its errors or oversights), then in effect it will add expected IRS payouts into its prices, making it somewhat like an insurer.
The paper's goal is not to advocate approaches of this kind, but rather to say that their increasing feasibility means we should think about them, and about the broader opportunities and challenges presented by rising automation and centralization, both in tax filing and elsewhere.
I myself have tended to see Turbo Tax, which I used until recently, as little more than a glorified calculator and form filler-outer. For example, it allows one to spare oneself the enormous nuisance of computing AMT liability (a real issue for New Yorkers pre-2017 act), or of having forgotten to include a small item until after one had already made computations based on adjusted gross income. So Intuit would really have had to screw up something basic, in order for Turbo Tax to have gotten my federal income tax liability wrong.
Nonetheless, especially if these programs become more HAL-like, but even just today when they offer a data and collection source, they can become important loci for federal enforcement and collection efforts. The paper notes, however, that there might be issues both of capture (Intuit manipulates government policymakers to favor its interests) and of reverse capture (Intuit, despite incentives to please customers that push the other way, decides on "I'm sorry, Dave" by reason of its relationship with the tax authorities).
Here's an example that occurs to me - although I suspect it's not actually true. New York State created certain charities, gifts to which qualify for an 85% credit against state income tax. Thus, if at the margin I can't deduct any further state income taxes on my federal return, giving a dollar to such a charity costs me only 15 cents after state tax, but leaves me 22 cents ahead after federal tax, if a full $1 charitable deduction is permissible and my federal marginal rate is 37%.
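To make the arithmetic concrete, here's a quick sketch using the rates from the example (an 85% state credit and a 37% federal marginal rate; the variable names are mine):

```python
# Arithmetic from the example: a $1 gift, an 85% state credit,
# and a 37% federal marginal rate with a full $1 deduction.
gift = 1.00
state_credit = 0.85 * gift          # credit against state income tax
federal_rate = 0.37

cost_after_state = gift - state_credit         # 15 cents out of pocket
federal_tax_saved = federal_rate * gift        # 37 cents saved federally
net_position = federal_tax_saved - cost_after_state  # 22 cents ahead

print(round(cost_after_state, 2))   # 0.15
print(round(net_position, 2))       # 0.22
```

That 22-cent profit per dollar given is what makes the arrangement interesting from a planning standpoint.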
The Treasury has taken the view that, under these circumstances, my permissible federal deduction would only be 15 cents (i.e., the contribution minus the value of the state credit) - even though claiming simple state charitable deductions is not thus treated. But the Treasury might be wrong - i.e., it depends on what the courts ultimately decide, if the issue is litigated. Suppose, however, that Turbo Tax, which has you list the names of the charities to which you have given deductible contributions, in effect said "I'm sorry, Dave" once you had typed in the requisite name. This would in effect be reverse capture (although I doubt that Turbo Tax actually works this way, and it wouldn't be hard for taxpayers to think of simple workarounds). It might impede taking the deductions, and then fighting the IRS in court if necessary, while using Turbo Tax.
One of the paper's important themes concerns the relationship between (1) finding someone (such as tax software providers) to serve an insurance function, and (2) being able to improve taxpayer incentives, at least in one particular dimension, by having no-fault penalties for underpayment of tax. (I wrote about this issue here.)
To illustrate the reasons for and problems with no-fault penalties, consider the following toy example: Suppose I can either owe $50 of tax with certainty, or else engage in a transaction the correct treatment of which is legally uncertain. If I do it and the issue is then resolved, there's a 50% chance that I'll owe zero, and a 50% chance that I'll owe $100. So my expected liability is $50 either way - assuming that the latter transaction will definitely be tested.
In reality, however, what we call the "audit lottery" means that I can do the transaction, report zero liability, and be highly likely never to have it examined. Suppose that the chance of its being examined is as high as 50%. Even under that probably quite unrealistic scenario, my expected tax liability, if I do the transaction, is only $25: 50% of the time it's never challenged, and 50% of the time when it's challenged I win.
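The expected-value arithmetic in the toy example can be sketched as follows (a minimal illustration of the numbers above, not anything from the paper):

```python
# Toy example: $100 owed if the uncertain position loses;
# 50% chance of losing if the issue is ever examined.
underpayment = 100.0
p_lose = 0.5

# If the position is always tested, expected liability is $50 -
# the same as the certain alternative.
always_tested = p_lose * underpayment            # 50.0

# With the audit lottery: only a 50% chance of examination.
p_audit = 0.5
with_lottery = p_audit * p_lose * underpayment   # 25.0
```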
This is actually a pervasive issue in tax planning, inducing taxpayers to favor taking uncertain and even highly aggressive positions because they might never be challenged. The key here is that one generally won't be penalized if one loses, if the position one took was sufficiently reasonable. A 50% chance of being correct would easily meet that threshold.
The solution to this incentive problem was stated long ago in work by Gary Becker concerning efficient crime deterrence. Suppose a crime one might commit has a $100 social cost. With certainty of detection, the Becker model advocates a $100 fine (leading, of course, to the notion that there are efficient crimes, e.g., one that I commit anyway because the benefit to me is $105). But then, Becker notes, there is the issue of uncertainty of detection. If there's only a 50% chance that I would be caught, then the penalty, from this standpoint, ought to be $200.
Ditto for the above tax planning / audit lottery example. Given the 50% chance of detection, it all comes out right (in terms of ex ante incentives, ignoring risk) if we say that I have to pay $200, rather than $100, in the case where I am audited and lose. This is a no-fault or strict liability penalty, ramping up the amount I owe in order to reverse out the incentive effects of the audit lottery.
But what about the fact that I apparently did nothing wrong, yet am being penalized? Surely it's not unreasonable for me to take a position that has a 50% chance of being correct. And we don't currently require that taxpayers flag all uncertain positions in their tax returns - partly because the IRS would never be able to check more than a small percentage anyway. While I'm not being sent to jail here, there is an issue of risk. But before turning to that, consider one more path to the same result: Steve Shavell's well-known work concerning negligence versus strict liability.
Shavell doesn't have a multiplier for uncertainty of detection in his simplest model (although I'm sure he deals with it thoroughly somewhere). But he notes that strict liability produces more efficient outcomes than negligence where only the party that would face the liability is making "activity level" choices. E.g., if drivers don't have to pay for the accidents they cause unless they're negligent, they'll drive too much, by reason of disregarding the cost of non-negligent accidents. (It's more complicated, of course, if, say, the pedestrians they would hit are also deciding on their own activity levels.)
Returning to tax uncertainty, the problem with a negligence standard for underpayment penalties is that it leads to an excessive "activity level" with respect to taking uncertain positions that might be wrong yet remain unaudited. Strict liability is therefore more efficient than negligence at this margin, unless we add to the picture the equivalent of activity-level-varying pedestrians. For example, we might say that the government's losing revenues from uncertainty plus < 100% audit rates gives it an incentive to try to reduce uncertainty. But I don't personally find that a very persuasive counter-argument in this setting.
Okay, on to the problems with strict liability tax penalties. Let's suppose in the above toy example that my chance of being meaningfully audited on this issue was only 5%. Then the optimal Becker-Shavell penalty is twenty times the under-payment, or $2,000. Add a few zeroes and, say, a $10,000 tax underpayment (as determined ex post) leaves me owing $200,000. Or, if the chance of a meaningful review was 1%, the short straw leaves me owing $1 million - even though we may feel I did nothing unreasonable. (Again, the disclosure option, while in special circumstances required under existing federal income tax law, can't go very far given the costliness of review - which is itself a further complicating factor for the analysis.)
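The Becker-style multiplier described above amounts to dividing the underpayment by the detection probability, so that expected liability matches the full-detection case; a minimal sketch:

```python
# Becker-style penalty: scale the underpayment by 1 / p(detection),
# so that the expected payment equals the underpayment itself.
def becker_penalty(underpayment, p_detection):
    return underpayment / p_detection

print(becker_penalty(100, 0.5))        # 200.0
print(becker_penalty(10_000, 0.05))    # 200000.0
print(becker_penalty(10_000, 0.01))    # 1000000.0
```

The last two lines are the "add a few zeroes" cases in the text: a $10,000 underpayment with a 5% audit chance implies a $200,000 penalty, and a 1% chance implies $1 million.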
If I am risk-averse, the burden this imposes on me may yield deadweight loss (other than insofar as we like its deterring me). From an ex post rather than ex ante perspective, it leads to particular outcomes that we may find unpalatable.
A further, but lesser, problem is that it may be hard to compute audit probabilities accurately. Note, however, that requiring negligence is equivalent to presuming a 100% audit rate in cases where negligence would not be found. From that perspective, multipliers that are still "too low" but greater than 1.0 at least do something to improve incentives around uncertainty and the audit lottery. So the risk problem arguably weighs more heavily against strict liability than the difficulty of getting the multiplier just right.
This is where insurance comes in. The above problem goes away if taxpayers with uncertain positions can and do transfer the risk, for an actuarially fair price, to counterparties that can price and diversify it properly. But tax insurance is not widely available, and is hard to price. Hence the potential appeal of recruiting entities (such as Turbo Tax) that sit in a centralized position, if doing so doesn't create overly bad problems such as adverse selection or moral hazard. (Adverse selection is inevitably an issue, however, if not all taxpayers use entities that can be recruited to serve, in effect, as insurers.)
One further issue, in this regard, on which the paper touches is the feasibility of a system that would, say, charge Turbo Tax for user underpayments that reflected factual inaccuracies in the data that one entered. Can we even imagine a system in which, if I used Turbo Tax and left out a $10,000 cash payment that someone had made to me, it was liable?
The answer would seem to be no, but actually it's a bit more complicated. Consider car insurance. The insurer will typically pay for accident costs even if they're completely the driver's fault, reflecting wildly inappropriate behavior (such as driving drunk, running red lights, texting while driving, etc.). In other words, the insurer loses if the driver is negligent, even though negligence is under the driver's control or at least influence.
How is such insurance coverage feasible? Well, it certainly creates moral hazard, but there are ways of addressing it, such as literal coinsurance (such as from deductibles and copays), implicit coinsurance (such as from collateral psychic or other accident costs to the driver that aren't covered), and future years' insurance rates that will now presumably be higher. So it's feasible to have at least some car insurance for negligent drivers, despite the issue of moral hazard.
By extension, we could conceivably have a model in which Turbo Tax was liable, at least in part, even with respect to factual errors made by its customers, so long as analogous mechanisms sufficiently addressed moral hazard. But this still of course leaves the problem of mandatorily drafting software providers to serve as insurers by imposing no-fault collective liability, if strict liability doesn't apply to taxpayers who file without using such providers.
Returning to the paper, it doesn't purport to resolve any of these questions, but rather to begin laying out and addressing them. This particular piece will be appearing shortly in the University of Illinois Law Review, but I'll be looking forward to Morse's further work in the area.
Friday, February 15, 2019
Tax Games article: law review version
The official law review follow-up to the "Tax Games" article (and sequel) that 13 of us co-authored during the 2017 tax act's rush to enactment has now appeared in the Minnesota Law Review.
It's called "The Games They Will Play: Tax Games, Roadblocks, and Glitches Under the 2017 Tax Act," and it's available for download here.
The abstract goes something like this: "The 2017 tax legislation brought sweeping changes to the rules for taxing individuals and business, the deductibility of state and local taxes, and the international tax regime. The complex legislation was drafted and passed through a rushed and secretive process intended to limit public comment on one of the most consequential pieces of domestic policy enacted in recent history.
This Article is an effort to supply the analysis and deliberation that should have accompanied the bill's consideration and passage, and describes key problem areas in the new legislation. Many of the new changes fundamentally undermine the integrity of the tax code and allow well-advised taxpayers to game the new rules through strategic planning. These gaming opportunities are likely to worsen the bill's distributional and budgetary costs beyond those expected in the official estimates. Other changes will encounter legal roadblocks, while drafting glitches could lead to uncertainty and haphazard increases or decreases in taxes. This Article also describes reform options for policymakers who will inevitably be tasked with enacting further changes to the tax law in order to undo the legislation's harmful effects on the fiscal system."
Wednesday, February 13, 2019
Roemer paper on Kantian cooperation, part 2: tax policy implications
As the previous post noted, John Roemer has a paper and book asserting that we should think about real world prisoner's dilemmas in light of the possibility that people do not always act like selfish Nashian optimizers, but may instead base their decisions on the Kantian question of what decision would be best if adopted by everyone who is facing a given choice. Hence the classic prisoner in the dilemma may hold out rather than confess and implicate his colleagues, and an individual may choose (say) to recycle on the view that it's better for everyone than no one to do so.
From this starting point, Roemer constructs a model in which universal Kantianism would lead to Pareto-superior outcomes that were better for everyone than the set of such outcomes that would result from selfish Nashian optimization. So neoclassical economics (with selfish optimizers but perfect markets) is not the only route to maximizing efficiency. And, in his model, efficiency and equity aims no longer need be in conflict. For example, in his tax instantiation, there will be no deadweight loss whether the labor income tax rate is 0 percent, 100 percent, or anywhere in between, because workers will ignore the tax rate in deciding how much labor (yielding such income) to supply.
How does he get there? Let's start with a hypothetical society in which (a la Mirrlees) the government simply levies a labor income tax to fund a demogrant. Suppose initially that everyone has the same (1) "ability" or wage rate, (2) preferences, and indeed (3) labor income. Only, one of these individuals is now considering increasing her labor supply, thus increasing as well her income and her tax liability.
If she's a selfish Nashian optimizer, she'll evaluate the choice in light of the fact that she'll only get to keep the after-tax income. While it will also increase the revenues that are used to fund the demogrant, her share of that, in a large society, is trivial.
But now suppose she's a Kantian. She'll ask herself: How would I be affected if EVERYONE increased labor supply, and thus income, by this amount? The answer, given the society's assumed homogeneity, is that the tax and demogrant would be exactly equal. E.g., suppose that, in a 10-person society with a 100% tax rate, she earns an extra $100, but the other 9 members do so as well. She'll keep zero of the extra earnings after-tax but pre-grant, but her grant will go up by $100 (i.e., one-tenth of the newly generated $1,000).
So she bases her labor supply choice on pre-tax income, reflecting that, at any consistently applied tax rate, if everyone does the same thing the tax and demogrant will be a wash. By contrast, the selfish Nashian optimizer would ask: What if only I increase my income by $100? In this 10-person set-up, I get to keep only $10 from the increased demogrant.
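The contrast between the two modes of reasoning in the 10-person example can be laid out as follows (a sketch of the arithmetic, not Roemer's model itself):

```python
# 10-person society, 100% labor income tax funding an equal demogrant.
n = 10
tax_rate = 1.0
extra_earnings = 100.0
tax_paid = tax_rate * extra_earnings        # $100 of extra tax per earner

# Kantian reasoning: what if EVERYONE earns $100 more?
# All n members pay the extra tax, so the demogrant rises by $100 each.
demogrant_rise_all = (n * tax_paid) / n
kantian_net = extra_earnings - tax_paid + demogrant_rise_all   # 100.0: a wash

# Nashian reasoning: what if ONLY I earn $100 more?
# Only my extra tax funds the demogrant, so it rises by $10 each.
demogrant_rise_solo = tax_paid / n
nashian_net = extra_earnings - tax_paid + demogrant_rise_solo  # 10.0
```

The Kantian breaks even (so pre-tax income governs her choice), while the Nashian keeps only $10 of the $100, which is why the 100% tax rate distorts her labor supply but not the Kantian's.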
Again, the Kantian's motivation is not that maybe other people will in fact work more, too, if she does. Rather, it is that the moral, cooperative way to think about the question is to ask: What would be best if everyone made the same choice as I do?
Now let's add heterogeneity to the picture. At first, just in wage rate. Suppose she can earn $100 an hour, while all other members of the society, being less "able," can earn only $5 an hour. In John's model this makes no difference, because what she asks herself is: What if everyone worked long enough to earn $100 more? Then I would still break even from the tax plus demogrant, and so pre-tax income is the right metric.
What if we add in heterogeneous tastes? Then two individuals who are both Kantians may make different marginal choices. One deems the disutility of working more to be adequately offset by getting to consume more, as determined based on pretax income. The other likes leisure more and market consumption less, so she decides not to work more. We end up getting a transfer, at the margin, from the first of these individuals to the second. This result, while not rebutting the claim of Pareto superiority for the system, strikes me as a bit perverse, in the sense that we are transferring dollars from one who subjectively values them more at the margin, compared to leisure, to someone who subjectively values them less in this sense. But John isn't claiming that the system yields overall welfare maximization or otherwise defined optimality, and he regards its effectively decentralizing decision-making from any central planning function to all of the individual workers as a virtue.
What if we now change the model so that the government is funding goods and services with its tax revenues, rather than demogrants? This doesn't change things fundamentally, although it's true that Kantians' possibly varying beliefs regarding the benefit derived from government spending might result in differentiating their choices. But it does seem that here "pretax income" becomes a less precise statement of what the Kantian will be evaluating when making a marginal labor supply choice.
This brings us to the question from my post earlier today of exactly what sorts of questions the categorical imperative might be thought to demand that we ask. Again, "cooperate vs. defect" is easy; "how cooperate" less so. But an eminent NYU philosopher once told me that he personally decided on whether, say, to accept a consulting engagement, for which he was being offered $X, based on the pretax amount, not the after-tax amount. This appeared to reflect a Kantian feeling that it was morally wrong to look only at the after-tax amount, given that the tax payment wasn't being lost - it was merely being transferred from his individual pocket to the collective one.
That strikes me as a more salient and intuitive way to think about Kantian labor supply behavior than doing so in terms of "efficiency units in labor supply," in the manner of the Roemer paper. But why might we expect anyone to think about pretax versus after-tax income even in that way? Are people Kantian enough to do that, even assuming that it captures how they would be Kantian?
The paper notes evidence that tax compliance is higher than it "ought" to be given people's actual economic incentives (at low audit levels) and risk preferences as otherwise discerned. And the tax compliance literature extensively shows that "tax morale" - reflecting, for example, perceptions regarding others' compliance behavior, the tax system's "fairness," the overall political system's fairness, and so forth - can have a major impact on compliance behavior, even holding constant the actual "audit lottery" odds (given penalties as well as audit levels).
So the extent to which people act as Kantians by focusing on pretax income, when they make labor supply choices, might likewise reflect considerations analogous to morale in compliance. But I'm not certain that they do, since for me there's a lot of context-specific sociology to the question of how people who have non-zero Kantian inclinations will interpret the demands of a taste for cooperation in practice. There's no reason to think that they (or I) do so in a universal and logically consistent fashion.
But if a degree of Kantianism here is plausible, then one might be able to reduce labor supply elasticity by addressing morale-type considerations about social solidarity, faith and trust in government, and others' willingness to overlook tax planning considerations and focus on pretax income.
One last set of questions potentially raised by the paper goes to fleshing out how a Kantian might think about all the various choices that we face in making tax planning decisions. "Work more and thereby earn more" is only one possible choice. One could also try to apply the reasoning, say, to lawful tax avoidance (ranging from the clearly "intended" to the arguably "unintended" even if efficacious). But for now I will leave them to the reader to ponder, if he or she likes.
Tax policy colloquium, week 4: John Roemer's "A theory of cooperation in games with an application to market socialism"
Yesterday we were pleased to have John Roemer as our speaker, discussing this paper and his related forthcoming book: How We Cooperate: A Theory of Kantian Optimization. The basic thesis is intellectually important, and likely to get some attention from economists, as well as from philosophers who are willing to look over the walls of their silo, so I will discuss it in general terms here, before turning to the tax aspect that caused it to be a good fit for us in the Tax Policy Colloquium (where, of course, each week can be totally different from the ones before and after).
Prisoner's dilemmas are pervasive in public policy. One gets them whenever there are positive or negative externalities that no institutions (be they Coasean markets or Pigovian taxes and subsidies) adequately address.
Pollution and over-fishing are among the classic examples. E.g., if I want to drive my car a lot, run the heat and AC to the max, etc., but everyone's doing this causes catastrophic global warming, then, from a selfish standpoint, the best thing would be if everyone BUT me curtailed their activities suitably. But, given my individually trivial contribution to the overall problem, I'm best off defecting whether or not everyone else is cooperating, absent sanctions or other ways of internalizing to me the marginal cost of my causing carbon release.
With selfish players, a one-shot prisoner's dilemma has a simple Nash answer: everyone defects, so everyone loses relative to the case where everyone cooperated. While there may be real world mitigating solutions, such as repeat play with sanctions from the other players, wouldn't it be nice if people were willing to cooperate voluntarily, despite the selfish unilateral incentive to defect?
John answers: Not only would it be nice, but we do in fact frequently cooperate! So the Nashian view of people as always selfishly pursuing just their own welfare is inaccurate. Indeed, evolution has yielded in us a species that is unusually, and among the great apes uniquely, inclined towards cooperating with each other under suitable conditions (such as where we feel solidarity and trust towards fellow group members).
While sanctions for defection may play an important role in preserving cooperative non-Nash equilibria, they're not the only reason we cooperate. Nor is altruism the main reason, as it tends to be limited to a much smaller core group (such as immediate family) than the set of people with whom one is willing to cooperate.
John also finds it largely unhelpful to posit exotic preferences, such as a "warm glow" achieved subjectively by cooperating, as the explanation for the behavior. It seems to him both too hand-tailored (like Ptolemaic epicycles to reconcile celestial movements to data) and backwards, in the sense that I don't cooperate to get a warm glow, even if I in fact get one from cooperating. I cooperate because I believe it's right to do so.
While I see his point here, I think the "warm glow" framing is intellectually helpful for a particular reason. Even if I cooperate because I think it's right to do so, and that this differs from eating chocolate because I think it tastes good, real-world cooperators are likely to be trading off their desire to cooperate against other things they care about. Suppose I recycle because I think it's right to do so, not because the city might find out and fine me if I don't. I still would likely start recycling a lot less if, say, it took several hours a week.
John says that those who cooperate, rather than defect, in prisoner's dilemmas are generally being Kantians, as I'll discuss shortly. But while the paper we discussed yesterday doesn't discuss Kantianism that's limited by one's trading it off against selfish preferences, it does discuss conditional Kantians - that is, those whose willingness to behave cooperatively depends on how prevalent they believe cooperative behavior is in the relevant population. (See Figure 1, at page 33 of the paper, for a visual depiction of an equilibrium at which the % actually cooperating equals the % that are willing to cooperate at that level of cooperation.)
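The equilibrium idea behind Figure 1 can be illustrated with a toy fixed-point computation. The willingness function below is entirely hypothetical (it is not the paper's), chosen just to show a stable equilibrium where the share actually cooperating equals the share willing to cooperate at that observed level:

```python
# Illustrative only: a made-up willingness curve for conditional Kantians.
# 20% cooperate unconditionally; others join in proportion to
# how much cooperation they observe around them.
def willing_to_cooperate(observed_share):
    return 0.2 + 0.7 * observed_share

# Iterate toward the fixed point where observed = willing.
share = 0.0
for _ in range(100):
    share = willing_to_cooperate(share)

# Fixed point: x = 0.2 + 0.7x  =>  x = 2/3
print(round(share, 4))   # 0.6667
```

With this curve, about two-thirds of the population cooperates in equilibrium; a steeper curve or a smaller unconditional core would shift, or even eliminate, the interior equilibrium.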
I gather that philosophers have questioned this set-up, saying you aren't actually a Kantian if you're being conditional about it. While this is true as a matter of definition, once one has defined Kantians as they choose to, it is intellectually unhelpful, and would appear to be an instance of narrow-minded and retrograde siloing (an inclination that I've encountered from other disciplines, in my project on literature and high-end inequality).
Returning to prisoner's dilemmas, a Kantian who faces one may ask: What is the decision that would be best if ALL of us made it? With the classic PD structure, the answer (of course) is Cooperate, don't defect. So the Kantian does what would be best if all did it, simply because this is the right thing to do, and not based on any actual presumed effect of one's own decision on what others will decide. So the Kantian (for example) recycles - and, I would think, also considers following a code with respect to carbon emissions that, if universalized, would properly curtail global warming and other adverse climate change.
But how does one identify the proper Kantian course of action? In a simple prisoner's dilemma set-up, it's obvious, since there are just two choices, Cooperate and Defect. Maybe one should think of recycling that way. As to global carbon abatement, it's not as clear, not to mention that the motivation to cooperate (even assuming one can determine how) will be weaker if one is among John's conditional Kantians.
John notes that many people do in fact recycle, beyond the point that sanctions and conventional incentives would seem to be inducing. There may also be a bit of Kantian behavior around carbon abatement. For example, while I am sure I do not do nearly enough in that regard, or as much as I would do if I were responding via standard incentives to a global carbon tax that had been set at an appropriate level, it is something I have in mind, and that induces me to disfavor what I feel is overly wasteful behavior. So yes, I am, upon reflection, somewhat of a Kantian, albeit a conditional one both in John's sense of being influenced by what I think others are doing, and my sense of trading off my preference for doing what is right in the Kantian sense against more selfish considerations.
In calling my own behavior Kantian, however imperfectly so, I am agreeing with John about the underlying psychology. Whether or not the categorical imperative is exactly the right formulation, the underlying sentiment of fairness does appear to me (from self-reflection) to have something to do with symmetry and consistency between what people do for themselves and expect from others. And in my case, but I suspect for many other people as well, a lot of it is driven by notions of reciprocity. I neither want to be a sucker, who cooperates when everyone else is defecting, nor a jerk, who defects when everyone else is cooperating. This gives psychological appeal to conditional Kantianism. And it's not just me, if tit-for-tat sentiments, embracing both the good and the bad, are more generally intuitive.
But what does all this have to do with tax? I'll address that in a separate post.
Prisoner's dilemmas are pervasive in public policy. One gets them whenever there are positive or negative externalities that no institutions (be they Coasean markets or Pigovian taxes and subsidies) adequately address.
Pollution and over-fishing are among the classic examples. E.g., if I want to drive my car a lot, run the heat and AC to the max, etc, but everyone's doing this causes catastrophic global warming, then, from a selfish standpoint, the best thing would be if everyone BUT me curtailed their activities suitably. But, given my individually trivial contribution to the overall problem, I'm best off defecting whether or not everyone else is cooperating, absent sanctions or other ways of internalizing to me the marginal cost of my causing carbon release.
With selfish players, a one-shot prisoner's dilemma has a simple Nash answer: everyone defects, so everyone loses relative to the case where everyone cooperated. While there may be real world mitigating solutions, such as repeat play with sanctions from the other players, wouldn't it be nice if people were willing to cooperate voluntarily, despite the selfish unilateral incentive to defect?
John answers: Not only would it be nice, but we do in fact frequently cooperate! So the Nashian view of people as always selfishly pursuing just their own welfare is inaccurate. Indeed, evolution has yielded in us a species that is unusually, and among the great apes uniquely, inclined towards cooperating with each other under suitable conditions (such as where we feel solidarity and trust towards fellow group members).
While sanctions for defection may play an important role in preserving cooperative non-Nash equilibria, they're not the only reason we cooperate. Nor is altruism the main reason, as it tends to be limited to a much smaller core group (such as immediate family) than the set of people with whom one is willing to cooperate.
John also finds it largely unhelpful to posit exotic preferences, such as a "warm glow" achieved subjectively by cooperating, as the explanation for the behavior. It seems to him both too hand-tailored (like Ptolemaic epicycles to reconcile celestial movements to data) and backwards, in the sense that I don't cooperate to get a warm glow, even if I in fact get one from cooperating. I cooperate because I believe it's right to do so.
While I see his point here, I think the "warm glow" framing is intellectually helpful for a particular reason. Even if I cooperate because I think it's right to do so, and that this differs from eating chocolate because I think it tastes good, real-world cooperators are likely to be trading off their desire to cooperate against other things they care about. Suppose I recycle because I think it's right to do so, not because the city might find out and fine me if I don't. I still would likely start recycling a lot less if, say, it took several hours a week.
John says that those who cooperate, rather than defect, in prisoner's dilemmas are generally being Kantians, as I'll discuss shortly. But while the paper we discussed yesterday doesn't discuss Kantianism that's limited by one's trading it off against selfish preferences, it does discuss conditional Kantians - that is, those whose willingness to behave cooperatively depends on how prevalent they believe cooperative behavior is in the relevant population. (See Figure 1, at page 33 of the paper, for a visual depiction of an equilibrium at which the % actually cooperating equals the % that are willing to cooperate at that level of cooperation.)
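The equilibrium idea can be sketched numerically. (The threshold distribution below is an illustrative assumption of mine, not taken from the paper's Figure 1: each person cooperates only if the believed prevalence of cooperation meets that person's personal threshold.)

```python
# A sketch of the conditional-Kantian equilibrium: the share actually
# cooperating equals the share willing to cooperate at that level.
# Thresholds are illustrative assumptions (a person with threshold 0.2
# cooperates if at least 20% of others are believed to cooperate).
thresholds = [0.0, 0.0, 0.1, 0.2, 0.35, 0.6, 0.7, 0.8, 0.9, 0.95]

def share_willing(believed_share):
    """Fraction of the population willing to cooperate, given the
    believed prevalence of cooperation."""
    return sum(t <= believed_share for t in thresholds) / len(thresholds)

# Iterate toward a fixed point: start from a guessed prevalence and let
# beliefs update to match actual behavior.
share = 0.3
for _ in range(100):
    share = share_willing(share)

# With these thresholds, the process settles at 50% cooperation,
# where share_willing(0.5) == 0.5.
assert share == share_willing(share)
print(f"equilibrium cooperation share: {share}")
```

Here the equilibrium is interior: half the population cooperates, and no one has reason to revise their behavior given that prevalence.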
I gather that philosophers have questioned this set-up, saying you aren't actually a Kantian if you're being conditional about it. While this is true as a matter of definition, given how they have chosen to define Kantians, it is intellectually unhelpful, and would appear to be an instance of narrow-minded and retrograde siloing (an inclination that I've encountered from other disciplines, in my project on literature and high-end inequality).
Returning to prisoner's dilemmas, a Kantian who faces one may ask: What is the decision that would be best if ALL of us made it? With the classic PD structure, the answer (of course) is Cooperate, don't defect. So the Kantian does what would be best if all did it, simply because this is the right thing to do, and not based on any actual presumed effect of one's own decision on what others will decide. So the Kantian (for example) recycles - and, I would think, also considers following a code with respect to carbon emissions that, if universalized, would properly curtail global warming and other adverse climate change.
But how does one identify the proper Kantian course of action? In a simple prisoner's dilemma set-up, it's obvious, since there are just two choices, Cooperate and Defect. Maybe one should think of recycling that way. As to global carbon abatement, it's not as clear, not to mention that the motivation to cooperate (even assuming one can determine how) will be weaker if one is among John's conditional Kantians.
John notes that many people do in fact recycle, beyond the point that sanctions and conventional incentives would seem to be inducing. There may also be a bit of Kantian behavior around carbon abatement. For example, while I am sure I do not do nearly enough in that regard, or as much as I would do if I were responding via standard incentives to a global carbon tax that had been set at an appropriate level, it is something I have in mind, and that induces me to disfavor what I feel is overly wasteful behavior. So yes, I am, upon reflection, somewhat of a Kantian, albeit a conditional one both in John's sense of being influenced by what I think others are doing, and my sense of trading off my preference for doing what is right in the Kantian sense against more selfish considerations.
In calling my own behavior Kantian, however imperfectly so, I am agreeing with John about the underlying psychology. Whether or not the categorical imperative is exactly the right formulation, the underlying sentiment of fairness does appear to me (from self-reflection) to have something to do with symmetry and consistency between what people do for themselves and expect from others. And in my case, but I suspect for many other people as well, a lot of it is driven by notions of reciprocity. I neither want to be a sucker, who cooperates when everyone else is defecting, nor a jerk, who defects when everyone else is cooperating. This gives psychological appeal to conditional Kantianism. And it's not just me, if tit-for-tat sentiments, embracing both the good and the bad, are more generally intuitive.
But what does all this have to do with tax? I'll address that in a separate post.
Kantian background to discussing John Roemer paper
In my previous post, I set at 80 percent the probability that, at yesterday's NYU Tax Policy Colloquium discussion of John Roemer's A Theory of Cooperation in Games With an Application to Market Socialism, I would "end up recounting the tale of the unfair bad grade (worst of my career) that I got as a freshman on a Kant paper." These subjective odds reflected that the story, which reading the paper had helped to return from long hibernation to the forefront of my mind, actually relates to issues of prime interest that the paper raises.
As it happens, I didn't end up recounting the story either in the AM class or at the PM public session, as it would have taken too much airtime. But I'll indulge myself by leading with it here, before turning more particularly to the paper in a follow-up post.
It's September or perhaps very early in October 1974, and I've recently arrived at Princeton University as a 17-year-old freshman. (I later ascertained that 94% of the class was older than me - this in an era when 18 was the legal drinking age and there was an on-campus student pub at which you'd be carded.)
Having both a competitive nature and a family background that placed intense value on "intelligence" and academic achievement, I was eager to rate myself against the field, as well as judge myself against demanding self-expectations. I also made a point from the start of taking classes in which there were frequent student papers, because I liked writing, along with the greater control over content that they offered relative to answering exam questions.
The first short paper I got back, presumably in history or political science, came out in accordance with my self-demands. But then came the second one, in Intro to Moral Philosophy. This was a lecture course taught by Thomas Scanlon, but my "preceptor" (as they called the leaders of the weekly small-group seminar meetings) was a graduate student in the philosophy department whose name I still recall.
This paper's subject was Kant, and more particularly the categorical imperative, which might be stated (per Wikipedia) as follows: "Act according to the maxim that you would wish all other rational people to follow, as if it were a universal law."
Intellectually unformed though I then was, I realized that, in interpreting it, one faces what I might today call a "level of generality" problem. The example I thought of was as follows: While it DOES mean, say, that I shouldn't lie because if everyone lied we would lose the ability to have the truth believed, it surely DOESN'T mean that I can't go to the Wawa Market on Alexander Street at 8 pm, on the ground that no one could go there if everyone tried to at the same time. So, in attempting to apply the categorical imperative, there is a broader issue, which may have no simple or obvious answer, regarding the level of generality at which one should state the maxims that one is testing for rational consistency.
To this day, I don't think that's bad for what was presumably a 2-page (or at the most 5-page) paper in an undergraduate Intro to Philosophy class. But I got it back with a grade of C+ and some sort of peremptory, even angry or at least disgusted / impatient, scrawl - which might as well have been in crayon - to the effect of: No, that's wrong, that's not what the categorical imperative says. No effort beyond that to engage or explain where or how the grad student thought I had gone wrong.
These days, when a student gets a poor grade and comes in to see me, I'll try to reconstruct the reasons for it (if it's an exam that doesn't have comments like a graded paper), but I'll also say very strongly if this appears to be among the student's concerns: This DOESN'T mean you're a bad student, or not good at law or at tax, etc. - it's just a thing that happened one time in terms of answering one question that might have been either well or poorly chosen (and then graded) by me.
But I didn't have the older me to tell me this at the time, nor did I go talk to the graduate student, towards whom I now felt hostile. (Plus, I knew it was generally bad form to complain about grades.) What I should have done, of course, is go see Scanlon - not to complain about the grade as such, but to get broader dialogue and feedback - yet the thought of doing this never occurred to me. I think I viewed him, through no fault of his own, as too far removed and remote from me.
Taking the whole thing far too seriously, I was shaken by the grade, which hurt my self-confidence (hence, I told no one about it at the time), even though I felt that it was misguided, unfair, perhaps biased for some specific reason that I couldn't fathom, and stupid. I also concluded that maybe I wasn't fated to do as well in philosophy classes as in other liberal arts subjects. I responded by working more diligently for the rest of that semester than I ever would again. (Once I had restored my self-confidence via my final fall 1974 results, I continued to take my schoolwork, for the most part, reasonably seriously, but I developed a tendency to prefer pursuing my own intellectual interests to those of a particular course or instructor.)
Anyway, the very interesting Roemer paper raises, among other questions, that of how good Kantians should frame the maxims that they are hypothetically universalizing in their minds. Depending on the context, the answer to this question is sometimes clear, but other times much less so.
Friday, February 08, 2019
Everyone has a favorite Kant story (or maybe not)
At next Tuesday's NYU Tax Policy Colloquium, we will be discussing with John Roemer a paper on Kantian cooperation and (inter alia) tax policy.
I see about an 80% chance that I will end up recounting the tale of the unfair bad grade (worst of my career) that I got as a freshman on a Kant paper. Not that I'm still brooding about it or anything!
Wednesday, February 06, 2019
NYU Tax Policy Colloquium, week 3: David Kamin's Effects of Capital Gains Rate Uncertainty on Realization
Yesterday at the Tax Policy Colloquium, my colleague David Kamin presented his paper (coauthored by Jason Oh of UCLA Law School), The Effects of Capital Gains Rate Uncertainty on Realization. The piece capably addresses important issues that are known to be there but have been under-explored in prior literature.
The paper’s starting point is that, while one would expect capital gain realizations (and elasticities) to depend, not just on current CG rates but also on expected future CG rates, work in the field, including revenue estimates at different CG rates, has tended to under-appreciate how great the effect might be. It’s well-understood that a capital gains rate change has both short-term and long-term revenue effects, where the former might involve rushing to market before a rate increase, or the initial release of pent-up demand to sell where there’s a rate cut, but the issue merits modeling further out than that, and this is where the paper aims to add insight, such as by offering multiple models of how this might play out.
One core question, of course, is how we might get a handle on expected future CG rates, since this is a question of what investors actually anticipate. While this is unlikely to be a function just of the current rate and/or historical rates, two possible benchmarks, before one starts thinking about, say, which party looks strong in the next election (and what their platforms say about capital gains or tax rates / taxation of investment more generally), would be as follows:
--Random walk from the current CG rate: Under this view, whatever the rate is now, so far as one can tell (leaving aside any particular political information like that noted above), it’s just as likely to go up or down.
--Historically bounded CG rate range: Under this view, we’ve learned from history the basic range within which we (or rather, investors) might expect CG rates to continue fluctuating. Say this runs from about 15% to 30%. So if the current rate is towards the high end or the low end of this range, there’d be some lean towards expecting it to revert towards or even past the middle.
Since the paper presents alternative models rather than advocating one particular approach, it’s open to and potentially consistent with both, but it gives particular attention to the latter.
Anyway, here are some of my main thoughts in response to the current draft:
1) New view with a twist – By reason of its focus on expectations regarding future rates, the paper brought to mind for me the so-called new view of dividend repatriations (and of corporate dividends in the domestic context). This was a positive association for me, as I consider the new view, properly understood – rather than improperly misunderstood, as sometimes it is – a truly central organizing idea.

Only, the issue discussed by Kamin and Oh is the new view with a twist, as I’ll explain below.
Okay, let’s start with the new view itself, as applied to the international / dividend repatriation context. Suppose that a resident multinational isn’t currently taxable domestically on certain foreign source income (FSI) that is earned through foreign subsidiaries. But let’s further suppose that, as under U.S. international tax law pre-2017 act, dividend or other repatriations of the FSI are taxable to the domestic parent. Then, at least in theory, the FSI is domestically taxable, but benefits from deferral, because the tax awaits the repatriations.
One might think it obvious that deferral lowers the present value of the parent’s domestic liability with regard to the FSI. After all, isn’t that what deferral within an income tax usually does? But the new view (dating from a 1985 paper by David Hartman that drew on earlier work, regarding classical double corporate income taxation, by the likes of David Bradford, Alan Auerbach, Mervyn King, and William Andrews) showed that under certain conditions this is false. More specifically, under those conditions, deferral does NOT lower the present value of the ultimate domestic tax liability.
Suppose we assume the following: taxable repatriation will take place at some point, the repatriation tax rate is fixed and will never change, and the after-tax rate of return that is available domestically equals that which is available abroad. Then deferral does not lower the present value of the ultimate domestic tax liability, with the further consequence that there is no tax lock-out: the system isn’t discouraging companies from repatriating their foreign profits. (Note of course that it is a separate question whether, say, publicly traded companies might be reluctant to repatriate by reason of accounting rules that have built up around the tax rule.)
To show this algebraically, suppose X is the amount of foreign profits that are waiting to be repatriated, r is the globally available after-tax rate of return, and t is the unchanging repatriation tax rate. Given the above assumptions, immediate repatriation, followed by domestic reinvestment of the funds for a period, leaves the taxpayer with X(1 – t)(1 + r). Repatriating at the end of the period leaves the taxpayer with X(1 + r)(1 – t).
One way of explaining the equivalence intuitively is that, while deferral lowers the present value of the tax that would be due if X were repatriated today, the amount to be repatriated, and hence the amount of the tax given t’s fixed character, keeps growing at the same interest rate. So it is crucial to the analysis that this is a one-and-done tax: Once FSI is repatriated, its further domestic growth is not subject to the repatriation tax, only to whatever domestic income tax there might happen to be for all domestic source income.
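The equivalence is easy to verify numerically. (The particular values of X, r, t, and the horizon below are illustrative assumptions of mine.) With t fixed and the same after-tax return available everywhere, repatriating now and repatriating later leave the taxpayer identically situated, while an anticipated rate cut makes waiting strictly better:

```python
# A numeric check of the new-view equivalence: X(1 - t)(1 + r)^n equals
# X(1 + r)^n (1 - t) when the repatriation rate t never changes.
X = 100.0   # foreign profits awaiting repatriation (illustrative)
r = 0.05    # globally available after-tax rate of return (illustrative)
t = 0.35    # unchanging repatriation tax rate (illustrative)
years = 10

# Repatriate now, pay the tax, reinvest domestically at r.
repatriate_now = X * (1 - t) * (1 + r) ** years
# Reinvest abroad at r, repatriate and pay the tax at the end.
repatriate_later = X * (1 + r) ** years * (1 - t)
assert abs(repatriate_now - repatriate_later) < 1e-9  # identical outcomes

# But if the rate is expected to fall (say to 10%), waiting strictly
# dominates - which is where lock-out comes from.
t_future = 0.10
assert X * (1 + r) ** years * (1 - t_future) > repatriate_now
```

The second assertion is the crux of the point made just below: lock-out requires that something about r or t be doing the work.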
When I referred above to misuses of the new view based on misunderstanding it, I had in mind treatment of it as a specific empirical claim – i.e., that there is no lockout under a deferral regime, because the assumptions about r and t are in fact (or even necessarily) true. But that is simply not the right way to view or use it. Indeed, that would be on a par with claiming that the Coase Theorem purports to show that it makes no difference who owns a particular legal entitlement, or that the Modigliani-Miller Theorem (MMT) shows that it makes no difference how one uses debt versus equity financing.
It’s become well-recognized that the Coase Theorem, by showing that it makes no difference who owns the entitlement under specified circumstances (i.e., zero transaction costs, and where pertinent to one’s use of it no relevant endowment / distributional effects), doesn’t show that the thing at issue doesn’t matter – rather, it shows where one would have to look in order for it to matter. Likewise, what MMT shows is that, for debt versus equity to matter, it must have something to do with its underlying assumptions – e.g., no bankruptcy or tax implications, and no effect on agency costs under asymmetric information. So once again, what one actually learns is where to look, in assessing whether the thing at issue matters.
In the case of the new view, we learn that, for expected tax burdens under a deferral system to influence repatriation behavior, something about r, or something about t, must be doing the dirty work. Hence, one now knows where to look. So one is applying the new view – not “refuting” it – when one observes that U.S. companies became extra-eager to avoid repatriations in light of (a) the 2004 foreign dividend tax holiday, (b) the clear pre-2017 prospect that the U.S. corporate rate would be lowered from its then 35% level, and (c) the clear pre-2017 prospect that the U.S. would adopt dividend exemption without fully replacing the forgiven future taxes via a deemed repatriation that accompanied its enactment.
Anyway, back to the new view with a twist in the Kamin-Oh paper. How does the issue here differ from that under the international new view? Here’s a short list:
a) The capital gains tax isn’t one-and-done if you sell a capital asset and invest the proceeds in a new capital asset. Instead, it starts accruing all over again. Hence, deferral does offer time value benefits to the taxpayer.
b) Given the step-up in basis at death under Code section 1014, the tax disappears if one is willing to wait long enough, rather than being inevitable at some point.
c) It’s not as clear in CG tax policy as it was in international tax policy that the current rate was likely to go down, rather than up.
d) Suppose the rate is about to change, and you want to beat it to market. In the CG realm, there may be times when this is difficult. E.g., suppose you have a unique business asset that’s hard to value and for which there is a thin market of potential buyers. By contrast, in international, generating a taxable dividend from the foreign sub to the U.S. parent (which need not be funded out of loose cash already on hand) should generally not have been that hard.
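The time-value point in (a) can be illustrated with a quick calculation (all numbers are illustrative assumptions of mine): selling and reinvesting each year restarts the capital gains tax on the new asset, while holding defers everything to a single levy at the end, so the holder comes out ahead.

```python
# Deferral has real time value when the CG tax restarts on each new
# asset, unlike the one-and-done repatriation tax. Illustrative numbers.
basis = 100.0
r = 0.07      # annual pre-tax appreciation (assumed)
t = 0.25      # capital gains rate, assumed constant
years = 20

# Strategy 1: hold for the full period, pay CG tax once on sale.
value = basis * (1 + r) ** years
hold = value - t * (value - basis)

# Strategy 2: sell and reinvest every year, paying CG tax on each
# year's gain (the tax starts accruing anew on each new asset).
wealth = basis
for _ in range(years):
    grown = wealth * (1 + r)
    wealth = grown - t * (grown - wealth)  # tax on this year's gain only

assert hold > wealth  # deferral (holding) comes out ahead
print(f"hold: {hold:.2f}  vs  annual churn: {wealth:.2f}")
```

The churning strategy compounds at the after-tax rate r(1 – t) each year, whereas the holder compounds at the full pre-tax rate r and pays tax only once, which is exactly why deferral is already tax-favored here.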
2) Uncertainty versus optionality - This last difference brings to central stage an important distinction between two related concepts. One is uncertainty, insofar as taxpayers don’t and even can’t know what future capital gains rates are going to be. The other is optionality, insofar as taxpayers can deliberately plan to realize taxable gains more in low-rate periods and less in high-rate periods, including by selling just before a rate increase or just after a rate cut.
Uncertainty is bad for a risk-averse taxpayer. But optionality can only be good. An option that you possess can’t be worth less than zero. The option to wait for a lower future CG rate is worth more than otherwise if rates are volatile rather than stable. And it’s worth more than otherwise if the rate is more likely to go down than up. Thus, if we believe that future CG rates most likely will stay within the historically observed 15 to 30 percent range, the option is worth more, all else equal, if the current CG rate is in the neighborhood of 30 percent than 15 percent.
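One rough way to see this is a small simulation. (The bounded random-walk rate process and all numbers below are my own illustrative assumptions, not the paper's models; time value and holding-period effects are ignored for simplicity.) A taxpayer who can choose the realization year effectively gets the lowest rate observed over the horizon, and that minimum is lower in expectation when rates are volatile, and when the current rate sits at the top of the historical range:

```python
import random

random.seed(42)

def simulate_rates(start, vol, years=10, lo=0.15, hi=0.30):
    """Random-walk CG rate, bounded within the historically observed
    15-30 percent range (an assumption for illustration)."""
    rate, path = start, []
    for _ in range(years):
        rate = min(hi, max(lo, rate + random.choice([-vol, vol])))
        path.append(rate)
    return path

def expected_min_rate(start, vol, trials=10000):
    """Expected best (lowest) realization rate available to a taxpayer
    who can time the sale over the horizon."""
    return sum(min(simulate_rates(start, vol)) for _ in range(trials)) / trials

# Volatile rates make the best available rate lower in expectation
# than near-stable rates do...
assert expected_min_rate(0.30, 0.03) < expected_min_rate(0.30, 0.005)
# ...and the rate saving from waiting is bigger starting at the top of
# the range (0.30) than at the bottom (0.15).
assert 0.30 - expected_min_rate(0.30, 0.03) > 0.15 - expected_min_rate(0.15, 0.03)
```

This is just the option-value logic in miniature: volatility and a high starting rate both raise the value of the taxpayer's timing option, and the option is never worth less than zero.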
Still, because optionality is bound to be an important part of the picture for many or most real world holders of appreciated capital assets, and because an option can’t be worth less than zero, I think of capital gains rate uncertainty as likely, in the main, to put a thumb on the scales in favor of deferral (which, again, is already tax-favored in this setting), not against it.