Wednesday, February 20, 2019

NYU Tax Policy Colloquium, week 5: Susan Morse's "Government-to-Robot Enforcement"

"I'm sorry, Dave, I'm afraid you can't deduct that."

2001's HAL, of course, wasn't a robot, if one's definition of the term requires creature-like embodiment. But he was enough like us to be capable of going mad. (For that matter, I once inadvertently gave an otherwise well-adjusted pet iguana a seemingly neurotic aversion to going into his water bowl when there were people around - he instinctually expected to be safe when he was in there, so was startled to find I'd just take him out of the cage anyway. I concluded that at least fairly intelligent animals - iguanas, surprisingly, qualify! - can develop neurotic aversions. But I digress.)

Yesterday's colloquium guest, Susan Morse, presented an interesting paper that addresses how the rise of automation and centralization in legal compliance may transform the character of enforcement. These days, lots of tax filing involves the use of software - for example, Turbo Tax, H&R Block, TaxAct, Tax Slayer, Liberty Tax, and proprietary products that, say, a leading accounting firm might deploy with respect to its clients.

These programs, though hardly on the more sentient side of AI (unless one agrees with David Chalmers about thermostats), might nonetheless have their own versions of "I'm sorry, Dave, I'm afraid you can't deduct that." An example that I've heard about, from a few years back, concerns Turbo Tax and how to allocate expenses between business and personal use for a second home that one also rents out. Should one allocate by days of actual business versus personal use, or over a 365-day base? Apparently there are arguments for both approaches, but Turbo Tax either heavily steered people one way, or actually "refused" to do it the other way.

What's more, they might offer an opportunity for centralized enforcement - for example, for the IRS's directly collecting from Turbo Tax the taxes underpaid on Turbo Tax filings, at least where this reflected an error in the Turbo Tax program. The amount might be estimated, rather than calculated precisely as to each particular Turbo Tax-filed return.

In this scenario, if we assume that Turbo Tax isn't going to try to get the money from individual filers (and note its current practice of holding customers harmless for extra taxes paid by reason of its errors or oversights), then in effect it will add expected IRS payouts into its prices, making it somewhat like an insurer.

The paper's goal is not to advocate approaches of this kind, but rather to say that their increasing feasibility means we should think about them, and about the broader opportunities and challenges presented by rising automation and centralization, both in tax filing and elsewhere.

I myself have tended to see Turbo Tax, which I used until recently, as little more than a glorified calculator and form filler-outer. For example, it allows one to spare oneself the enormous nuisance of computing AMT liability (a real issue for New Yorkers pre-2017 act), or of having forgotten to include a small item until after one had already made computations based on adjusted gross income. So Intuit would really have had to screw up something basic, in order for Turbo Tax to have gotten my federal income tax liability wrong.

Nonetheless, especially if these programs become more HAL-like, but even just today when they offer a data and collection source, they can become important loci for federal enforcement and collection efforts. The paper notes, however, that there might be issues both of capture (Intuit manipulates government policymakers to favor its interests) and of reverse capture (Intuit, despite incentives to please customers that push the other way, decides on "I'm sorry, Dave" by reason of its relationship with the tax authorities).

Here's an example that occurs to me - although I suspect it's not actually true. New York State created certain charities, gifts to which qualify for an 85% credit against state income tax. Thus, if at the margin I can't deduct any further state income taxes on my federal return, giving a dollar to such a charity costs me only 15 cents after state tax, but leaves me 22 cents ahead after federal tax, if a full $1 charitable deduction is permissible and my federal marginal rate is 37%.

The Treasury has taken the view that, under these circumstances, my permissible federal deduction would only be 15 cents (i.e., the contribution minus the value of the state credit) - even though claiming simple state charitable deductions is not thus treated. But the Treasury might be wrong - i.e., it depends on what the courts ultimately decide, if the issue is litigated. Suppose, however, that Turbo Tax, which has you list the names of the charities to which you have given deductible contributions, in effect said "I'm sorry, Dave" once you had typed in the requisite name. This would in effect be reverse capture (although I doubt that Turbo Tax actually works this way, and it wouldn't be hard for taxpayers to think of simple workarounds). It might impede taking the deductions, and then fighting the IRS in court if necessary, while using Turbo Tax.
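To make the arithmetic concrete, here is a small Python sketch of the numbers above (the function and its name are my own illustration, not anything in the paper or the Treasury guidance):

```python
def net_cost(gift, state_credit_rate, federal_rate, deductible_share):
    """After-tax cost of a charitable gift carrying a state tax credit,
    for a donor whose further state income taxes are not federally
    deductible at the margin (e.g., because of the SALT cap)."""
    state_saving = gift * state_credit_rate
    federal_saving = gift * deductible_share * federal_rate
    return gift - state_saving - federal_saving

# Full $1 federal deduction: 1 - 0.85 - 0.37 = -0.22, a 22-cent net gain.
full_deduction = net_cost(1.00, 0.85, 0.37, 1.00)

# Treasury's view: only the 15 cents not covered by the credit is
# deductible, so the gift carries a small positive net cost
# (1 - 0.85 - 0.37 * 0.15, about 9.45 cents).
treasury_view = net_cost(1.00, 0.85, 0.37, 0.15)
```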

One of the paper's important themes concerns the relationship between (1) finding someone (such as tax software providers) to serve an insurance function, and (2) being able to improve taxpayer incentives, at least in one particular dimension, by having no-fault penalties for underpayment of tax. (I wrote about this issue here.)

To illustrate the reasons for and problems with no-fault penalties, consider the following toy example: Suppose I can either owe $50 of tax with certainty, or else engage in a transaction the correct treatment of which is legally uncertain. If I do it and the issue is then resolved, there's a 50% chance that I'll owe zero, and a 50% chance that I'll owe $100. So my expected liability is $50 either way - assuming that the latter transaction will definitely be tested.

In reality, however, what we call the "audit lottery" means that I can do the transaction, report zero liability, and be highly likely never to have it examined. Suppose that the chance of its being examined was as high as 50%. Even under that, probably quite unrealistic, scenario, my expected tax liability, if I do the transaction, is only $25. 50% of the time it's never challenged, and 50% of the time when it's challenged I win.

This is actually a pervasive issue in tax planning, inducing taxpayers to favor taking uncertain and even highly aggressive positions because they might never be challenged. The key here is that one generally won't be penalized if one loses, if the position one took was sufficiently reasonable. A 50% chance of being correct would easily meet that threshold.

The solution to this incentive problem was stated long ago in work by Gary Becker concerning efficient crime deterrence. Suppose a crime one might commit has a $100 social cost. With certainty of detection, the Becker model advocates a $100 fine (leading, of course, to the notion that there are efficient crimes, e.g., one that I commit anyway because the benefit to me is $105). But then, Becker notes, there is the issue of uncertainty of detection. If there's only a 50% chance that I would be caught, then the penalty, from this standpoint, ought to be $200.

Ditto for the above tax planning / audit lottery example. Given the 50% chance of detection, it all comes out right (in terms of ex ante incentives, ignoring risk) if we say that I have to pay $200, rather than $100, in the case where I am audited and lose. This is a no-fault or strict liability penalty, ramping up the amount I owe in order to reverse out the incentive effects of the audit lottery.
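The toy example's arithmetic, with and without the Becker-style multiplier, can be sketched as follows (an illustration of my own, not anything from the paper):

```python
def expected_liability(tax_if_lose, p_audit, p_lose, multiplier=1.0):
    """Expected tax owed on an uncertain position, given the chance of
    audit, the chance of losing if audited, and an optional penalty
    multiplier applied to the amount owed on an audited loss."""
    return p_audit * p_lose * tax_if_lose * multiplier

# Audit lottery alone: 50% audit chance, 50% chance of losing if
# audited, so expected liability is only $25, not $50.
plain = expected_liability(100, 0.50, 0.50)

# Gross up by 1 / p(detection) = 2, so an audited loss costs $200 and
# expected liability returns to $50, matching the certain alternative.
becker = expected_liability(100, 0.50, 0.50, 1 / 0.50)
```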

But what about the fact that I apparently did nothing wrong, yet am being penalized? Surely it's not unreasonable for me to take a position that has a 50% chance of being correct. And we don't currently require that taxpayers flag all uncertain positions in their tax returns - partly because the IRS would never be able to check more than a small percentage anyway. While I'm not being sent to jail here, there is an issue of risk. But before turning to that, consider one more path to the same result: Steve Shavell's well-known work concerning negligence versus strict liability.

Shavell doesn't have a multiplier for uncertainty of detection in his simplest model (although I'm sure he deals with it thoroughly somewhere). But he notes that strict liability produces more efficient outcomes than negligence where only the party that would face the liability is making "activity level" choices. E.g., if drivers don't have to pay for the accidents they cause unless they're negligent, they'll drive too much, by reason of disregarding the cost of non-negligent accidents. (It's more complicated, of course, if, say, the pedestrians they would hit are also deciding on their own activity levels.)

Returning to tax uncertainty, the problem with a negligence standard for underpayment penalties is that it leads to an excessive "activity level" with respect to taking uncertain positions that might be wrong yet remain unaudited. Strict liability is therefore more efficient than negligence at this margin, unless we add to the picture the equivalent of activity-level-varying pedestrians. For example, we might say that the government's losing revenues from uncertainty plus < 100% audit rates gives it an incentive to try to reduce uncertainty. But I don't personally find that a very persuasive counter-argument in this setting.

Okay, on to the problems with strict liability tax penalties. Let's suppose in the above toy example that my chance of being meaningfully audited on this issue was only 5%. Then the optimal Becker-Shavell penalty is twenty times the underpayment, or $2,000. Add a few zeroes and, say, a $10,000 tax underpayment (as determined ex post) leaves me owing $200,000. Or, if the chance of a meaningful review was 1%, the short straw leaves me owing $1 million - even though we may feel I did nothing unreasonable. (Again, the disclosure option, while in special circumstances required under existing federal income tax law, can't go very far given the costliness of review - which is itself a further complicating factor for the analysis.)
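The scaling here is just the underpayment divided by the audit probability; a minimal sketch, using the post's numbers (the function is my own illustration):

```python
def becker_penalty(underpayment, p_audit):
    """Detection-adjusted no-fault penalty: the underpayment grossed up
    by 1 / p(audit), so expected liability is unaffected by the audit
    lottery."""
    return underpayment / p_audit

# $10,000 underpayment at a 5% audit chance: a $200,000 penalty.
at_5_percent = becker_penalty(10_000, 0.05)

# At a 1% chance of meaningful review: a $1 million penalty.
at_1_percent = becker_penalty(10_000, 0.01)
```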

If I am risk-averse, the burden this imposes on me may yield deadweight loss (other than insofar as we like its deterring me). From an ex post rather than ex ante perspective, it leads to particular outcomes that we may find unpalatable.

A further, but lesser, problem is that it may be hard to compute audit probabilities accurately. Note, however, that requiring negligence is equivalent to presuming a 100% audit chance in cases where negligence would not be found. From that perspective, multipliers that are still "too low" but greater than 1.0 at least do something to improve incentives around uncertainty and the audit lottery. So the risk problem arguably weighs more heavily against strict liability than the difficulty of getting the multiplier just right.

This is where insurance comes in. The above problem goes away if taxpayers with uncertain positions can and do transfer the risk, for an actuarially fair price, to counterparties that can price and diversify it properly. But tax insurance is not widely available, and is hard to price. Hence the potential appeal of recruiting entities (such as Turbo Tax) that sit in a centralized position, if doing so doesn't create overly bad problems such as adverse selection or moral hazard. (Adverse selection is inevitably an issue, however, if not all taxpayers use entities that can be recruited to serve, in effect, as insurers.)

One further issue, in this regard, on which the paper touches is the feasibility of a system that would, say, charge Turbo Tax for user underpayments that reflected factual inaccuracies in the data that one entered. Can we even imagine a system in which, if I used Turbo Tax and left out a $10,000 cash payment that someone had made to me, it was liable?

The answer would seem to be no, but actually it's a bit more complicated. Consider car insurance. The insurer will typically pay for accident costs even if they're completely the driver's fault, reflecting wildly inappropriate behavior (such as driving drunk, running red lights, or texting while driving). In other words, the insurer loses if the driver is negligent, even though negligence is under the driver's control or at least influence.

How is such insurance coverage feasible? Well, it certainly creates moral hazard, but there are ways of addressing it, such as literal coinsurance (such as from deductibles and copays), implicit coinsurance (such as from collateral psychic or other accident costs to the driver that aren't covered), and future years' insurance rates that will now presumably be higher. So it's feasible to have at least some car insurance for negligent drivers, despite the issue of moral hazard.

By extension, we could conceivably have a model in which Turbo Tax was liable, at least in part, even with respect to factual errors made by its customers, so long as analogous mechanisms sufficiently addressed moral hazard. But this still of course leaves the problem of mandatorily drafting software providers to serve as insurers by imposing no-fault collective liability, if strict liability doesn't apply to taxpayers who file without using such providers.

Returning to the paper, it doesn't purport to resolve any of these questions, but rather to begin laying out and addressing them. This particular piece will be appearing shortly in the University of Illinois Law Review, but I'll be looking forward to Morse's further work in the area.

6 comments:

Susan Morse said...

A lot of fun and usefulness for me in talking about this idea yesterday from a number of perspectives, including strict liability/insurance and distributive justice. Quite a few things left to work through. Thanks to all participants.

DM Hasen said...

On the cost internalization point and penalties, there has been a lot of work by Calfee and Craswell, Raskolnikov and others. Logue has a piece in which he shows that strict liability even for legal standards is superior to negligence liability. In some draft work I show that, if one is going to remain with a negligence standard (as seems likely), the tradeoff between having more detection or a more rule-like legal regime depends heavily on the cost to the regulated party of compliance. If the cost is low, slightly increasing detection probability tends to be effective; if the cost is high, it is usually better to move to a more rule-like legal regime.

Daniel Shaviro said...

Thanks, David. That sounds very plausible.
