Add test that compares tbi and std behavioral-response estimates #1840
Conversation
Codecov Report
```diff
@@          Coverage Diff           @@
##          master   #1840   +/-  ##
======================================
  Coverage     100%    100%
======================================
  Files          37      37
  Lines        3103    3103
======================================
  Hits         3103    3103
```

Continue to review the full report at Codecov.
@martinholmer If this test is based on current master, then things have changed a lot since version 0.14.2. Most importantly, current law in current master is TCJA, whereas in version 0.14.2 current law was still pre-TCJA. If I'm not mistaken, the tbi interface is only capable of doing calculations based on current law.
@GoFroggyRun said about pull request #1840:
So what? The main point is that you reported in #1827 that Tax-Calculator produced different results depending on whether you used standard (non-tbi) function calls or a tbi function call. The questions I posed in my original #1840 comment are still unanswered. Can you help by answering them?
@martinholmer asked:
Why doesn't my answer answer your question?
Given these, why would you expect your test in #1840 to replicate the bug report in #1827?
I would be interested to know whether replicating the exact same baseline and reform from #1827 on the tip of the master branch also produces nonsensical results. That seems more relevant than knowing whether there was/is a bug in 0.14.1. |
This pull request contains a new test that has been added to the `test_tbi.py` file and is marked `pre_release`, `tbi_vs_std_behavior`, and `requires_pufcsv`. This test assumes `_BE_sub` equals 0.25 and compares the aggregate tax revenues generated by standard Python Tax-Calculator programming with those generated by calling the `tbi.run_nth_year_tax_calc_model()` function. The motivation for adding this test is the discussion in issue #1827. Because the results generated by the tbi function call are "fuzzed" for PUF privacy reasons, there is no expectation that those results will be identical to the results generated by standard Tax-Calculator calls.
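The comparison criterion the test applies can be sketched as follows. This is a minimal illustration, not the actual test code; the function name and the handling of a zero baseline are my own assumptions, and only the 0.2-percent tolerance comes from the discussion below.

```python
def within_fuzz_tolerance(std_revenue, tbi_revenue, rel_tol=0.002):
    """Return True if the tbi aggregate-revenue estimate is within
    rel_tol (0.2 percent) of the standard (non-tbi) estimate.
    A zero standard estimate is (arbitrarily) required to match exactly.
    """
    if std_revenue == 0.0:
        return tbi_revenue == 0.0
    return abs(tbi_revenue - std_revenue) / abs(std_revenue) <= rel_tol


# Hypothetical aggregate revenue estimates (billions of dollars):
std_est = 1234.5
tbi_est = 1236.0  # differs from std_est by roughly 0.12 percent
print(within_fuzz_tolerance(std_est, tbi_est))
```

The point of the helper is that fuzzed tbi results are expected to agree with standard results only up to a small relative tolerance, never exactly.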
The new test simulates a massive tax reduction caused by capping the regular-income and pass-through tax rates at no higher than 25 percent and raising the personal exemption from zero to $1,000 beginning in 2019. This reform causes a substantial reduction in marginal tax rates and hence a substantial behavioral response.
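For concreteness, a reform like the one just described could be expressed as a Tax-Calculator-style reform dictionary along the following lines. The parameter names use the pre-2.0 underscore convention (`_II_rt*`, `_PT_rt*`, `_II_em`), and the exact set of brackets touched is my own illustration, not necessarily the test's actual reform.

```python
# Illustrative reform dictionary: cap the upper regular-income and
# pass-through rates at 25 percent and set the personal exemption
# to $1,000, all beginning in 2019.
reform = {
    2019: {
        '_II_rt5': [0.25],  # regular-income bracket-5 rate
        '_II_rt6': [0.25],  # regular-income bracket-6 rate
        '_II_rt7': [0.25],  # regular-income top-bracket rate
        '_PT_rt5': [0.25],  # pass-through bracket-5 rate
        '_PT_rt6': [0.25],  # pass-through bracket-6 rate
        '_PT_rt7': [0.25],  # pass-through top-bracket rate
        '_II_em': [1000],   # personal exemption amount in dollars
    }
}

# Sanity check: no rate in the reform exceeds 25 percent.
rates = [val[0] for name, val in reform[2019].items()
         if name.endswith(('rt5', 'rt6', 'rt7'))]
print(max(rates))
```

A dictionary of this shape would be passed to `Policy.implement_reform()` in standard (non-tbi) usage, or embedded in the `user_mods` given to the tbi interface.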
However, despite the discussion in #1827, there are no significant differences (that is, differences of more than 0.2 percent in aggregate tax revenues) between the results generated using the tbi function call and the results generated using standard (that is, non-tbi) function calls. Here is how I ran the test (using code at the tip of the master branch) and here is what I got:
Why do these test results seem different from the results reported in #1827?
Perhaps I've made a mistake in writing the test.
Or perhaps the code in 0.14.2 is different from the code at the tip of the master branch.
Or perhaps mistakes were made in the work reported in #1827.
Does anybody have any ideas about this?
@MattHJensen @GoFroggyRun