
Replace loop with vectorized calculation of optimal share #1079

Merged — 4 commits, Jun 7, 2023

Conversation

alanlujan91 (Member)

There's long been a comment on ConsPortfolioModel that says:

This loop can probably be eliminated, but it's such a small step that it won't speed things up much.

This is my attempt at replacing the loop with vectorized calculations to test just how much faster this could be. Working out how to do this in vectorized form can provide insights into how to do it in higher dimensions, where loops might be much slower than in this instance.

  • Update CHANGELOG.md with major/minor changes.

replace optimization loop with vectorized calculation to speed up process
@alanlujan91 alanlujan91 changed the title Update ConsPortfolioModel.py Replace loop with vectorized calculation of optimal share Nov 11, 2021
Comment on lines +737 to +749
# For each a-grid row, flag adjacent share pairs where the FOC changes
# sign from non-negative to non-positive (i.e., brackets a zero)
crossing = np.logical_and(FOC_s[:, 1:] <= 0.0, FOC_s[:, :-1] >= 0.0)
share_idx = np.argmax(crossing, axis=1)  # first bracketing pair per row
a_idx = np.arange(self.aNrmCount)
bot_s = self.ShareGrid[share_idx]        # shares bracketing the FOC zero
top_s = self.ShareGrid[share_idx + 1]
bot_f = FOC_s[a_idx, share_idx]          # FOC values at bracket endpoints
top_f = FOC_s[a_idx, share_idx + 1]
bot_c = self.EndOfPrddvdaNvrs[a_idx, share_idx]      # consumption at endpoints
top_c = self.EndOfPrddvdaNvrs[a_idx, share_idx + 1]
# Linear-interpolation weight on the upper endpoint of each bracket
alpha = 1.0 - top_f / (top_f - bot_f)

self.Share_now = (1.0 - alpha) * bot_s + alpha * top_s
self.cNrmAdj_now = (1.0 - alpha) * bot_c + alpha * top_c
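Stripped of the class context, the technique in this diff can be sketched as a standalone function. The names and the toy FOC in the test below are illustrative, not taken from HARK:

```python
import numpy as np

def interpolate_foc_zeros(foc, grid):
    """For each row of `foc` (first-order-condition values evaluated on
    `grid`), find the first adjacent pair where the FOC crosses from
    non-negative to non-positive, then linearly interpolate the zero.
    A minimal sketch of the vectorized logic in this PR."""
    # Boolean mask of bracketing pairs, one column narrower than foc
    crossing = np.logical_and(foc[:, 1:] <= 0.0, foc[:, :-1] >= 0.0)
    idx = np.argmax(crossing, axis=1)       # first True per row
    rows = np.arange(foc.shape[0])
    bot_f = foc[rows, idx]                  # FOC at bracket endpoints
    top_f = foc[rows, idx + 1]
    # Weight on the upper endpoint so the interpolated FOC equals zero
    alpha = 1.0 - top_f / (top_f - bot_f)
    return (1.0 - alpha) * grid[idx] + alpha * grid[idx + 1]
```

Because `argmax` on a boolean array returns the index of the first `True`, each row picks its first sign change without any Python-level loop over the a-grid.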
alanlujan91 (Member Author)

alpha and other variables here can be saved and added to save_points.

alanlujan91 (Member Author)

Good candidate for benchmark #1131

@alanlujan91 alanlujan91 requested a review from Mv77 June 4, 2023 13:58
codecov bot commented Jun 4, 2023

Codecov Report

Patch coverage: 100.00% and project coverage change: -0.01 ⚠️

Comparison is base (90178db) 72.56% compared to head (7815d06) 72.55%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1079      +/-   ##
==========================================
- Coverage   72.56%   72.55%   -0.01%     
==========================================
  Files          78       78              
  Lines       13012    13009       -3     
==========================================
- Hits         9442     9439       -3     
  Misses       3570     3570              
Impacted Files Coverage Δ
HARK/ConsumptionSaving/ConsPortfolioModel.py 89.81% <100.00%> (-0.07%) ⬇️


Mv77 (Contributor) left a comment

Looks good.

This is not about this PR, but I don't really like how the optimal share and consumption are found.

I think it'd be clearer (and maybe more accurate?) to do the stage thing where the optimal share associated with every a is found first, and the a is decided later.

But that's for another day. Not what this PR is about.

Happy to have it merged.

Mv77 (Contributor) commented Jun 7, 2023

Here are some thoughts.

  1. The share grid is kind of inefficient. We evaluate at, say, 11 points to find the pair at which the FOC crosses 0, then approximate the FOC linearly and find its zero. This uses 11 evaluations and in the end we are only guaranteed to bracket the optimal share within $0.1$. If we instead used something like bisection, 11 evaluations would get us to a bound of $1/2^{9}$ if my math is correct.

  2. We are finding optimal consumption as a convex combination of consumption at the share-interval endpoints. And the combination weights are the same as that of the estimated share. This might not be very accurate? It might be worth finding the exact consumption implied by the EoPdvda implicit in the estimated share.
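The bisection alternative in point 1 could be vectorized across the a-grid as well, so it need not reintroduce a Python loop. A minimal sketch, assuming the FOC is decreasing in the share and changes sign on the bracket (function and parameter names here are illustrative, not from HARK):

```python
import numpy as np

def bisect_share(foc, n, lo=0.0, hi=1.0, n_iter=9):
    """Vectorized bisection for the optimal risky share at each of `n`
    a-grid points.  `foc` maps an array of shares (one per a-grid point)
    to FOC values and is assumed decreasing in the share, with a sign
    change on [lo, hi].  After n_iter steps the bracket width is
    (hi - lo) / 2**n_iter."""
    lo = np.full(n, lo)
    hi = np.full(n, hi)
    for _ in range(n_iter):                 # each step halves every bracket
        mid = 0.5 * (lo + hi)
        pos = foc(mid) > 0.0                # FOC still positive: root above mid
        lo = np.where(pos, mid, lo)
        hi = np.where(pos, hi, mid)
    return 0.5 * (lo + hi)                  # midpoint of the final bracket
```

With `n_iter=9` and the two endpoint evaluations this matches the 11-evaluation budget in point 1, shrinking the bracket to $1/2^{9}$ versus the grid's fixed $0.1$.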

alanlujan91 (Member Author)

I agree. There are some shortcuts here that are meant to be "close enough" and maybe pedagogical. But maybe we should have the "best" portfolio choice solver we can come up with and the more pedagogical one as separate models.

@alanlujan91 alanlujan91 merged commit 37134b9 into econ-ark:master Jun 7, 2023