
scipy.stats distributions #1197

Merged: 41 commits, Jan 11, 2023
Commits (41)
b40ce36
Merge branch 'scipy_generator' into scipy_dist
alanlujan91 Dec 16, 2022
f06b338
Merge branch 'scipy_generator' into scipy_dist
alanlujan91 Dec 20, 2022
2b99a4f
Merge branch 'scipy_generator' into scipy_dist
alanlujan91 Dec 21, 2022
a8d996d
Merge remote-tracking branch 'upstream/master' into scipy_dist
alanlujan91 Dec 21, 2022
f42a288
Merge remote-tracking branch 'upstream/master' into scipy_dist
alanlujan91 Dec 28, 2022
1d2148e
Merge remote-tracking branch 'upstream/master' into scipy_dist
alanlujan91 Dec 30, 2022
e50ee7e
reorganize distributions
alanlujan91 Jan 1, 2023
85c2d12
update Distribution as seed manager
alanlujan91 Jan 1, 2023
b279ec0
implement `ContinuousFrozenDistribution`
alanlujan91 Jan 1, 2023
76a5762
update `Normal`
alanlujan91 Jan 1, 2023
fcc1116
update `Lognormal`
alanlujan91 Jan 1, 2023
1f50afd
update `Uniform`
alanlujan91 Jan 1, 2023
0789215
update `Weibull`
alanlujan91 Jan 1, 2023
c21ed13
update `MVNormal`
alanlujan91 Jan 1, 2023
40ed24c
implement `DiscreteFrozenDistribution`
alanlujan91 Jan 1, 2023
ae92315
update `Bernoulli`
alanlujan91 Jan 1, 2023
4cc6f72
update `DiscreteDistribution`
alanlujan91 Jan 1, 2023
abc64e9
update `DiscreteDistributionLabeled`
alanlujan91 Jan 1, 2023
dc5f46f
update `IndexDistribution`
alanlujan91 Jan 1, 2023
7888fb8
update `MarkovProcess`
alanlujan91 Jan 1, 2023
24cd88c
fix notebook bug
alanlujan91 Jan 2, 2023
f44bfa0
abstract approx method
alanlujan91 Jan 2, 2023
9a65ffd
Merge remote-tracking branch 'upstream/master' into scipy_dist
alanlujan91 Jan 5, 2023
dc6d7a4
fix bugs to pass tests
alanlujan91 Jan 5, 2023
43ed27b
update notebooks
alanlujan91 Jan 5, 2023
78d0b1c
add discretize method
alanlujan91 Jan 6, 2023
74ed094
implement `discretize` method and `limit` attribute
alanlujan91 Jan 7, 2023
ea4645b
change `approx` to `discretize` in codebase files
alanlujan91 Jan 7, 2023
3809d64
fix tests
alanlujan91 Jan 7, 2023
072c6a7
update examples
alanlujan91 Jan 7, 2023
41cbb01
update notebooks
alanlujan91 Jan 7, 2023
37a2307
update failing notebook
alanlujan91 Jan 7, 2023
539722f
use 'expand_dims`
alanlujan91 Jan 7, 2023
69e932e
add documentation
alanlujan91 Jan 7, 2023
1149b52
set method parameter on each discretize call
alanlujan91 Jan 7, 2023
41d2bae
fix error
alanlujan91 Jan 7, 2023
7b22040
implement hermite approx to lognormal
alanlujan91 Jan 8, 2023
54870bf
Merge remote-tracking branch 'upstream/master' into scipy_dist
alanlujan91 Jan 9, 2023
dd10e24
update failing notebook
alanlujan91 Jan 9, 2023
e2d62a1
address Seb's comments
alanlujan91 Jan 11, 2023
05edaf7
Update CHANGELOG.md
alanlujan91 Jan 11, 2023
1 change: 1 addition & 0 deletions Documentation/CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -36,6 +36,7 @@ Release Date: TBD
* Updates the `numpy` random generator from `RandomState` to `Generator`. [#1193](https://github.com/econ-ark/HARK/pull/1193)
* Turns the income and income+return distributions into `DiscreteDistributionLabeled` objects. [#1189](https://github.com/econ-ark/HARK/pull/1189)
* Creates `UtilityFuncCRRA` which is an object oriented utility function with a coefficient of constant relative risk aversion and includes derivatives and inverses. Also creates `UtilityFuncCobbDouglas`, `UtilityFuncCobbDouglasCRRA`, and `UtilityFuncConstElastSubs`. [#1168](https://github.com/econ-ark/HARK/pull/1168)
* Reorganizes `HARK.distribution`. All distributions now inherit all features from `scipy.stats`. New `ContinuousFrozenDistribution` and `DiscreteFrozenDistribution` to use `scipy.stats` distributions not yet implemented in HARK. New `Distribution.discretize(N, method = "***")` replaces `Distribution.approx(N)`. New `DiscreteDistribution.limit` attribute describes continuous origin and discretization method. [#1197](https://github.com/econ-ark/HARK/pull/1197).

### Minor Changes

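The changelog entry above replaces `Distribution.approx(N)` with `Distribution.discretize(N, method="…")`. As a rough illustration of what `method="equiprobable"` computes for a mean-one lognormal shock, here is a stdlib-only sketch (the helper name is hypothetical; the real implementation lives in `HARK.distribution` and returns a `DiscreteDistribution`):

```python
from statistics import NormalDist

def equiprobable_mean_one_lognormal(sigma, n):
    """Discretize a mean-one lognormal into n equal-probability atoms.

    The shock is exp(Z) with Z ~ N(-sigma**2 / 2, sigma), so E[exp(Z)] = 1.
    Each atom is the conditional mean of the distribution on one of the n
    equal-probability bins, which preserves the mean of 1 exactly.
    """
    std = NormalDist()
    # standard-normal z-values at the bin edges (quantiles i/n)
    edges = [float("-inf")] + [std.inv_cdf(i / n) for i in range(1, n)] + [float("inf")]
    # For a mean-one lognormal, the conditional mean on bin i is
    # n * (Phi(z_{i+1} - sigma) - Phi(z_i - sigma)); the sigma shift comes
    # from the lognormal partial expectation.
    atoms = [
        n * (std.cdf(edges[i + 1] - sigma) - std.cdf(edges[i] - sigma))
        for i in range(n)
    ]
    pmv = [1.0 / n] * n  # equiprobable weights
    return pmv, atoms
```

The atom formula telescopes, so the weighted mean of the discretized points is exactly 1 regardless of `n`, which is why the equiprobable method is the default drop-in for the old `approx` behavior in the diffs below.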
24 changes: 12 additions & 12 deletions HARK/ConsumptionSaving/ConsAggShockModel.py
@@ -2026,11 +2026,11 @@ def make_AggShkDstn(self):
-------
None
"""
self.TranShkAggDstn = MeanOneLogNormal(sigma=self.TranShkAggStd).approx(
N=self.TranShkAggCount
self.TranShkAggDstn = MeanOneLogNormal(sigma=self.TranShkAggStd).discretize(
N=self.TranShkAggCount, method="equiprobable"
)
self.PermShkAggDstn = MeanOneLogNormal(sigma=self.PermShkAggStd).approx(
N=self.PermShkAggCount
self.PermShkAggDstn = MeanOneLogNormal(sigma=self.PermShkAggStd).discretize(
N=self.PermShkAggCount, method="equiprobable"
)
self.AggShkDstn = combine_indep_dstns(self.PermShkAggDstn, self.TranShkAggDstn)

@@ -2280,11 +2280,11 @@ def make_AggShkDstn(self):
-------
None
"""
self.TranShkAggDstn = MeanOneLogNormal(sigma=self.TranShkAggStd).approx(
N=self.TranShkAggCount
self.TranShkAggDstn = MeanOneLogNormal(sigma=self.TranShkAggStd).discretize(
N=self.TranShkAggCount, method="equiprobable"
)
self.PermShkAggDstn = MeanOneLogNormal(sigma=self.PermShkAggStd).approx(
N=self.PermShkAggCount
self.PermShkAggDstn = MeanOneLogNormal(sigma=self.PermShkAggStd).discretize(
N=self.PermShkAggCount, method="equiprobable"
)
self.AggShkDstn = combine_indep_dstns(self.PermShkAggDstn, self.TranShkAggDstn)

@@ -2532,13 +2532,13 @@ def make_AggShkDstn(self):

for i in range(StateCount):
TranShkAggDstn.append(
MeanOneLogNormal(sigma=self.TranShkAggStd[i]).approx(
N=self.TranShkAggCount
MeanOneLogNormal(sigma=self.TranShkAggStd[i]).discretize(
N=self.TranShkAggCount, method="equiprobable"
)
)
PermShkAggDstn.append(
MeanOneLogNormal(sigma=self.PermShkAggStd[i]).approx(
N=self.PermShkAggCount
MeanOneLogNormal(sigma=self.PermShkAggStd[i]).discretize(
N=self.PermShkAggCount, method="equiprobable"
)
)
AggShkDstn.append(
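The hunks above feed the discretized permanent and transitory shocks to `combine_indep_dstns`. Conceptually that is a cross product of independent discrete distributions; a toy stdlib sketch (helper name hypothetical, not HARK's API, which also handles seeds and labels):

```python
from itertools import product

def combine_indep(pmv_a, atoms_a, pmv_b, atoms_b):
    """Cross product of two independent discrete distributions.

    Each joint outcome pairs one atom from each marginal; independence
    makes the joint probability the product of the marginal weights.
    """
    pmv = [pa * pb for pa, pb in product(pmv_a, pmv_b)]
    atoms = [(xa, xb) for xa, xb in product(atoms_a, atoms_b)]
    return pmv, atoms
```

With `n` permanent and `m` transitory atoms this yields an `n * m`-point joint distribution whose marginals match the inputs.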
28 changes: 18 additions & 10 deletions HARK/ConsumptionSaving/ConsIndShockModel.py
@@ -933,7 +933,7 @@ def use_points_for_interpolation(self, cNrm, mNrm, interpolator):
cFuncNowUnc = interpolator(mNrm, cNrm)

# Combine the constrained and unconstrained functions into the true consumption function
# breakpoint() # LowerEnvelope should only be used when BoroCnstArt is true
# LowerEnvelope should only be used when BoroCnstArt is true
cFuncNow = LowerEnvelope(cFuncNowUnc, self.cFuncNowCnst, nan_bool=False)

# Make the marginal value function and the marginal marginal value function
@@ -3024,14 +3024,22 @@ def make_euler_error_func(self, mMax=100, approx_inc_dstn=True):
if approx_inc_dstn:
IncShkDstn = self.IncShkDstn[0]
else:
TranShkDstn = MeanOneLogNormal(sigma=self.TranShkStd[0]).approx(
N=200, tail_N=50, tail_order=1.3, tail_bound=[0.05, 0.95]
TranShkDstn = MeanOneLogNormal(sigma=self.TranShkStd[0]).discretize(
N=200,
method="equiprobable",
tail_N=50,
tail_order=1.3,
tail_bound=[0.05, 0.95],
)
TranShkDstn = add_discrete_outcome_constant_mean(
TranShkDstn, self.UnempPrb, self.IncUnemp
)
PermShkDstn = MeanOneLogNormal(sigma=self.PermShkStd[0]).approx(
N=200, tail_N=50, tail_order=1.3, tail_bound=[0.05, 0.95]
PermShkDstn = MeanOneLogNormal(sigma=self.PermShkStd[0]).discretize(
N=200,
method="equiprobable",
tail_N=50,
tail_order=1.3,
tail_bound=[0.05, 0.95],
)
IncShkDstn = combine_indep_dstns(PermShkDstn, TranShkDstn)

@@ -3502,8 +3510,8 @@ class LognormPermIncShk(DiscreteDistribution):

def __init__(self, sigma, n_approx, neutral_measure=False, seed=0):
# Construct an auxiliary discretized normal
logn_approx = MeanOneLogNormal(sigma).approx(
n_approx if sigma > 0.0 else 1, tail_N=0
logn_approx = MeanOneLogNormal(sigma).discretize(
n_approx if sigma > 0.0 else 1, method="equiprobable", tail_N=0
)
# Change the pmv if necessary
if neutral_measure:
@@ -3538,8 +3546,8 @@ class MixtureTranIncShk(DiscreteDistribution):
"""

def __init__(self, sigma, UnempPrb, IncUnemp, n_approx, seed=0):
dstn_approx = MeanOneLogNormal(sigma).approx(
n_approx if sigma > 0.0 else 1, tail_N=0
dstn_approx = MeanOneLogNormal(sigma).discretize(
n_approx if sigma > 0.0 else 1, method="equiprobable", tail_N=0
)
if UnempPrb > 0.0:
dstn_approx = add_discrete_outcome_constant_mean(
@@ -3612,7 +3620,7 @@ def __init__(
name="Joint distribution of permanent and transitory shocks to income",
var_names=["PermShk", "TranShk"],
pmv=joint_dstn.pmv,
data=joint_dstn.atoms,
atoms=joint_dstn.atoms,
seed=seed,
)

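The last hunk in this file also reflects the constructor rename from `data=` to `atoms=` in `DiscreteDistributionLabeled`. A toy stand-in showing the expected shapes of `pmv`, `atoms`, and `var_names` (a sketch, not the real class, which wraps an `xarray` dataset):

```python
class LabeledDiscrete:
    """Toy stand-in for DiscreteDistributionLabeled's constructor shape.

    pmv:       probability of each joint outcome (length k)
    atoms:     one row per variable, one column per outcome (n_vars x k)
    var_names: one label per row of atoms
    """

    def __init__(self, pmv, atoms, var_names):
        assert len(atoms) == len(var_names)
        assert all(len(row) == len(pmv) for row in atoms)
        self.pmv = pmv
        # map each variable name to its row of atoms
        self.variables = dict(zip(var_names, atoms))

    def expected(self, name):
        # expectation of one labeled variable under the pmv weights
        return sum(p * x for p, x in zip(self.pmv, self.variables[name]))
```

The `variables` mapping mirrors the `IncShkDstn[t].variables.keys()` access that replaces `dataset.data_vars.keys()` in ConsRiskyAssetModel.py below, though the real attribute is an `xarray` construct rather than a plain dict.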
7 changes: 5 additions & 2 deletions HARK/ConsumptionSaving/ConsMedModel.py
@@ -628,8 +628,11 @@ def update_med_shock_process(self):
MedShkStdNow = self.MedShkStd[t]
MedShkDstnNow = Lognormal(
mu=np.log(MedShkAvgNow) - 0.5 * MedShkStdNow**2, sigma=MedShkStdNow
).approx(
N=self.MedShkCount, tail_N=self.MedShkCountTail, tail_bound=[0, 0.9]
).discretize(
N=self.MedShkCount,
method="equiprobable",
tail_N=self.MedShkCountTail,
tail_bound=[0, 0.9],
)
MedShkDstnNow = add_discrete_outcome_constant_mean(
MedShkDstnNow, 0.0, 0.0, sort=True
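`add_discrete_outcome_constant_mean` follows several of the `discretize` calls in these diffs; it inserts an extra outcome `x` with probability `p` (e.g. a zero medical shock, or unemployment income) and rescales the remaining atoms so the overall mean is unchanged. A sketch of that behavior under those assumptions (the real HARK helper also handles sorting and labeled distributions):

```python
def add_outcome_constant_mean(pmv, atoms, p, x):
    """Prepend outcome x with probability p, keeping the mean fixed.

    The surviving atoms are rescaled by (m - p*x) / ((1 - p) * m), where m
    is the original mean, so the new distribution's mean still equals m.
    """
    m = sum(q * a for q, a in zip(pmv, atoms))  # original mean
    scale = (m - p * x) / ((1.0 - p) * m)
    new_pmv = [p] + [(1.0 - p) * q for q in pmv]
    new_atoms = [x] + [scale * a for a in atoms]
    return new_pmv, new_atoms
```

Setting `p = 0` or `x = m` leaves the atoms unscaled, which matches the degenerate cases in the diffs (e.g. `add_discrete_outcome_constant_mean(MedShkDstnNow, 0.0, 0.0, sort=True)`).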
12 changes: 8 additions & 4 deletions HARK/ConsumptionSaving/ConsPortfolioFrameModel.py
@@ -13,8 +13,8 @@
PortfolioConsumerType,
init_portfolio,
)
from HARK.distribution import Bernoulli # Random draws for simulating agents
from HARK.distribution import (
Bernoulli,
IndexDistribution,
Lognormal,
MeanOneLogNormal,
Expand Down Expand Up @@ -133,7 +133,9 @@ def birth_pLvlNow(self, N):
"mean": init_portfolio["PermGroFac"],
"std": init_portfolio["PermShkStd"],
},
).approx(init_portfolio["PermShkCount"], tail_N=0),
).discretize(
init_portfolio["PermShkCount"], method="equiprobable", tail_N=0
),
),
Frame(
("TranShk"),
@@ -144,7 +146,9 @@ def birth_pLvlNow(self, N):
transition=add_discrete_outcome_constant_mean(
IndexDistribution(
MeanOneLogNormal, {"sigma": init_portfolio["TranShkStd"]}
).approx(init_portfolio["TranShkCount"], tail_N=0),
).discretize(
init_portfolio["TranShkCount"], method="equiprobable", tail_N=0
),
p=init_portfolio["UnempPrb"],
x=init_portfolio["IncUnemp"],
),
@@ -159,7 +163,7 @@ def birth_pLvlNow(self, N):
"std": init_portfolio["RiskyStd"],
}
# seed=self.RNG.integers(0, 2 ** 31 - 1) : TODO: Seed logic
).approx(init_portfolio["RiskyCount"]),
).discretize(init_portfolio["RiskyCount"], method="equiprobable"),
aggregate=True,
),
Frame(
3 changes: 2 additions & 1 deletion HARK/ConsumptionSaving/ConsPrefShockModel.py
@@ -124,8 +124,9 @@ def update_pref_shock_process(self):
PrefShkStd = self.PrefShkStd[t]
new_dstn = MeanOneLogNormal(
sigma=PrefShkStd, seed=self.RNG.integers(0, 2**31 - 1)
).approx(
).discretize(
N=self.PrefShkCount,
method="equiprobable",
tail_N=self.PrefShk_tail_N,
)
PrefShkDstn.append(new_dstn)
10 changes: 5 additions & 5 deletions HARK/ConsumptionSaving/ConsRiskyAssetModel.py
@@ -11,7 +11,7 @@
from scipy.optimize import minimize_scalar, root_scalar

from HARK import make_one_period_oo_solver
from HARK.ConsumptionSaving.ConsIndShockModel import (
from HARK.ConsumptionSaving.ConsIndShockModel import ( # PortfolioConsumerType inherits from it; Baseline dictionary to build on
ConsIndShockSolver,
ConsumerSolution,
IndShockConsumerType,
@@ -133,7 def update_RiskyDstn(self):
Lognormal.from_mean_std,
{"mean": self.RiskyAvg, "std": self.RiskyStd},
seed=self.RNG.integers(0, 2**31 - 1),
).approx(self.RiskyCount)
).discretize(self.RiskyCount, method="equiprobable")

self.add_to_time_vary("RiskyDstn")

@@ -142,7 def update_RiskyDstn(self):
else:
self.RiskyDstn = Lognormal.from_mean_std(
self.RiskyAvg, self.RiskyStd
).approx(self.RiskyCount)
).discretize(self.RiskyCount, method="equiprobable")
self.add_to_time_inv("RiskyDstn")

def update_ShockDstn(self):
@@ -174,13 +174,13 @@ def update_ShockDstn(self):
# Names of the variables (hedging for the unlikely case that in
# some index of IncShkDstn variables are in a switched order)
names_list = [
list(self.IncShkDstn[t].dataset.data_vars.keys()) + ["Risky"]
list(self.IncShkDstn[t].variables.keys()) + ["Risky"]
for t in range(self.T_cycle)
]

conditional = {
"pmv": [x.pmv for x in dstn_list],
"data": [x.atoms for x in dstn_list],
"atoms": [x.atoms for x in dstn_list],
"var_names": names_list,
}

108 changes: 67 additions & 41 deletions HARK/ConsumptionSaving/tests/test_ConsMarkovModel.py
@@ -7,21 +7,21 @@
from HARK.ConsumptionSaving.ConsMarkovModel import MarkovConsumerType
from HARK.distribution import (
DiscreteDistribution,
DiscreteDistributionLabeled,
MeanOneLogNormal,
combine_indep_dstns,
DiscreteDistributionLabeled,
)


class test_ConsMarkovSolver(unittest.TestCase):
def setUp(self):

# Define the Markov transition matrix for serially correlated unemployment
unemp_length = 5 # Average length of unemployment spell
urate_good = 0.05 # Unemployment rate when economy is in good state
urate_bad = 0.12 # Unemployment rate when economy is in bad state
bust_prob = 0.01 # Probability of economy switching from good to bad
recession_length = 20 # Average length of bad state
unemp_length = 5 # Average length of unemployment spell
urate_good = 0.05 # Unemployment rate when economy is in good state
urate_bad = 0.12 # Unemployment rate when economy is in bad state
bust_prob = 0.01 # Probability of economy switching from good to bad
recession_length = 20 # Average length of bad state
p_reemploy = 1.0 / unemp_length
p_unemploy_good = p_reemploy * urate_good / (1 - urate_good)
p_unemploy_bad = p_reemploy * urate_bad / (1 - urate_bad)
@@ -68,12 +68,12 @@ def setUp(self):
# Replace the default (lognormal) income distribution with a custom one
employed_income_dist = DiscreteDistributionLabeled(
pmv=np.ones(1),
data=np.array([[1.0], [1.0]]),
atoms=np.array([[1.0], [1.0]]),
var_names=["PermShk", "TranShk"],
) # Definitely get income
unemployed_income_dist = DiscreteDistributionLabeled(
pmv=np.ones(1),
data=np.array([[1.0], [0.0]]),
atoms=np.array([[1.0], [0.0]]),
var_names=["PermShk", "TranShk"],
) # Definitely don't
self.model.IncShkDstn = [
@@ -126,42 +126,68 @@ def test_simulation(self):

Markov_Dict = {
# Parameters shared with the perfect foresight model
"CRRA": 2, # Coefficient of relative risk aversion
"Rfree": np.array([1.05**0.25] * 2), # Interest factor on assets
"DiscFac": 0.985, # Intertemporal discount factor
"LivPrb": [np.array([0.99375] * 2)], # Survival probability
"PermGroFac": [np.array([1.00] * 2)], # Permanent income growth factor
# Coefficient of relative risk aversion
"CRRA": 2,
"Rfree": np.array([1.05**0.25] * 2), # Interest factor on assets
"DiscFac": 0.985, # Intertemporal discount factor
"LivPrb": [np.array([0.99375] * 2)], # Survival probability
# Permanent income growth factor
"PermGroFac": [np.array([1.00] * 2)],
# Parameters that specify the income distribution over the lifecycle
"PermShkStd": [0.06], # Standard deviation of log permanent shocks to income
"PermShkCount": 7, # Number of points in discrete approximation to permanent income shocks
"TranShkStd": [0.3], # Standard deviation of log transitory shocks to income
"TranShkCount": 7, # Number of points in discrete approximation to transitory income shocks
# Standard deviation of log permanent shocks to income
"PermShkStd": [0.06],
# Number of points in discrete approximation to permanent income shocks
"PermShkCount": 7,
# Standard deviation of log transitory shocks to income
"TranShkStd": [0.3],
# Number of points in discrete approximation to transitory income shocks
"TranShkCount": 7,
"UnempPrb": 0.00, # .08 # Probability of unemployment while working
"IncUnemp": 0.0, # Unemployment benefits replacement rate
"UnempPrbRet": 0.0005, # Probability of "unemployment" while retired
"IncUnempRet": 0.0, # "Unemployment" benefits when retired
"T_retire": 0.0, # Period of retirement (0 --> no retirement)
"tax_rate": 0.0, # Flat income tax rate (legacy parameter, will be removed in future)
# Unemployment benefits replacement rate
"IncUnemp": 0.0,
# Probability of "unemployment" while retired
"UnempPrbRet": 0.0005,
# "Unemployment" benefits when retired
"IncUnempRet": 0.0,
# Period of retirement (0 --> no retirement)
"T_retire": 0.0,
# Flat income tax rate (legacy parameter, will be removed in future)
"tax_rate": 0.0,
# Parameters for constructing the "assets above minimum" grid
"aXtraMin": 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax": 60, # Maximum end-of-period "assets above minimum" value
"aXtraCount": 60, # Number of points in the base grid of "assets above minimum"
"aXtraNestFac": 4, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraExtra": [None], # Additional values to add to aXtraGrid
# Minimum end-of-period "assets above minimum" value
"aXtraMin": 0.001,
# Maximum end-of-period "assets above minimum" value
"aXtraMax": 60,
# Number of points in the base grid of "assets above minimum"
"aXtraCount": 60,
# Exponential nesting factor when constructing "assets above minimum" grid
"aXtraNestFac": 4,
# Additional values to add to aXtraGrid
"aXtraExtra": [None],
# A few other parameters
"BoroCnstArt": 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"vFuncBool": True, # Whether to calculate the value function during solution
"CubicBool": False, # Preference shocks currently only compatible with linear cFunc
"T_cycle": 1, # Number of periods in the cycle for this agent type
# Artificial borrowing constraint; imposed minimum level of end-of period assets
"BoroCnstArt": 0.0,
# Whether to calculate the value function during solution
"vFuncBool": True,
# Preference shocks currently only compatible with linear cFunc
"CubicBool": False,
# Number of periods in the cycle for this agent type
"T_cycle": 1,
# Parameters only used in simulation
"AgentCount": 100000, # Number of agents of this type
"T_sim": 200, # Number of periods to simulate
"aNrmInitMean": np.log(0.8) - (0.5**2) / 2, # Mean of log initial assets
"aNrmInitStd": 0.5, # Standard deviation of log initial assets
"pLvlInitMean": 0.0, # Mean of log initial permanent income
"pLvlInitStd": 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg": 1.0, # Aggregate permanent income growth factor
"T_age": None, # Age after which simulated agents are automatically killed
"AgentCount": 100000, # Number of agents of this type
"T_sim": 200, # Number of periods to simulate
# Mean of log initial assets
"aNrmInitMean": np.log(0.8) - (0.5**2) / 2,
# Standard deviation of log initial assets
"aNrmInitStd": 0.5,
# Mean of log initial permanent income
"pLvlInitMean": 0.0,
# Standard deviation of log initial permanent income
"pLvlInitStd": 0.0,
# Aggregate permanent income growth factor
"PermGroFacAgg": 1.0,
# Age after which simulated agents are automatically killed
"T_age": None,
# markov array
"MrkvArray": [np.array([[0.984, 0.856], [0.0152, 0.14328]]).T],
}
@@ -174,11 +200,11 @@ def main_test(self):

TranShkDstn_e = MeanOneLogNormal(
Markov_vFuncBool_example.TranShkStd[0], 123
).approx(Markov_vFuncBool_example.TranShkCount)
).discretize(Markov_vFuncBool_example.TranShkCount, method="equiprobable")
TranShkDstn_u = DiscreteDistribution(np.ones(1), np.ones(1) * 0.2)
PermShkDstn = MeanOneLogNormal(
Markov_vFuncBool_example.PermShkStd[0], 123
).approx(Markov_vFuncBool_example.PermShkCount)
).discretize(Markov_vFuncBool_example.PermShkCount, method="equiprobable")

# employed Income shock distribution
employed_IncShkDstn = combine_indep_dstns(PermShkDstn, TranShkDstn_e)