GSOC2017 Generalized Negative Binomial (NB-P) model #3832
Conversation
counts = np.atleast_2d(np.arange(0, np.max(self.endog)+1))
mu = self.predict(params, exog=exog, exposure=exposure,
                  offset=offset)[:,None]
return nbinom.pmf(counts, mu, params[-1], self.parametrization)
I'm pretty sure this is wrong; nbinom uses a different parameterization
for the standard negative binomial, see
https://gist.github.com/josef-pkt/c4f5d0f315c0ce4e6ecc65f0512e8296 In [22]
and #106 (comment)
we need an extra method to convert the parameterization, e.g. convert_params
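A minimal sketch of what such a converter could look like, assuming the NB-P variance form Var(y) = mu + alpha * mu**p (the function name and signature here are illustrative, not the final API):

```python
from scipy.stats import nbinom

def convert_params(mu, alpha, p):
    # Map NB-P parameters (mu, alpha, p) to scipy's nbinom (size, prob).
    # Assumes the NB-P variance is mu + alpha * mu**p; for p=2 this
    # reduces to the usual NB2 conversion size = 1/alpha.
    size = 1. / alpha * mu**(2 - p)
    prob = size / (size + mu)
    return size, prob

# sanity check: the converted distribution reproduces mean and variance
size, prob = convert_params(mu=5.0, alpha=0.5, p=2)
print(nbinom.mean(size, prob))  # 5.0
print(nbinom.var(size, prob))   # 17.5 == 5 + 0.5 * 5**2
```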
Log(exposure) is added to the linear prediction with coefficient
equal to 1.
""" + base._missing_param_doc}
def __init__(self, endog, exog, p=1, offset=None,
empty line before `def`
Looks good based on a quick read (excluding unit tests). predict "prob" will need unit tests.
This pull request introduces 3 alerts - view on lgtm.com
Comment posted by lgtm.com
@@ -1772,6 +1773,244 @@ def test_predict_prob(self):
        assert_allclose(chi2[:], (0.64628806058715882, 0.98578597726324468),
                        rtol=0.01)


class TestNegativeBinomial_pNB2Newton(CheckModelResults):
    @classmethod
    def setupClass(cls):
master now uses pytest instead of nosetest;
this needs to be setup_class now, instead of camel case
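An illustrative pytest-style test class after the rename (the class body here is made up for demonstration):

```python
class TestNegativeBinomialPNB2(object):
    @classmethod
    def setup_class(cls):
        # pytest discovers setup_class; nose used the camel-case setupClass
        cls.expected_params = [0.5, 1.2]

    def test_params(self):
        assert self.expected_params == [0.5, 1.2]
```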
class TestNegativeBinomial_pNB1Newton(CheckModelResults):
    @classmethod
    def setupClass(cls):
same here, and other places below
I made inline comments. (I will be offline for a few hours, but then I can check if there are other problems)
@josef-pkt Can you review this PR first? I want to finish it first and then implement ZINB, Truncated NB and some Hurdle models with NB-P. Personal note:
def loglike(self, params):
    """
    Loglikelihood of Negative Binomial model
    Parameters
empty line before Parameters, also in other docstrings
    return llf

def score_obs(self, params):
    if self._transparams:
missing docstring
    return np.concatenate((dparams, np.atleast_2d(dalpha).T),
                          axis=1)

def score(self, params):
docstrings
Some docstrings are missing. To be more pep-8 compatible, it is better to rename the class. Otherwise, I think we should merge this so you can rebase on it for the other models.
The implementation is similar to NegativeBinomial. However, there might be problems with NegativeBinomial, and then similarly here. E.g. I think that transparams might not be handled correctly and that there are problems with small alpha #3863
I would rather merge this NBP PR into master and you can rebase your other branches on master.
@josef-pkt NBP has the same problem with alpha=0 as current NB. I didn't try to fix it.
I don't know yet if we can fix that or how to fix it. For sure it will not be easy.
I added a few more comments mainly for changes in docstrings.
I only spot checked the code, test coverage seems to be good.
Then you can rebase and I will merge it.
@@ -2704,6 +2706,345 @@ def fit_regularized(self, start_params=None, method='l1',

        return L1NegativeBinomialResultsWrapper(discretefit)


class NegativeBinomialP(CountModel):
    __doc__ = """
    Negative Binomial model for count data
better to distinguish from NegativeBinomial:
"Generalized Negative Binomial (NB-P) model for count data"
endog : array
    A reference to the endogenous response variable
exog : array
    A reference to the exogenous design.
"p" parameter is missing AFAICS
def loglikeobs(self, params):
    """
    Loglikelihood for observations of Negative Binomial model
I think we should add "NB-P" in all docstrings (first line), e.g.
"Loglikelihood for observations of Negative Binomial NB-P model"
----------
params : array-like
    The parameters of the model.
Returns
add empty line before section headers
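Putting the two docstring suggestions together, a numpydoc-style docstring might look like this (illustrative sketch, not the merged code):

```python
def loglikeobs(params):
    """
    Loglikelihood for observations of Negative Binomial NB-P model

    Parameters
    ----------
    params : array-like
        The parameters of the model.

    Returns
    -------
    loglike : ndarray
        Log likelihood for each observation.
    """
    raise NotImplementedError  # body omitted in this sketch
```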
    self._transparams = True
else:
    if use_transparams:
        warnings.warn("Paramter \"use_transparams\" is ignored",
you can use single quotes to avoid the backslashes, e.g.
warnings.warn('Parameter "use_transparams" is ignored',
Note the misspelled "Paramter" in the code above (missing e).
    discretefit = L1NegativeBinomialResults(self, cntfit)
else:
    raise TypeError(
        "argument method == %s, which is not handled" % method)
I didn't check if this is the general pattern in fit_regularized.
If we raise an exception based on arguments, then it should raise immediately before computations are done.
move
if method not in ...:
raise TypeError
to the top of the method
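The suggested reordering, sketched on a stripped-down stand-in (the accepted method names here are assumptions):

```python
def fit_regularized(method='l1'):
    # validate the argument first, before any expensive computation
    if method not in ('l1', 'l1_cvxopt_cp'):
        raise TypeError(
            "argument method == %s, which is not handled" % method)
    # ... the actual regularized fit would run here ...
    return method
```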
             which='mean'):
    """
    Predict response variable of a count model given exogenous variables.
    Notes
Parameters and Returns sections are missing.
empty line before section headers
# NOTE: The bse predictions are much closer to stata
def test_bse(self):
    assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_3)
in general: use assert_allclose with an appropriate choice of atol and/or rtol.
assert_almost_equal is much less flexible and comes from code that was written before numpy had assert_allclose.
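For example (the numbers here are made up):

```python
import numpy as np
from numpy.testing import assert_allclose

bse_model = np.array([0.10012, 2.5003])
bse_stata = np.array([0.10000, 2.5000])

# rtol bounds the relative error; atol handles values near zero
assert_allclose(bse_model, bse_stata, rtol=5e-3, atol=1e-8)
```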
This PR introduces an implementation of the Generalized Negative Binomial (NB-P) model.
This model includes:
Status - merged #3874