add fitlog
#552
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #552      +/-   ##
==========================================
- Coverage   96.25%   96.05%   -0.21%
==========================================
  Files          27       27
  Lines        2431     2456      +25
==========================================
+ Hits         2340     2359      +19
- Misses         91       97       +6

Continue to review the full report at Codecov.
@dmbates I took the opportunity to add a plot showing the different convergence patterns of Nelder-Mead and BOBYQA: https://juliastats.org/MixedModels.jl/previews/PR552/optimization/#Modifying-the-optimization-process
@@ -13,7 +13,11 @@ include("modelcache.jl")

 @testset "contra" begin
     contra = dataset(:contra)
-    gm0 = fit(MixedModel, only(gfms[:contra]), contra, Bernoulli(); fast=true, progress=false)
+    thin = 5
+    gm0 = fit(MixedModel, only(gfms[:contra]), contra, Bernoulli(); fast=true, progress=false, thin)
That syntax leads me astray. I keep forgetting that `..., thin)` expands to `..., thin=thin)`.
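For reference, the shorthand in question is Julia's implicit keyword-argument syntax: a bare name after the semicolon in a call passes a keyword argument of the same name. A minimal sketch (the `describe` helper is hypothetical, not part of MixedModels.jl):

```julia
# Implicit keyword-argument shorthand:
# `f(; thin)` is sugar for `f(; thin=thin)` when a local
# variable named `thin` is in scope at the call site.
describe(; thin) = "thin = $thin"   # hypothetical helper for illustration

thin = 5
a = describe(; thin)        # shorthand form
b = describe(; thin=thin)   # explicit form; identical call
println(a == b)
```

This is why the diff above can write `fast=true, progress=false, thin` without an explicit `thin=5` at the call site.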
Thanks for making these changes. I think it is much improved.
@dmbates Are you happy with tagging a new release with this and constrained sigma? That locks us into this API until 5.0 lands.
Can you hold off tagging a new release until I work on the leverage changes
a bit more? They won't be breaking changes - just a different way of
evaluating the same results.
On Thu, Aug 19, 2021 at 11:37 AM Phillip Alday wrote:
> @dmbates Are you happy with tagging a new release with this and constrained sigma? That locks us into this API until 5.0 lands.
Yep, easy.
Addresses (but does not close) #549 (because we want to change `OptSummary` later to be smarter about this). Closes #549. There is a minor inefficiency in that `initial`, `final`, `finitial`, `fmin` store information duplicated in the `fitlog`. Not duplicating this information (by cheating and using properties that redirect to the relevant field entries) turns out to make the bookkeeping a lot messier, so I've let this inefficiency remain (which also keeps better compatibility). I use `Vector` inside the log instead of `SVector` because the changing size of the parameter vector in GLMM creates problems.

It also seems that I might have found another case where JuliaLang/julia#40048 isn't as fixed as we had hoped -- I get a big stack-overflow type-inference dump as a Heisenbug when stepping through the `contra` testset, BUT it continues afterwards with the correct result.
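A rough sketch of the bookkeeping described above. The container layout here is an assumption for illustration, not the actual `OptSummary` definition; only the design choice (plain `Vector{Float64}` parameter copies rather than `SVector`, with the first/last entries duplicating `initial`/`fmin`) follows the description:

```julia
# Hypothetical fitlog sketch: each logged evaluation appends a
# (parameter vector, objective) pair. Plain Vector{Float64} is used
# for the parameters because their length can change mid-fit
# (e.g. between the fast and full GLMM fits), which rules out SVector.
struct FitLog
    fitlog::Vector{Tuple{Vector{Float64},Float64}}
end
FitLog() = FitLog(Tuple{Vector{Float64},Float64}[])

# copy(θ) so later in-place updates to θ don't corrupt the log
log!(l::FitLog, θ, obj) = push!(l.fitlog, (copy(θ), obj))

# These duplicate information also stored as separate fields on the
# summary object -- the "minor inefficiency" accepted above:
initial(l::FitLog) = first(l.fitlog)[1]
fmin(l::FitLog)    = last(l.fitlog)[2]
```

With `thin=5` as in the diff above, only every fifth evaluation would be passed to `log!`, keeping the log small for long optimizations.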