Accumulating common random effects #206
Yes, this is a generic problem, and mark-recapture is a good way to illustrate it. You cannot ... --Hans
So if I understand correctly, you are saying that you would write functions in R for the likelihood and gradient to be used in optim, which would loop over individuals, calling f$fn() and f$gr() and summing the results after weighting. If so, this could be made to work over the unique capture histories instead and could be completely general. Can you pass the switch value to the cpp program through f$fn and f$gr, or do you need to re-run MakeADFun with the switch value included in the data? Essentially you would be taking the loop over individuals out of the cpp function and doing it in R. Is my interpretation correct? regards --jeff
Yes, this is what I meant; apply the weighting on the R side. I do not see an easy way ...
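To make the approach in the last two comments concrete, here is a minimal sketch of the R-side weighting. It is not from the thread: the data structures (uniq_hist, freq, par_init) and the DLL name cjs_model are hypothetical, and it assumes one MakeADFun object is built per unique capture history, each with its own individual random effect u, since the data are fixed when MakeADFun is called and cannot be passed through f$fn()/f$gr().

```r
library(TMB)

## Hypothetical inputs (for illustration only):
##   uniq_hist - list of data lists, one per unique capture history
##   freq      - vector giving the number of animals sharing each history
##   par_init  - list of starting values for the model parameters
## One TMB object per unique history, each integrating out its own
## random effect "u", so obj$fn() returns that history's
## Laplace-approximated marginal negative log-likelihood.
objs <- lapply(uniq_hist, function(d)
  MakeADFun(data = d, parameters = par_init, random = "u",
            DLL = "cjs_model", silent = TRUE))

## Weight the *marginal* (already integrated) nll and gradient by frequency.
nll_weighted <- function(theta) {
  sum(freq * sapply(objs, function(o) as.numeric(o$fn(theta))))
}
gr_weighted <- function(theta) {
  grads <- lapply(objs, function(o) as.vector(o$gr(theta)))
  Reduce(`+`, Map(`*`, freq, grads))
}

fit <- optim(objs[[1]]$par, nll_weighted, gr_weighted, method = "BFGS")
```

Whether this pays off depends on how many unique capture histories there are, since each call to o$fn() triggers its own inner optimization over u.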
I work with capture-recapture data sets, and in most data sets there are many individuals that are released and never resighted/recaptured. In fitting models with individual (animal) random effects, the random-effect contribution is identical for each of these animals. Thus what you would like to do is pass a single capture history (e.g. 10000000) together with a frequency (the number of animals with that capture history). This works quite well with fixed effects by multiplying the log-likelihood contribution by the frequency. However, if you do the same with the dnorm(ui,0,1,true) term, to reflect that it is the same random effect for each individual, you do not get the same likelihood (not even close). This extends to any other capture history that is represented by more than one animal.

A solution to this problem would have important speedup consequences. I don't think this problem is unique to capture-recapture; it could be important for any large data set with far fewer unique responses than records. Any ideas would be appreciated.

I'd like to avoid using something like Gauss-Hermite integration because I also often have other random effects, such as time or space. For models with time random effects only, the accumulation works well because they are the same across all individuals and you don't have to weight the likelihood contribution for the random effect.
Ideas?
regards --jeff
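A short sketch (my notation, not from the thread) of why weighting the dnorm() contribution does not work. Suppose n animals share a capture history with likelihood contribution L(u) given an individual effect u, and φ is the standard normal density (the dnorm(ui,0,1,true) term). Each animal has its own independent u_i, but their data are identical, so the correct marginal contribution is

$$\Big[\int L(u)\,\phi(u)\,du\Big]^{n}, \qquad\text{i.e.}\qquad n\,\log\int L(u)\,\phi(u)\,du .$$

Multiplying the joint log-density by n inside the template instead gives, up to the Laplace approximation,

$$\log\int L(u)^{n}\,\phi(u)^{n}\,du ,$$

and weighting only the data term gives $\log\int L(u)^{n}\,\phi(u)\,du$; neither equals the expression above. The frequency has to multiply the likelihood after the random effect has been integrated out, which is why weighting f$fn() on the R side (as sketched above) works for random effects while weighting inside the .cpp file does not.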