WIP: Improve metafor support #514
Conversation
You're the meta analysis expert. And in 95% of all cases, the things you did made sense. So I trust you here :-)

What Daniel said (:

What Mattan said ;)
Codecov Report

Coverage diff against `main`:

| | main | #514 | +/- |
|----------|--------|--------|--------|
| Coverage | 51.62% | 51.30% | -0.33% |
| Files | 119 | 119 | |
| Lines | 13964 | 14052 | +88 |
| Hits | 7209 | 7209 | |
| Misses | 6755 | 6843 | +88 |
This would be a very useful commit. Any updates?
I just saw you wanted to remove

I think we should remove the studies option from parameters and use `get_data()` for that
tl;dr: What do you think of my adding `include_interval` arguments to `get_data.rma()` rather than these being in `parameters::parameters()`?

@strengejacke @DominiqueMakowski @IndrajeetPatil @mattansb

I'm looking to improve our handling of meta-analysis objects, particularly from metafor. The first step here is to get various functions in insight working. Currently, I've improved missing data handling in `get_data.rma()` and made `get_formula.rma()` work for most cases.

I have also added arguments to include confidence intervals for effect sizes in `get_data.rma()`. This is to facilitate creation of forest plots. Currently, we have this functionality in `parameters::model_parameters.rma()`, but it doesn't make much sense to me there. The observed study effect sizes are not really model parameters; they are the observed raw data. Including them in `parameters()` also produces odd output for meta-analyses with predictors (vs. intercept-only models). So I suggest we move this functionality here. I think the reason they might be in `parameters()` is because they are in `broom::tidy()`, but this doesn't make much sense to me there either. That choice was probably made to facilitate making forest plots, but there are better ways to do that, and this structure isn't really all that useful for those plots anyway.
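For a concrete picture, here is a rough sketch of how this could look from the user's side. The `include_interval` argument is the name proposed in this PR (treat it as an assumption, not a released insight API); everything else is standard metafor/insight usage:

```r
library(metafor)
library(insight)

# Classic BCG vaccine example that ships with metafor
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)

# Proposed: return the study-level data together with confidence intervals
# for the observed effect sizes, ready to feed into a forest plot.
# `include_interval` is the argument proposed in this PR (assumption).
study_data <- get_data(res, include_interval = TRUE)
head(study_data)
```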
Still to come:

**insight**

- `get_predicted()`
  - In `get_predicted()`, we should have several prediction options, including study-level estimates (like `coef()` or `modelbased::estimate_grouplevel()` in an MLM).
  - Should the third option rather be in `parameters::model_parameters(..., group_level = TRUE)`, with options "random" and "total"/"blup"? That would be more similar to how we handled mixed effects models. (The "random" vs. "total"/"blup" distinction is sketched at the end of this comment.)
  - In addition, `get_predicted()` needs some work to handle metafor's unusual expected inputs to `predict()` (a model matrix rather than a model frame) and its unusual output (a single row for intercept-only models); see the sketch below.
  - The various flavors of rma models should be handled.
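For reference, a minimal sketch of the `predict()` behavior described above; this is plain metafor usage, nothing added by this PR:

```r
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# Intercept-only model: predict() returns a single row (the pooled effect),
# not one row per study.
res0 <- rma(yi, vi, data = dat)
predict(res0)

# Meta-regression: new data must be supplied as a model matrix via `newmods`,
# not as a data frame / model frame like most predict() methods expect.
res1 <- rma(yi, vi, mods = ~ ablat, data = dat)
predict(res1, newmods = cbind(ablat = c(20, 40)))
```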
**parameters**

- Update `model_parameters()`:
  - `include_studies`
**performance**

- Make `check_model()` work
  - Should generally look and work like with mixed effects models
**modelbased**

- Check `estimate_*()` functions to ensure they work (see above)

I could move this off into its own package (easymeta or similar) if desired.
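As a point of comparison for the "random" vs. "total"/"blup" question above: for `rma.uni` models, metafor already exposes both flavors of study-level estimates. This is existing metafor functionality, shown only as a sketch of what the two options would correspond to:

```r
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)

# "random": predicted random deviations of each study from the pooled effect
ranef(res)

# "total"/"blup": BLUPs of the study-specific true effects
# (pooled effect + random deviation), analogous to what coef() returns for
# a merMod object by combining fixed and random effects.
blup(res)
```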