
Confidence level in augment.lm #949

Closed
julian-urbano opened this issue Oct 11, 2020 · 8 comments · Fixed by #1191

Comments

@julian-urbano

Hi!

I was checking augment to compute intervals and wanted to use different confidence levels. I presumed this would be done via something like conf.level = 0.9, by analogy with how predict is used. However, the intervals are always the same, regardless of the confidence level.

I looked at the code for augment.lm and found this line here

df <- augment_newdata(x, data, newdata, se_fit, interval)

I think it should read as follows

df <- augment_newdata(x, data, newdata, se_fit, interval, ...)

to forward other arguments, such as conf.level in this case.
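To illustrate why forwarding matters, here is a minimal sketch in plain R (the wrapper names are hypothetical, not broom's actual internals):

```r
# A wrapper that does NOT pass `...` silently drops extra arguments,
# while one that forwards them lets predict() honor `level`.
fit <- lm(mpg ~ wt, data = mtcars)

interval_no_dots <- function(model) {
  predict(model, interval = "confidence")  # level is stuck at the 0.95 default
}

interval_with_dots <- function(model, ...) {
  predict(model, interval = "confidence", ...)  # forwards level, etc.
}

narrow <- interval_with_dots(fit, level = 0.50)
wide   <- interval_with_dots(fit, level = 0.99)

# The 99% interval is wider than the 50% interval at every point:
stopifnot(all(wide[, "upr"] - wide[, "lwr"] >
              narrow[, "upr"] - narrow[, "lwr"]))
```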

Thoughts?

@simonpcouch
Collaborator

This sounds like a good idea to me!

re: the comment in the augment_newdata source to "be *incredibly* careful that the ... are passed correctly", checking in: does this sound good to you, @alexpghayes?

@julian-urbano
Author

Just wanted to nudge this a bit. I'm updating my course materials a year later, and unfortunately this still isn't fixed.

@petebaker

Hi @julian-urbano, @simonpcouch and @alexpghayes

I would also like to nudge this topic along, though I'm not sure about the interest level. I think that broom is fantastic and a great step in the right direction. However, being able to set the level for a confidence interval but not for a prediction interval is a deal breaker for me, and I believe it is a trap for the unwary.

Recently, I highly recommended broom to colleagues, but once I found out the state of play w.r.t. prediction intervals, I un-recommended it just as highly. Here's why.

My colleagues wanted to study various prediction intervals (80%, 90%, etc.), but the prediction intervals from augment.lm are always 95%, which made it essentially useless for them. They couldn't work out how to change the level, so they abandoned augment.lm and rolled their own using predict.lm and its level argument.
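A sketch of that workaround with a toy model (the data and variable names here are illustrative, not my colleagues' actual code):

```r
# predict.lm honors `level`, so non-95% prediction intervals are easy:
fit <- lm(mpg ~ wt, data = mtcars)
newdata <- data.frame(wt = c(2.5, 3.5))

pi80 <- predict(fit, newdata, interval = "prediction", level = 0.80)
pi95 <- predict(fit, newdata, interval = "prediction", level = 0.95)

# The 95% prediction interval is wider than the 80% one:
stopifnot(all(pi95[, "lwr"] < pi80[, "lwr"]),
          all(pi95[, "upr"] > pi80[, "upr"]))
```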

The documentation around ... is somewhat messy, but the important problem is that ... is not actually implemented. The documentation warns that things can go wrong when options are passed via ..., which is perfectly sensible. (However, given that ... isn't implemented for prediction, it is simply wrong.) Am I missing something here?

I think broom is great, but if standard arguments like level for predict.lm simply aren't implemented when they appear to be available, that seems dangerous to me.

If maintainers are interested, then I will have a go at a pull request.

Comments most welcome!

Cheers
Peter

PS: Having produced both CIs and PIs for about 40 years now, I would say that prediction intervals for new observations are often more useful than confidence intervals for the mean response (unless the level is fixed at some arbitrary value, of course).

@alexpghayes
Collaborator

alexpghayes commented Apr 12, 2022 via email

@ptoche

ptoche commented Apr 27, 2022

This would definitely be a nice addition. I think most people use tidy() to set confidence levels rather than augment(), but I too tried it and noticed the confidence intervals weren't updated. It feels like a natural thing to try. Would be nice to have.

@julian-urbano
Author

julian-urbano commented Aug 31, 2022

So another year has passed, and I decided to have a go at this and create a pull request, as suggested by @alexpghayes.
However, when testing I get this error:

Failure (test-stats-lm.R:9:3): lm tidier arguments
length(not_allowed) == 0 is not TRUE

`actual`:   FALSE
`expected`: TRUE 
Arguments level to `augment.lm` must be listed in the argument glossary.
Backtrace:
 1. modeltests::check_arguments(augment.lm)
      at test-stats-lm.R:9:2
 2. testthat::expect_true(...)

which seems to refer to the glossary of acceptable argument names. It looks like augment does not currently accept anything related to confidence levels.

Does this mean that this issue will not be addressed anytime soon? How about forwarding ... to predict?

@alexpghayes
Collaborator

You'll need to make a second PR to alexpghayes/modeltests to add any new argument names to individual augment() methods.


This issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.

@github-actions github-actions bot locked and limited conversation to collaborators May 21, 2024