Adds `@is_post_processing` macro #589
Conversation
inside-model-macros a bit clearer + a `PostProcessingContext` and macro to enable the user to perform conditional computation outside of inference
Pull Request Test Coverage Report for Build 8755557702
💛 - Coveralls
```diff
@@ -129,6 +129,7 @@ export AbstractVarInfo,
     unfix,
     # Convenience macros
     @addlogprob!,
+    @is_post_processing,
```
Suggested change:

```diff
-    @is_post_processing,
+    @is_generated_quantities,
```
```julia
of post-processing the inference results, e.g. making predictions or computing
generated quantities.
"""
struct PostProcessingContext{Ctx} <: AbstractContext
```
Suggested change:

```diff
-struct PostProcessingContext{Ctx} <: AbstractContext
+struct GeneratedQuantitiesContext{Ctx} <: AbstractContext
```
It looks like a helpful feature, but I'm not sure we want to promote it as the standard workflow. In my view, users should write separate functions that take the MCMCChains object as input and return generated quantities. But it might be hard to resist some syntactic sugar for convenience!
changed the title: `@is_post_processing` macro → `@is_generated_quantities` macro
changed the title: `@is_generated_quantities` macro → `@is_post_processing` macro
```diff
@@ -664,3 +664,66 @@ function fixed(context::FixedContext)
     # precedence over decendants of `context`.
     return merge(context.values, fixed(childcontext(context)))
 end
+
+""""
```
It might be better to move this new code and the existing `generated_quantities` functions into a new file, `generated_quantities.jl`, so it is self-contained.
Now that we can track any variable, [the post-processing computation could go] into

```julia
function f(s::NamedTuple)
    # ...
end
```

Then provide `generated_quantities(f::Function, c::MCMCChains)` as a convenience function to calculate compute-intensive generated variables, instead of optionally skipping them inside the model.
I'm very much against this. How is this better than using [...]? IMO this is not worth it compared to just using

```julia
@model demo() = x ~ Normal()
@model demo_for_post_inference() = (@submodel x = demo(); return f(x))
```

and then you can run inference on `demo`.
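A minimal end-to-end sketch of the two-model workflow described above (the sampler choice, chain length, and `f` are placeholders, and the exact variable naming produced by `@submodel` may differ between DynamicPPL versions):

```julia
using Turing, DynamicPPL

f(x) = x^2  # placeholder post-processing function

@model demo() = x ~ Normal()
@model demo_for_post_inference() = (@submodel x = demo(); return f(x))

# Run inference on the cheap model only...
chain = sample(demo(), NUTS(), 1000)

# ...then re-evaluate the post-processing model over the chain.
quantities = generated_quantities(demo_for_post_inference(), chain)
```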
It's good to know the limitations of [...]. For clarity, I am unsure of the motivation for a second model. [Could we instead write]

```julia
@model demo() = x ~ Normal()
generated_quantities(f∘demo(), chain::MCMCChains.Chain) # maybe: compose operator not implemented
```

If so, we can consider adding a new method, [...].
Even if we define this, [it is equivalent to]

```julia
map(f, generated_quantities(model, chain))
```

which makes me somewhat uncertain why we'd want to hide this behind a [...].
That works well and could be recommended as the standard workflow. I am closing this PR now, since it introduces additional syntax and complexity that can otherwise be avoided without sacrificing functionality.
NOTE: I'd love suggestions for different names though!
We're seeing more and more users making use of `generated_quantities` and the like, which means there are more and more use-cases where the user wants to exclude certain computations from the model during inference but include them during "post-processing" steps (can we find a better name?).

This PR adds a macro `@is_post_processing` (again, better name please <3) which can be used to conditionally perform computation only when not doing inference.
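The PR's original example did not survive in this copy; a minimal sketch of the intended usage, assuming the macro returns `true` when the model is evaluated under the PR's `PostProcessingContext` (the model body, the call syntax `@is_post_processing()`, and `expensive_summary` are all hypothetical):

```julia
@model function demo()
    x ~ Normal()
    # Skip expensive work during sampling; only run it when the model is
    # re-evaluated for post-processing, e.g. via `generated_quantities`.
    if @is_post_processing()
        return expensive_summary(x)  # hypothetical expensive computation
    end
    return nothing
end
```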
This topic has come up on multiple occasions: #510, #94 (comment), and definitely other places too. I know there's generally a reluctance to rely on additional macros for these purposes, but AFAIK there's no better solution, so at some point, e.g. now, I think we gotta bite the bullet and get something done.
And I don't think making the user add explicit arguments to the model indicating whether they're in "post-processing" mode or inference mode is a good way to go, e.g. because it doesn't play well with `@submodel`: this "are we inference-ing" argument would need to be passed down through aaaall the models, which is annoying. Doing it with contexts, as we do in this PR, nesting of models, etc. just works.