
change the verbose param of optimize_prior! from a boolean to an integer? #125

Closed
pasq-cat opened this issue Oct 3, 2024 · 3 comments · Fixed by #127
Comments

pasq-cat (Member) commented Oct 3, 2024

At the moment, in the MLJ interface, the verbosity parameter accepted by MLJ must be translated into a boolean through

verbose = verbosity == 0 ? false : true

In accordance with MLJ, maybe we could rename the verbose param to "verbosity" and change it to an integer, so that in the future different levels of information can be displayed depending on the verbosity level set in MLJ. Otherwise I will keep things as they are.
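For illustration, here is a minimal sketch of what integer verbosity levels could look like. Everything in it (the function name, the stand-in objective, the level thresholds) is hypothetical, not the actual optimize_prior! internals:

```julia
# Hypothetical sketch of integer verbosity levels; the objective and the
# gradient step are stand-ins, not the LaplaceRedux implementation.
function optimize_prior_sketch!(state::Vector{Float64}; n_steps::Int = 100, verbosity::Int = 1)
    for i in 1:n_steps
        loss = sum(abs2, state)   # stand-in for the log marginal likelihood objective
        state .-= 0.02 .* state   # stand-in gradient step
        # 0 = silent, 1 = coarse progress, 2+ = per-step details (mirrors MLJ's convention)
        verbosity >= 2 && println("step $i: loss = $loss")
        verbosity == 1 && i % 50 == 0 && println("completed $i of $n_steps steps")
    end
    return state
end

optimize_prior_sketch!(randn(3); verbosity = 1)
```

The MLJ side could then pass its verbosity integer straight through instead of collapsing it to a boolean.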

By the way, since the training loop of optimize_prior! is not exposed, I cannot continue it in the MMI.update function; I can only restart it from zero with a new number of steps.
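For reference, a rough sketch of what continuation could look like if the loop (or a resumable variant of it) were exposed. Only the MMI.update signature below is the real MLJModelInterface contract; LaplaceModel, continue_optimization!, the fit_prior_nsteps bookkeeping, and the cache fields are placeholders:

```julia
import MLJModelInterface as MMI

# Hypothetical sketch: resume prior optimization in `update` rather than
# restarting from zero. Placeholder names, not the current LaplaceRedux API.
function MMI.update(model::LaplaceModel, verbosity, old_fitresult, old_cache, X, y)
    la = old_fitresult                                    # reuse the fitted Laplace object
    extra = model.fit_prior_nsteps - old_cache.steps_done # steps still to run
    if extra > 0
        # requires a training entry point that accepts a warm start
        continue_optimization!(la; n_steps = extra, verbosity = verbosity)
    end
    cache = (steps_done = old_cache.steps_done + max(extra, 0),)
    report = (prior_steps = cache.steps_done,)
    return la, cache, report
end
```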

pasq-cat (Member, Author) commented:

@pat-alt, before completing the interface there is also this.

pat-alt (Member) commented Oct 16, 2024

Totally agree regarding verbosity.

As for this:

By the way, since the training loop of optimize_prior! is not exposed, I cannot continue it in the MMI.update function; I can only restart it from zero with a new number of steps.

I'd have to think about it more. Worth having a separate issue for that?

pasq-cat (Member, Author) commented Oct 16, 2024

If, in your experience, LaplaceRedux only needs a few hundred steps in the majority of cases, then it's not really a problem and it's not worth the trouble.
If instead you think that for bigger problems the computational cost of restarting from zero may be significant, then it may be a good idea to expose the training function. From what I have seen, it doesn't seem to be a problem.
