Remarks on benchmark problems #3
Don't define the learning pipelines here. We can put that code in an InferOpt package extension relying on InferOptBenchmarks to train our pipelines. Imagine we define these benchmarks for a comparison between InferOpt and a competitor.
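A minimal sketch of how such an extension could be wired up with Julia's package-extension mechanism; the extension name, file path, and the `train_pipeline` helper are hypothetical, not part of any existing API:

```julia
# In InferOpt's Project.toml, InferOptBenchmarks would appear as a weak
# dependency that triggers the extension when both packages are loaded:
#
#   [weakdeps]
#   InferOptBenchmarks = "<uuid>"
#
#   [extensions]
#   InferOptBenchmarksExt = "InferOptBenchmarks"

# ext/InferOptBenchmarksExt.jl
module InferOptBenchmarksExt

using InferOpt
using InferOptBenchmarks

# Hypothetical helper: the InferOpt-specific (differentiable) training
# pipeline lives here, so the benchmarks package itself stays solver-agnostic
# and never depends on InferOpt.
function train_pipeline(benchmark; epochs=100)
    # build a differentiable layer around the benchmark's maximizer,
    # then train it with an InferOpt loss (details deliberately omitted)
end

end # module
```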
InferOptBenchmarks should not depend on InferOpt. Maybe we should rename it to "DecisionFocusedLearningBenchmarks". Take inspiration from https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl
In the docs and tests of this package, use a black box optimizer in the pipeline (no autodiff shenanigans) to learn without depending on InferOpt.
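As a rough illustration (not a prescribed API), here is a sketch using Optim.jl's derivative-free Nelder-Mead; the dataset layout, the linear cost model, and the squared decision loss are all assumptions made for the example:

```julia
using Optim  # derivative-free optimization: no autodiff, no InferOpt

# dataset   : vector of (x, y_true) pairs, x a feature vector, y_true the target decision
# maximizer : closure θ -> optimal decision for costs θ, e.g. what
#             generate_maximizer(benchmark) would return
# dim       : (output, input) size of the linear model W
function blackbox_train(dataset, maximizer, dim)
    function loss(w)
        W = reshape(w, dim)                      # flat vector -> weight matrix
        return sum(
            sum(abs2, maximizer(W * x) .- y) for (x, y) in dataset
        ) / length(dataset)                      # average decision mismatch
    end
    result = optimize(loss, zeros(prod(dim)), NelderMead())  # black-box search
    return reshape(Optim.minimizer(result), dim)
end
```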
Today's meeting notes:
Interface
- `generate_blabla`: what it does
- `generate_maximizer` does not return a differentiable layer
- `generate_maximizer`: the signature (args and kwargs) of the returned closure (a sketch follows below)
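To make the open question concrete, here is one possible shape for the returned closure, sketched for the shortest-path case; `shortest_path_solver` is a placeholder, and the exact args and kwargs are exactly what remains to be decided:

```julia
# generate_maximizer returns a plain combinatorial solver, not a
# differentiable layer; wrapping it for autodiff is left to downstream code.
function generate_maximizer(benchmark::ShortestPathBenchmark)
    # θ: cost vector of one instance; kwargs could carry instance-specific
    # data (e.g. the grid size) once instance sizes start to vary.
    return (θ; kwargs...) -> shortest_path_solver(benchmark, θ; kwargs...)
end
```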
Getting data
- Data sources
- Problem meaning
Varying instance sizes
- `ShortestPathBenchmark`: draw a random grid size from specified ranges of height and width, then see what you need in the interface to make it work (see the sketch below)
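A possible starting point for experimenting with this, where the field names and `generate_instance` are purely illustrative; it already hints at what the interface needs, since the maximizer closure must then be told each instance's size:

```julia
using Random

# Hypothetical sketch: the benchmark stores size ranges instead of a fixed grid.
Base.@kwdef struct ShortestPathBenchmark
    height_range::UnitRange{Int} = 5:20
    width_range::UnitRange{Int} = 5:20
end

# Each instance draws its own grid size, so dataset generation and the
# maximizer closure can no longer assume fixed dimensions and must receive
# them through args or kwargs.
function generate_instance(benchmark::ShortestPathBenchmark; rng=Random.default_rng())
    h = rand(rng, benchmark.height_range)
    w = rand(rng, benchmark.width_range)
    costs = rand(rng, h, w)  # random cell costs on an h × w grid
    return (; height=h, width=w, costs)
end
```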