Synthetic Semantic Segmentation Benchmark #820
Conversation
n_labels: int,
height: int,
width: int,
) -> Segmentation:
I think you should separate out the parts that generate synthetic data, because those parts are useful outside of benchmarking. For example, they could be used by something similar to the test_stability.py unit tests that detection and classification have. I have also used these to play around with the API and see what evaluations look like with different options. Very useful IMO, and worth making easy to do (a sketch follows below).
Not exactly sure where it should go: maybe in segmentation/synthetic.py or segmentation/test_data.py? Maybe right in segmentation/annotation.py itself? I'm open to other ideas.
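For illustration, here is a minimal sketch of what such a standalone generator might look like, matching the signature shown in the diff above. The Segmentation and Bitmask constructor arguments are assumptions about valor-lite's semantic segmentation API, not something confirmed in this thread:

```python
import numpy as np

# Assumed import path and types; adjust to wherever the helper ends up.
from valor_lite.semantic_segmentation import Bitmask, Segmentation


def generate_segmentation(
    n_labels: int,
    height: int,
    width: int,
) -> Segmentation:
    """Build one random synthetic Segmentation (sketch; constructor assumed)."""
    # Draw a random label map for the ground truth and the prediction,
    # then split each map into one boolean mask per label.
    gt_map = np.random.randint(0, n_labels, size=(height, width))
    pd_map = np.random.randint(0, n_labels, size=(height, width))
    return Segmentation(
        uid="synthetic_0",
        groundtruths=[
            Bitmask(mask=gt_map == i, label=str(i)) for i in range(n_labels)
        ],
        predictions=[
            Bitmask(mask=pd_map == i, label=str(i)) for i in range(n_labels)
        ],
    )
```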
Moved generate_segmentation to annotation.py!
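With the helper in annotation.py, a stability-style test in the spirit of the test_stability.py files mentioned above could then import it directly. The DataLoader, finalize, and evaluate names below are assumptions modeled on valor-lite's other task types, not confirmed by this PR:

```python
from valor_lite.semantic_segmentation import DataLoader  # assumed API

from valor_lite.semantic_segmentation.annotation import generate_segmentation


def test_random_segmentations():
    # Hypothetical stability check: push randomly generated data through
    # the full evaluation pipeline for a few sizes and make sure it runs
    # end to end without errors.
    loader = DataLoader()
    for n_labels in (1, 3, 10):
        loader.add_data([generate_segmentation(n_labels, height=64, width=64)])
    evaluator = loader.finalize()
    metrics = evaluator.evaluate()
    assert metrics is not None  # real tests would assert specific values
```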
Changes
- valor-lite.profiling
- valor-lite.semantic_segmentation.benchmark
- benchmarking.ipynb