rewrite SortBenchmarks.jl #305
Conversation
@LilithHafner do you have any idea why SortBenchmarks.jl is failing on nightly?
Oh! The stack trace that I was looking for is here, thanks for pointing out the failure on nightly. JuliaLang/julia#47822 should fix it.
Is it possible to tell Nanosoldier to use this branch for a specific invocation before the PR merges?
No, Nanosoldier's BenchmarkJob (which @vtjnash currently admins) doesn't support such configuration. The option would need to be added to the BenchmarkJob struct (https://github.com/JuliaCI/Nanosoldier.jl/blob/542ba41bc1a72287991a0c9e99859f3e92cb2ca6/src/jobs/BenchmarkJob.jl#L43-L50), parsing code would need to be added to pick up an invocation flag (https://github.com/JuliaCI/Nanosoldier.jl/blob/542ba41bc1a72287991a0c9e99859f3e92cb2ca6/src/jobs/BenchmarkJob.jl#L52-L104), and that option would need to be used when checking out BaseBenchmarks (https://github.com/JuliaCI/Nanosoldier.jl/blob/master/src/jobs/BenchmarkJob.jl#L326-L349).
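For anyone curious about the shape of that change, here is a rough self-contained sketch. Every name below (the struct, the `benchmarks` flag, the helper functions) is invented for illustration and is not the actual Nanosoldier.jl code, which lives at the links above.

```julia
# Illustrative sketch only -- field, flag, and function names are hypothetical,
# not taken from Nanosoldier.jl.
struct BenchmarkJobSketch
    trigger_comment::String                 # the comment that launched the job
    benchmarks_ref::Union{String,Nothing}   # new: BaseBenchmarks branch/commit to check out
end

# Pick up an optional `benchmarks = "..."` keyword from the parsed trigger comment.
parse_benchmarks_ref(kwargs::Dict{Symbol,String}) = get(kwargs, :benchmarks, nothing)

# When checking out BaseBenchmarks, fall back to the default branch if no ref was given.
benchmarks_checkout_ref(job::BenchmarkJobSketch) = something(job.benchmarks_ref, "master")

# Example: a trigger comment that requested a specific BaseBenchmarks branch.
job = BenchmarkJobSketch("runbenchmarks(\"sort\")",
                         parse_benchmarks_ref(Dict(:benchmarks => "some-branch")))
@assert benchmarks_checkout_ref(job) == "some-branch"
```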
Oh well. Regardless, pending review of the technical details (e.g. labels for benchmark groups), I think it is fairly low risk to merge this, and I'd like to merge it because it will provide a much more robust automated search for performance regressions. For example, there are still multiple regressions detected by this PR's version of SortingBenchmarks between release-1.9 and release-1.8.
Bump. Nanosoldier's sorting results are unhelpful right now because they don't even touch the most common cases.
LGTM, too. @oscardssmith I don't have write access to JuliaCI. If you are waiting for additional approving reviews, that's okay, but if not, I'd appreciate it if you would merge this.
Greatly increase the diversity of benchmarks while reducing runtime to 0.6x of what it was before.
I crafted these benchmarks from personal experience, by building on the benchmarks I used in #47383, and by searching everything tagged with both sort and performance. I kept most of the existing benchmarks but significantly reduced their redundancy.
Something had to be lost in this change; what was lost is primarily benchmarks of sorting large arrays that are already sorted, reverse sorted, or all the same, with specified non-default algorithms.
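To give a sense of the general shape of such a benchmark group with diverse inputs, here is a minimal sketch written against BenchmarkTools. The group names, tags, input sizes, and labels are placeholders, not the actual contents of this PR's SortBenchmarks.jl.

```julia
# Minimal sketch, not the actual SortBenchmarks.jl from this PR: group names,
# tags, and input sizes are placeholders.
using BenchmarkTools, Random

const SUITE = BenchmarkGroup()
g = SUITE["sort"] = BenchmarkGroup(["sort", "sorting"])

Random.seed!(1)
n = 10_000
inputs = Dict(
    "random"         => rand(n),
    "already sorted" => sort(rand(n)),
    "reverse sorted" => sort(rand(n), rev = true),
    "all same"       => fill(1.0, n),
)

for (name, v) in inputs
    # Copy inside the benchmark to keep the sketch simple; the copy cost is
    # included in the measurement here.
    g["sort! forwards, $name"] = @benchmarkable sort!(copy($v))
end
```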