Benchmarks vs 0.6 in prep for 0.7 release [do not merge] #27030
Conversation
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Wow, overall that actually looks really good.
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
I updated the OP with the results of the latest benchmark. I scrolled through kinda quickly, so feel free to add things to it. While we are doing great on a bunch of micro benchmarks, it is a bit worrying that many of the larger examples (
What happened with
Reduced to:
0.6:
0.7:
OK, it is the new match options.
Ref #26731
Regression for `join`:

```julia
julia> chars = [rand(Char) for i in 1:10^4];
```

0.7:

```julia
julia> @btime join(chars, "");
  4.729 ms (30019 allocations: 1004.14 KiB)
```

0.6:

```julia
julia> @btime join(chars, "");
  771.538 μs (20 allocations: 66.67 KiB)
```

Relevant part of the profile:

Seems assigning to the
Probably fixed by #27685.
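For reference, the per-element allocation can be avoided by funneling every `Char` through a single `IOBuffer` so only one `String` is materialized at the end; this is just a sketch (the helper name `join_chars` is made up, not code from this PR):

```julia
# Hypothetical workaround sketch: accumulate every Char into one
# IOBuffer instead of allocating a temporary String per element.
function join_chars(chars::Vector{Char})
    io = IOBuffer()
    for c in chars
        print(io, c)   # encodes the Char directly into the buffer
    end
    return String(take!(io))
end

chars = [rand(Char) for _ in 1:10^4]
@assert join_chars(chars) == join(chars, "")
```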
Regression for
Without changes:
Adding
Damn Boxes.
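"Boxes" here presumably refers to Julia's closure-capture boxing; a minimal illustration (not the benchmark itself) of how reassigning a captured variable forces a `Core.Box`, and how a `let` block avoids it:

```julia
# x is reassigned after being captured, so the closure shares the
# binding through a Core.Box, defeating type inference inside f.
function boxed()
    x = 1
    f = () -> x
    x = 2           # reassignment after capture -> x is boxed
    return f()      # sees the updated value, 2
end

# Giving the closure its own binding with `let` snapshots the value,
# so nothing needs to be boxed.
function unboxed()
    x = 1
    f = let x = x
        () -> x
    end
    x = 2           # does not affect the closure's own x
    return f()      # 1
end
```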
Regression for spellcheck seems to be reduced to:

```julia
const ALPHABET = "abcdefghijklmnopqrstuvwxyz"

function edits(splits)
    s = splits[1:end-1]
    m = Matrix{String}(undef, length(ALPHABET), length(s))
    i = 1
    for (a, b) in s
        j = 1
        for c in ALPHABET
            m[j, i] = string(a, c, b[2:end])
            j += 1
        end
        i += 1
    end
    return m
end

word = "Foobar"
splits = [(word[1:i], word[i+1:end]) for i = 0:length(word)]
```

```julia
# 0.6
julia> @btime edits(splits)
  9.300 μs (288 allocations: 10.39 KiB)

# 0.7
julia> @btime edits(splits)
  75.803 μs (1380 allocations: 49.39 KiB)
```

Further reduced to:

```julia
# 0.6
julia> @btime string("foo", 'b', "ar")
  36.804 ns (1 allocation: 32 bytes)

# 0.7
julia> @btime string("foo", 'b', "ar")
  443.838 ns (8 allocations: 288 bytes)
```

Contrast with:

```julia
# 0.6
julia> @btime string("foo", "b", "ar")
  35.629 ns (1 allocation: 32 bytes)

# 0.7
julia> @btime string("foo", "b", "ar")
  32.122 ns (1 allocation: 32 bytes)
```

@StefanKarpinski it seems the replacement of

```julia
function string2(a::Union{String,AbstractChar}...)
    s = IOBuffer()
    for x in a
        print(s, x)
    end
    return String(resize!(s.data, s.size))
end
```

helps a bit:

```julia
julia> @btime string2("Foo", 'b', "ar")
  110.308 ns (3 allocations: 192 bytes)
```

but still 3x slower than before.
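For what it's worth, the same `IOBuffer` pattern can be written more compactly with `sprint`, which manages the buffer internally (just a sketch; `string3` is a made-up name, not an API):

```julia
# sprint allocates one IOBuffer, runs the do-block on it, and returns
# the accumulated contents as a String.
string3(a::Union{String,AbstractChar}...) = sprint() do io
    for x in a
        print(io, x)
    end
end

@assert string3("Foo", 'b', "ar") == "Foobar"
```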
I think BaseBenchmarks should be working now? @nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Has anyone looked at the
This should be nanoseconds.
The reason for the regression is quite simple. In 0.6, arrays and ranges were not considered
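Assuming the cut-off sentence is about equality between arrays and ranges, the current semantics can be checked directly:

```julia
r = 1:3
v = [1, 2, 3]

# A range and a vector with the same elements compare equal...
@assert r == v
@assert isequal(r, v)

# ...and isequal values are required to hash identically, which
# constrains how range-specific fast paths may be written.
@assert hash(r) == hash(v)
```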
Updated the OP with the latest result. Now, with deprecation warnings fixed and some performance fixes in, it is starting to look quite nice!
From ae8e95f#commitcomment-29628207 (update of #27539), there are a couple more potential regressions to investigate: mapr_access (array) - we appear to be faster than v0.6 by 2-3x, but slower than we were 2 months ago by up to 40x.
I don't think there are any nanosoldier tests for this. This change is due to #23528, which changed the definition from
Let's run an update after #27945 (comment). @nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Sigh, last one was against the wrong target... @nanosoldier
But it is interesting that this PR, which does nothing, gives so many memory improvements.
Er, it's nice to know that the version number bump will make several benchmarks much faster relative to master 😜
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
What's wrong with the Laplacian benchmark? Deprecation?
I think the
Some more fixes in (mostly to SIMD). Let's check again. @nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
Just to make sure nothing crept in: @nanosoldier
Issue to see how our benchmarks compare to the previous release.
Updated table:
Intersect:
E.g.:
Problem
I regard these as extra important because they tend to be non-toy examples.
Spellcheck analysis: #27030 (comment)
Shootout
Regex DNA is: #26731
Simd
Parsing
Latest benchmark at: https://github.com/JuliaCI/BaseBenchmarkReports/blob/a6f383f75c9cb427e1e93b97e559cd31189c1bdf/e1dfb30_vs_df1c1c9/report.md