Optimize integer decoding a bit #109867
Conversation
r? @oli-obk (rustbot has picked a reviewer for you, use `r?` to override)
@bors try @rust-timer queue
⌛ Trying commit e917431d24c42fc06111de885ba14992119fe1f0 with merge e08f6ec27f9f0602d0b1b4d149c542d8aa697236...
💔 Test failed - checks-actions
Force-pushed from e917431 to 69f33e5
@bors try @rust-timer queue
⌛ Trying commit 69f33e5 with merge 7198778f65090ef7dc1cdf01b81a5caf5f7e8746...
☀️ Try build successful - checks-actions
Finished benchmarking commit (7198778f65090ef7dc1cdf01b81a5caf5f7e8746): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count: This is a highly reliable metric that was used to determine the overall result at the top of this comment.
Max RSS (memory usage): This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
Cycles: This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
Interesting that tt-muncher goes a different direction. I'll look into this tomorrow.

I do not see anything being decoded. tt-muncher seems to be noisy (also see #109849 (comment)) and the relevant code here has inlining attributes, so it's inlining-sensitive, which is often noisy.
    let byte = slice[*position];
    *position += 1;
#[inline]
fn inner(slice: &[u8], position: &mut usize) -> Option<$int_ty> {
Why the `inner`? Couldn't you just use indexing syntax for the panicking gets? Do you want to minimize the number of panic paths to minimize code size?
Yes. The code that I'm porting this from is a must-never-panic parser, so these failures are hoisted to an outer layer with `?`. Then in picking through the existing codegen in rustc, I noticed that we have two separate calls to `core::panicking::panic_bounds_check` at the bottom of this function, because the use of panicking indexing requires rustc to preserve the line numbers in the panic message. But nobody cares exactly which line panicked, only that leb128 decoding failed (not even that, actually, but our hands are tied by the `Decodable` interface).
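For illustration, here is a minimal sketch of that structure (not the actual rustc macro; the function names are made up and it's specialized to u32 for readability). Every failure inside the inner function becomes `None`, so the outer wrapper contains a single panic site instead of one `panic_bounds_check` call per indexing expression:

```rust
// Sketch only: the inner function reports all failures as None, so the outer
// function has exactly one panic block for the whole read.
#[inline]
fn read_u32_leb128_inner(slice: &[u8], position: &mut usize) -> Option<u32> {
    let mut result = 0u32;
    let mut shift = 0u32;
    loop {
        // `get` turns an out-of-bounds read into None rather than a panic.
        let byte = *slice.get(*position)?;
        *position += 1;
        if byte & 0x80 == 0 {
            return Some(result | ((byte as u32) << shift));
        }
        result |= ((byte & 0x7f) as u32) << shift;
        shift += 7;
        if shift > 28 {
            return None; // malformed: a u32 never needs more than 5 bytes
        }
    }
}

fn read_u32_leb128(slice: &[u8], position: &mut usize) -> u32 {
    // The only panic path for the whole read.
    read_u32_leb128_inner(slice, position).expect("invalid leb128 integer")
}

fn main() {
    let data = [0xE5, 0x8E, 0x26]; // 624485 in unsigned leb128
    let mut pos = 0;
    assert_eq!(read_u32_leb128(&data, &mut pos), 624_485);
    assert_eq!(pos, 3);
}
```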
I'm pushing a commit to establish if this matters.
Well, looks like it matters 🙃
@bors try @rust-timer queue
Force-pushed from 8c4b4c6 to d8877cd
@bors try @rust-timer queue
⌛ Trying commit d8877cd with merge d56f32254643aa51b39424440e71cf24c1fe9b71...
☀️ Try build successful - checks-actions
Finished benchmarking commit (d56f32254643aa51b39424440e71cf24c1fe9b71): comparison URL.

Overall result: ❌ regressions - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count: This is a highly reliable metric that was used to determine the overall result at the top of this comment.
Max RSS (memory usage): This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
Cycles: This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
Local profiling indicates the u32 decoding slow path is unusually hot. I think this regresses because we are decoding a lot of big

What are the different perf runs for exactly? #109867 (comment) seemed pretty good on its own, orthogonally from changing how we encode
I am not convinced that these concerns are actually orthogonal. Here is the distribution of the lengths in bytes of leb128 encoded integers, encoded during a stage2 build:
Note that the change from 2 to 3 to 4 is nearly linear; we should expect something more like a power law. Based on that data I think we have 3 classes of integers that pass through leb128 encoding:

leb128 is only optimal for the last category, because small values are vastly more common. But for global counters, the most-used value ranges are multi-byte. These global counters apply disproportionate stress to the multi-byte leb128 path, making a cold path hotter than it should be. I have proved out this hypothesis here: #110050 (comment)

It is my opinion that these global counters shouldn't hit the leb128 code path at all, and that when they are shifted out of the leb128 code path, there is a reasonable chance we will find that a different structure of the leb128 code is optimal. Additionally, this PR shows a maximum of 1% improvement, mostly on incr-unchanged benchmarks.

If someone else wants to do up the PR for the first perf run here, that's fine by me. I'm just personally not excited about micro-optimizing some code and then re-micro-optimizing it shortly after.
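To make the size tradeoff concrete, here is a small standalone sketch (not rustc code) of how unsigned leb128 length grows with magnitude: values below 128 fit in one byte, counter-like values need a few, and a well-distributed 64-bit hash essentially always needs 10.

```rust
// Sketch: unsigned leb128 stores 7 payload bits per byte, so the encoded
// length is ceil(bits_needed / 7).
fn leb128_len(mut value: u64) -> usize {
    let mut len = 1;
    while value >= 0x80 {
        value >>= 7;
        len += 1;
    }
    len
}

fn main() {
    // Small enum-tag-like values, counter-like values, and a hash-like value.
    for v in [3u64, 200, 100_000, 50_000_000, 0x9e37_79b9_7f4a_7c15] {
        println!("{v:>20} -> {} byte(s)", leb128_len(v));
    }
}
```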
Ah yes, I hadn't considered that effect. Avoiding packed encoding for certain types seems reasonable to me.

So... how would a plan look here? It sounds like you have a plan for how to continue, but I'm not sure what experiments were run here or what exactly your desired path forward is.
My suggestion would be to first track down which data structures give us the >9-byte encodings. If we indeed have hashes, they should get a dedicated type that uses a better-suited encoding, like Fingerprint does. For the global counters, it depends on where they come from...
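As a rough illustration of that suggestion (this is not rustc's `Encodable` machinery; the type and function names here are invented), a hash-like newtype could bypass leb128 entirely and serialize as fixed-width little-endian bytes:

```rust
// Hypothetical hash newtype; real rustc types like Fingerprint have their own
// encoding impls. This is only a standalone sketch of the idea.
struct Hash64(u64);

fn encode_hash(out: &mut Vec<u8>, h: &Hash64) {
    // Always exactly 8 bytes, versus the 9-10 bytes leb128 needs for a
    // well-distributed 64-bit value.
    out.extend_from_slice(&h.0.to_le_bytes());
}

fn decode_hash(input: &[u8], position: &mut usize) -> Option<Hash64> {
    let bytes = input.get(*position..*position + 8)?;
    *position += 8;
    Some(Hash64(u64::from_le_bytes(bytes.try_into().ok()?)))
}

fn main() {
    let mut buf = Vec::new();
    encode_hash(&mut buf, &Hash64(0xdead_beef_cafe_f00d));
    let mut pos = 0;
    let h = decode_hash(&buf, &mut pos).unwrap();
    assert_eq!(h.0, 0xdead_beef_cafe_f00d);
}
```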
I think Zulip is a better place to discuss plans, because GitHub is already hiding a lot in this PR: https://rust-lang.zulipchat.com/#narrow/stream/247081-t-compiler.2Fperformance/topic/Integer.20encoding/near/347877677 |
Rewrite MemDecoder around pointers not a slice

This is basically rust-lang/rust#109910 but I'm being a lot more aggressive. The pointer-based structure means that it makes a lot more sense to absorb more complexity into `MemDecoder`; most of the diff is just complexity moving from one place to another.

The primary argument for this structure is that we only incur a single bounds check when doing multi-byte reads from a `MemDecoder`. With the slice-based implementation we need to do those with `data[position..position + len]`, which needs to account for `position + len` wrapping. It would be possible to dodge the first bounds check if we stored a slice that starts at `position`, but that would require updating the pointer and length on every read.

This PR also embeds the failure path in a separate function, which means that this PR should subsume all the perf wins observed in rust-lang/rust#109867.
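A simplified illustration of the bounds-check argument (this is not the actual `MemDecoder`; the structs and methods here are invented for the sketch): the slice-based read has to form `position..position + N`, which drags `position + N` overflow into the check, while a pointer/end pair needs only one comparison per read.

```rust
// Slice-plus-cursor style: indexing with `position..position + N` means the
// compiler must account both for `position + N` wrapping and for it exceeding
// the slice length.
struct SliceDecoder<'a> {
    data: &'a [u8],
    position: usize,
}

impl<'a> SliceDecoder<'a> {
    fn read_array<const N: usize>(&mut self) -> [u8; N] {
        let bytes = self.data[self.position..self.position + N]
            .try_into()
            .unwrap();
        self.position += N;
        bytes
    }
}

// Pointer-based style: one explicit check against the end pointer, then a raw
// copy that needs no further checks. A real constructor would establish the
// invariant that `current <= end` and both point into one allocation.
struct PtrDecoder {
    current: *const u8,
    end: *const u8,
}

impl PtrDecoder {
    fn read_array<const N: usize>(&mut self) -> [u8; N] {
        let remaining = self.end as usize - self.current as usize;
        assert!(N <= remaining, "unexpected end of data");
        let mut out = [0u8; N];
        // SAFETY: the assert above guarantees at least N readable bytes.
        unsafe {
            std::ptr::copy_nonoverlapping(self.current, out.as_mut_ptr(), N);
            self.current = self.current.add(N);
        }
        out
    }
}
```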
The changes from this PR that I determined to be useful have been subsumed by #110634.
(I wrote a leb128 decoder from scratch before and came up with a different design than what currently exists in rustc, so I'm dropping in what I came up with.)
Perf indicates that these changes help:

- Returning `Option::None` from two branches, so that we only have one panic block

Perf does not indicate that these changes help:

- `noalias`
This function is pretty big, but the small part at the top is hotter. Since we know an upper bound on the loop trip count, we can get LLVM to unroll it completely. So there is a potential design where we factor this into two functions: inlining a small body for 1- or 2-byte integers, and jumping to a much longer, loop-free body for multi-byte decoding.
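Sketching what that split might look like (purely illustrative, not a patch against rustc): a small `#[inline]` body handles the common 1- and 2-byte cases, and a `#[cold]` out-of-line function handles longer encodings. Since a `u32` takes at most 5 leb128 bytes, the long path's loop has a known bound that LLVM can unroll completely.

```rust
// Fast path: fully inlined, handles 1- and 2-byte encodings without a loop.
#[inline]
fn read_u32_leb128(slice: &[u8], position: &mut usize) -> Option<u32> {
    let b0 = *slice.get(*position)?;
    if b0 & 0x80 == 0 {
        *position += 1;
        return Some(b0 as u32);
    }
    let b1 = *slice.get(*position + 1)?;
    if b1 & 0x80 == 0 {
        *position += 2;
        return Some(((b1 as u32) << 7) | (b0 & 0x7f) as u32);
    }
    read_u32_leb128_long(slice, position, b0, b1)
}

// Slow path: out of line, bounded loop (at most 3 more bytes for a u32), so
// LLVM can unroll it completely.
#[cold]
#[inline(never)]
fn read_u32_leb128_long(
    slice: &[u8],
    position: &mut usize,
    b0: u8,
    b1: u8,
) -> Option<u32> {
    let mut result = ((b0 & 0x7f) as u32) | (((b1 & 0x7f) as u32) << 7);
    let mut shift = 14;
    let mut offset = *position + 2;
    loop {
        let byte = *slice.get(offset)?;
        offset += 1;
        result |= ((byte & 0x7f) as u32) << shift;
        if byte & 0x80 == 0 {
            *position = offset;
            return Some(result);
        }
        shift += 7;
        if shift > 28 {
            return None; // malformed: more than 5 bytes for a u32
        }
    }
}

fn main() {
    let data = [0x7f, 0xE5, 0x8E, 0x26];
    let mut pos = 0;
    assert_eq!(read_u32_leb128(&data, &mut pos), Some(127)); // 1-byte fast path
    assert_eq!(read_u32_leb128(&data, &mut pos), Some(624_485)); // long path
    assert_eq!(pos, 4);
}
```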