
Optimize integer decoding a bit #109867

Closed
wants to merge 2 commits

Conversation

saethlin (Member) commented Apr 2, 2023

(I previously wrote a leb128 decoder from scratch and came up with a different design than what currently exists in rustc, so I'm dropping in what I came up with.)

Perf indicates that these changes help:

  • Wrapping the decoding body in an inner function that doesn't panic but returns Option::None from two branches, so that we only have one panic block (see the sketch below)

Perf does not indicate that these changes help:

  • Batching updates to the position instead of trying to rely on noalias

This function is pretty big, but the small part at the top is the hottest. Since we know an upper bound on the loop trip count, we can get LLVM to unroll it completely. So there is a potential design where we factor this into two functions: a small inlined body for 1- or 2-byte integers, and a jump to a much longer, loop-free body for multi-byte decoding.
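For illustration, here is a minimal sketch of that shape for a u32 decoder. This is not the PR's actual code; the function name and error message are invented for the example:

```rust
/// Sketch: a leb128 u32 decoder whose inner function never panics.
/// Both failure branches return None, so the outer function needs
/// exactly one panic block.
#[inline]
pub fn read_u32_leb128(slice: &[u8], position: &mut usize) -> u32 {
    #[inline]
    fn inner(slice: &[u8], position: &mut usize) -> Option<u32> {
        let mut result = 0u32;
        let mut shift = 0u32;
        // A u32 needs at most 5 leb128 bytes; this known upper bound
        // on the trip count is what lets LLVM unroll the loop fully.
        for _ in 0..5 {
            let byte = *slice.get(*position)?; // failure branch 1
            *position += 1;
            result |= u32::from(byte & 0x7f) << shift;
            if byte & 0x80 == 0 {
                return Some(result);
            }
            shift += 7;
        }
        None // failure branch 2: continuation bit still set at max length
    }
    // The single panic path shared by both failures.
    inner(slice, position).expect("invalid leb128")
}

fn main() {
    let mut pos = 0;
    // 300 encodes as [0xAC, 0x02].
    assert_eq!(read_u32_leb128(&[0xAC, 0x02], &mut pos), 300);
    assert_eq!(pos, 2);
}
```

The single `expect` is the one panic block the outer function needs, and the fixed `0..5` bound is what lets LLVM fully unroll the loop.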

rustbot (Collaborator) commented Apr 2, 2023

r? @oli-obk

(rustbot has picked a reviewer for you, use r? to override)

rustbot added the S-waiting-on-review and T-compiler labels (Apr 2, 2023)
saethlin (Member, Author) commented Apr 2, 2023

@bors try @rust-timer queue


rustbot added the S-waiting-on-perf label (Apr 2, 2023)
bors (Contributor) commented Apr 2, 2023

⌛ Trying commit e917431d24c42fc06111de885ba14992119fe1f0 with merge e08f6ec27f9f0602d0b1b4d149c542d8aa697236...

bors (Contributor) commented Apr 2, 2023

💔 Test failed - checks-actions

bors added the S-waiting-on-author label and removed the S-waiting-on-review label (Apr 2, 2023)

saethlin (Member, Author) commented Apr 2, 2023

@bors try @rust-timer queue


bors (Contributor) commented Apr 2, 2023

⌛ Trying commit 69f33e5 with merge 7198778f65090ef7dc1cdf01b81a5caf5f7e8746...

bors (Contributor) commented Apr 3, 2023

☀️ Try build successful - checks-actions
Build commit: 7198778f65090ef7dc1cdf01b81a5caf5f7e8746


rust-timer (Collaborator) commented:

Finished benchmarking commit (7198778f65090ef7dc1cdf01b81a5caf5f7e8746): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|:----------------------------|------:|:---------------|------:|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | 0.9%  | [0.4%, 1.2%]   | 8     |
| Improvements ✅ (primary)   | -0.6% | [-1.1%, -0.3%] | 72    |
| Improvements ✅ (secondary) | -0.5% | [-0.8%, -0.3%] | 30    |
| All ❌✅ (primary)          | -0.6% | [-1.1%, -0.3%] | 72    |

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|:----------------------------|------:|:---------------|------:|
| Regressions ❌ (primary)    | 2.3%  | [2.3%, 2.3%]   | 1     |
| Regressions ❌ (secondary)  | 2.5%  | [2.5%, 2.5%]   | 1     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -     | -              | 0     |
| All ❌✅ (primary)          | 2.3%  | [2.3%, 2.3%]   | 1     |

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|:----------------------------|------:|:---------------|------:|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -0.9% | [-0.9%, -0.9%] | 1     |
| All ❌✅ (primary)          | -     | -              | 0     |

rustbot added the perf-regression label and removed the S-waiting-on-perf label (Apr 3, 2023)
saethlin (Member, Author) commented Apr 3, 2023

Interesting that tt-muncher goes in a different direction. I'll look into this tomorrow.

Noratrieb (Member) commented:

```
23,321,538  ???:<rustc_parse::parser::Parser>::parse_token_tree
 4,155,531  ???:<alloc::rc::Rc<alloc::vec::Vec<rustc_ast::tokenstream::TokenTree>> as core::ops::drop::Drop>::drop
```

I do not see anything being decoded here. tt-muncher seems to be noisy (also see #109849 (comment)), and the relevant code has inlining attributes, so it is inlining-sensitive, which is often a source of noise.

A review thread on these lines of the diff:

```rust
let byte = slice[*position];
*position += 1;
```

```rust
#[inline]
fn inner(slice: &[u8], position: &mut usize) -> Option<$int_ty> {
```

Reviewer (Member) commented: Why the inner function? Couldn't you just use indexing syntax for the panicking gets? Do you want to minimize the number of panic paths to minimize code size?

saethlin (Member, Author) replied Apr 3, 2023:

Yes. The code I'm porting this from is a must-never-panic parser, so these failures are hoisted to an outer layer with `?`. While picking through the existing codegen in rustc, I noticed that we have two separate calls to core::panicking::panic_bounds_check at the bottom of this function, because panicking indexing requires rustc to preserve the line numbers for the panic messages. But nobody cares exactly which line panicked, only that leb128 decoding failed (not even that, actually, but our hands are tied by the Decodable interface).
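To illustrate that codegen point with toy functions (not the rustc source): each panicking index site carries its own caller location, so it gets its own call to core::panicking::panic_bounds_check, while hoisting the failures through `Option` with `?` lets them share a single panic path.

```rust
// Two panicking index expressions: codegen emits two separate
// panic_bounds_check blocks, one per source location.
fn two_panic_blocks(slice: &[u8], a: usize, b: usize) -> u8 {
    slice[a] ^ slice[b]
}

// Both failures hoisted through Option: the `?`s return None, and the
// caller turns that into one panic with one shared message.
fn one_panic_block(slice: &[u8], a: usize, b: usize) -> u8 {
    fn inner(slice: &[u8], a: usize, b: usize) -> Option<u8> {
        Some(slice.get(a)? ^ slice.get(b)?)
    }
    inner(slice, a, b).expect("out of bounds")
}

fn main() {
    assert_eq!(two_panic_blocks(&[1, 2], 0, 1), 3);
    assert_eq!(one_panic_block(&[1, 2], 0, 1), 3);
}
```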

I'm pushing a commit to establish if this matters.

Reviewer (Member) replied:

Well, looks like it matters 🙃

saethlin (Member, Author) commented Apr 3, 2023

@bors try @rust-timer queue


rustbot added the S-waiting-on-perf label (Apr 3, 2023)
saethlin (Member, Author) commented Apr 4, 2023

@bors try @rust-timer queue


rustbot added the S-waiting-on-perf label (Apr 4, 2023)
bors (Contributor) commented Apr 4, 2023

⌛ Trying commit d8877cd with merge d56f32254643aa51b39424440e71cf24c1fe9b71...

bors (Contributor) commented Apr 4, 2023

☀️ Try build successful - checks-actions
Build commit: d56f32254643aa51b39424440e71cf24c1fe9b71


rust-timer (Collaborator) commented:

Finished benchmarking commit (d56f32254643aa51b39424440e71cf24c1fe9b71): comparison URL.

Overall result: ❌ regressions - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

|                             | mean | range        | count |
|:----------------------------|-----:|:-------------|------:|
| Regressions ❌ (primary)    | 1.7% | [0.3%, 4.0%] | 108   |
| Regressions ❌ (secondary)  | 1.4% | [0.4%, 3.5%] | 54    |
| Improvements ✅ (primary)   | -    | -            | 0     |
| Improvements ✅ (secondary) | -    | -            | 0     |
| All ❌✅ (primary)          | 1.7% | [0.3%, 4.0%] | 108   |

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean  | range          | count |
|:----------------------------|------:|:---------------|------:|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -2.5% | [-2.5%, -2.5%] | 1     |
| Improvements ✅ (secondary) | -     | -              | 0     |
| All ❌✅ (primary)          | -2.5% | [-2.5%, -2.5%] | 1     |

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                             | mean | range        | count |
|:----------------------------|-----:|:-------------|------:|
| Regressions ❌ (primary)    | 1.9% | [1.1%, 3.7%] | 48    |
| Regressions ❌ (secondary)  | 2.1% | [1.5%, 2.8%] | 16    |
| Improvements ✅ (primary)   | -    | -            | 0     |
| Improvements ✅ (secondary) | -    | -            | 0     |
| All ❌✅ (primary)          | 1.9% | [1.1%, 3.7%] | 48    |

rustbot removed the S-waiting-on-perf label (Apr 4, 2023)
saethlin (Member, Author) commented Apr 4, 2023

Local profiling indicates that the u32 decoding slow path is unusually hot.

I think this regresses because we are decoding a lot of big u32s. These might descend from a type that is bit-packing into a u32. If that is the case, it would be significantly more efficient to not use leb128 for those (a quick sketch follows below). The same goes for hashes.
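A minimal sketch of why that would help, using an invented bit-packed layout (this is not rustc's encoder): a value whose high bits are occupied always takes the maximum 5 leb128 bytes, while a fixed-width encoding takes exactly 4 and branches on nothing.

```rust
// Straightforward leb128 encoder for u32, 7 bits per byte.
fn leb128_encode_u32(mut value: u32, out: &mut Vec<u8>) {
    loop {
        if value < 0x80 {
            out.push(value as u8);
            return;
        }
        out.push((value as u8 & 0x7f) | 0x80);
        value >>= 7;
    }
}

fn main() {
    // Hypothetical bit-packed value: 4 flag bits in the top, an id below.
    let packed: u32 = (0b1010 << 28) | 12_345;

    let mut leb = Vec::new();
    leb128_encode_u32(packed, &mut leb);
    assert_eq!(leb.len(), 5); // leb128 hits its worst case

    assert_eq!(packed.to_le_bytes().len(), 4); // fixed-width: always 4 bytes
}
```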

oli-obk (Contributor) commented Apr 4, 2023

What exactly are the different perf runs for?

#109867 (comment) seemed pretty good on its own, orthogonal to changing how we encode u32s.

saethlin (Member, Author) commented Apr 7, 2023

I am not convinced that these concerns are actually orthogonal.

Here is the distribution of the lengths, in bytes, of leb128-encoded integers encoded during a stage2 build:

```
(rank)      count  (share, cumulative): encoded length in bytes
(  1)   295764927  (74.0%,  74.0%): 1
(  2)    59918634  (15.0%,  89.0%): 2
(  3)    37650848  ( 9.4%,  98.4%): 3
(  4)     4110796  ( 1.0%,  99.5%): 4
(  5)     1069878  ( 0.3%,  99.7%): 10
(  6)      987847  ( 0.2%, 100.0%): 9
(  7)       50884  ( 0.0%, 100.0%): 19
(  8)       17557  ( 0.0%, 100.0%): 18
(  9)       13337  ( 0.0%, 100.0%): 8
( 10)        8752  ( 0.0%, 100.0%): 5
( 11)         402  ( 0.0%, 100.0%): 17
( 12)          22  ( 0.0%, 100.0%): 7
( 13)          16  ( 0.0%, 100.0%): 6
( 14)           2  ( 0.0%, 100.0%): 16
```

Note that the falloff from 2-byte to 3-byte to 4-byte encodings is nearly linear. If these were naturally small values, we should expect something more like a power law, where each additional byte of length is dramatically rarer than the last.

Based on that data, I think we have 3 classes of integers that pass through leb128 encoding:

  • Hashes, or something bit-packing into high bits (no idea how else we would have so many 19-byte numbers)
  • Global counters
  • Local counters and lengths of collections

leb128 is only optimal for the last category, where small values are vastly more common. But for global counters, the most-used value ranges are multi-byte. These global counters apply disproportionate stress to the multi-byte leb128 path, making a cold path hotter than it should be; the sketch below shows how quickly a growing counter leaves the 1-byte range. I have proved out this hypothesis here: #110050 (comment)
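To make the counter point concrete, here is a small back-of-the-envelope sketch (an illustration, not from the PR): leb128 spends one byte per started 7-bit group, so a counter that grows into the hundreds of thousands spends most of its values in the 3-byte-and-up cases.

```rust
// leb128 length in bytes of a u64: one byte per started 7-bit group.
fn leb128_len(value: u64) -> u32 {
    let bits = 64 - value.leading_zeros().min(63); // treat 0 as 1 bit
    (bits + 6) / 7
}

fn main() {
    // A local counter or collection length is usually tiny:
    assert_eq!(leb128_len(90), 1);
    // A global counter that has reached the hundreds of thousands or
    // millions is firmly multi-byte for most of its range:
    assert_eq!(leb128_len(200_000), 3);
    assert_eq!(leb128_len(5_000_000), 4);
}
```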

It is my opinion that these global counters shouldn't hit the leb128 code path at all, and that when they are shifted out of the leb128 code path, there is a reasonable chance we will find that a different structure of the leb128 code is optimal.

Additionally, this PR shows a maximum of 1% improvement, mostly on incr-unchanged benchmarks. If someone else wants to write up the PR for the first perf run here, that's fine by me. I'm just not personally excited about micro-optimizing some code and then re-micro-optimizing it shortly after.

oli-obk (Contributor) commented Apr 7, 2023

> It is my opinion that these global counters shouldn't hit the leb128 code path at all, and that when they are shifted out of the leb128 code path, there is a reasonable chance we will find that a different structure of the leb128 code is optimal.

Ah yes, I hadn't considered that effect.

Avoiding packed encoding for certain types seems reasonable to me.

oli-obk (Contributor) commented Apr 7, 2023

So... what would a plan look like here? It does sound like you have a plan for how to continue, but I'm not sure what experiments were run here or what exactly your desired path forward is.

cjgillot (Contributor) commented Apr 7, 2023

My suggestion would be to first track down which data structures give us the >9-byte encodings.

If we indeed have hashes, they should get a dedicated type that uses a better-suited encoding, like Fingerprint does.

For the global counters, it depends on where they come from...

saethlin (Member, Author) commented Apr 8, 2023

I think Zulip is a better place to discuss plans, because GitHub is already hiding a lot in this PR: https://rust-lang.zulipchat.com/#narrow/stream/247081-t-compiler.2Fperformance/topic/Integer.20encoding/near/347877677

bors added a commit to rust-lang-ci/rust that referenced this pull request Apr 26, 2023
Rewrite MemDecoder around pointers not a slice

This is basically rust-lang#109910 but I'm being a lot more aggressive. The pointer-based structure means that it makes a lot more sense to absorb more complexity into `MemDecoder`, most of the diff is just complexity moving from one place to another.

The primary argument for this structure is that we only incur a single bounds check when doing multi-byte reads from a `MemDecoder`. With the slice-based implementation we need to do those with `data[position..position + len]`, which needs to account for `position + len` wrapping. It would be possible to dodge the first bounds check if we stored a slice that starts at `position`, but that would require updating the pointer and length on every read.

This PR also embeds the failure path in a separate function, which means that this PR should subsume all the perf wins observed in rust-lang#109867.
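For illustration, a rough sketch of the bounds-check difference that commit message describes, using a shrinking-slice stand-in rather than the actual pointer-based `MemDecoder`:

```rust
// Slice + position: the range index must handle `position + len`
// overflowing as well as the upper bound, so more checks survive.
fn read_bytes_slice<'a>(data: &'a [u8], position: &mut usize, len: usize) -> &'a [u8] {
    let bytes = &data[*position..*position + len];
    *position += len;
    bytes
}

// Shrinking slice: split_at performs a single `len <= remaining.len()`
// comparison, with no wrapping arithmetic involved.
fn read_bytes_remaining<'a>(remaining: &mut &'a [u8], len: usize) -> &'a [u8] {
    let (bytes, rest) = remaining.split_at(len);
    *remaining = rest;
    bytes
}

fn main() {
    let data = [1u8, 2, 3, 4];
    let mut pos = 0;
    assert_eq!(read_bytes_slice(&data, &mut pos, 2), &[1, 2]);

    let mut rem: &[u8] = &data;
    assert_eq!(read_bytes_remaining(&mut rem, 3), &[1, 2, 3]);
}
```

The actual PR uses raw pointers to avoid updating two fields on every read; the shrinking-slice version here is just the simplest safe way to see the single-check shape.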
RalfJung pushed a commit to RalfJung/miri that referenced this pull request Apr 26, 2023

Rewrite MemDecoder around pointers not a slice (same commit message as above)
saethlin (Member, Author) commented:

The changes in this PR that I deduced to be useful have been subsumed by #110634.

saethlin closed this Apr 30, 2023
saethlin deleted the integer-decoding branch April 30, 2023 18:59
RalfJung pushed a commit to RalfJung/rust-analyzer that referenced this pull request Apr 20, 2024

Rewrite MemDecoder around pointers not a slice (same commit message as above)
RalfJung pushed a commit to RalfJung/rust-analyzer that referenced this pull request Apr 27, 2024

Rewrite MemDecoder around pointers not a slice (same commit message as above)
Labels
  • perf-regression: Performance regression.
  • S-waiting-on-author: Status: This is awaiting some action (such as code changes or more information) from the author.
  • T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
9 participants