Replace associated item bound vars with placeholders when projecting #86993
Conversation
@bors try @rust-timer queue
Awaiting bors try build completion. @rustbot label: +S-waiting-on-perf
⌛ Trying commit c63f1fe with merge 7baa78ec683fd14db4d4c1869dbef5cbbc5b774d...
☀️ Try build successful - checks-actions
Queued 7baa78ec683fd14db4d4c1869dbef5cbbc5b774d with parent fdfe819, future comparison URL.
Finished benchmarking try commit (7baa78ec683fd14db4d4c1869dbef5cbbc5b774d): comparison url. Summary: This change led to significant regressions 😿 in compiler performance.
If you disagree with this performance assessment, please file an issue in rust-lang/rustc-perf. Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR led to changes in compiler perf. Next Steps: If you can justify the regressions found in this perf run, please indicate this with @bors rollup=never
let (data, mapped_regions, mapped_types, mapped_consts) = match replaced {
    Some(r) => r,
    None => {
        bug!("{:?} {:?}", data, self.universes);
Oh... hmm, interesting. I didn't mean to leave this in. But since all the tests pass, maybe the only place we get passed escaping bound vars here is under polymorphization (and presumably there's no test for the intersection of polymorphization and GATs). The underlying problem still exists, though (normalize_erasing_regions). So I'm not sure whether it's better to just ICE if we get passed bound vars (which gives us test cases) or to fail and return an error (which might still allow compilation to succeed overall).
Thoughts @nikomatsakis?
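For context, here is a minimal sketch of the two options being weighed. All of the names (`Projection`, `Ty`, `NormalizeError`, `has_escaping_bound_vars`) are hypothetical stand-ins, not rustc's actual API:

```rust
// Hypothetical stand-in types; not rustc internals.
struct Projection {
    escaping_bound_vars: bool,
}

struct Ty;

#[derive(Debug)]
enum NormalizeError {
    EscapingBoundVars,
}

impl Projection {
    fn has_escaping_bound_vars(&self) -> bool {
        self.escaping_bound_vars
    }
}

fn normalize(projection: &Projection) -> Result<Ty, NormalizeError> {
    if projection.has_escaping_bound_vars() {
        // Option A: ICE (like the `bug!` above), which surfaces concrete
        // test cases whenever this path turns out to be reachable.
        // panic!("unexpected escaping bound vars");

        // Option B: fail gracefully and return an error, which might still
        // let compilation as a whole succeed.
        return Err(NormalizeError::EscapingBoundVars);
    }
    // The real normalization logic would go here.
    Ok(Ty)
}

fn main() {
    let p = Projection { escaping_bound_vars: true };
    assert!(normalize(&p).is_err());
}
```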
Force-pushed from 251990f to cf001dc
@bors r+
📌 Commit cf001dc has been approved by
☀️ Test successful - checks-actions |
|
LL | fn subscribe(&mut self, t : Box<dyn Subscriber<Input=<Self as Publisher>::Output> + 'a>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| ^^^^^^^^^
= note: expected `Publisher<'_>` | ||
found `Publisher<'_>` |
Sad about the output change in this case :(
Yeah, these changes are caused by liberating late bound regions before normalizing in wfcheck (see #87113). Honestly though, neither output is great, but it definitely got a bit worse.
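For readers without the test open, the diagnostic above comes from code roughly shaped like the following. This is a reconstruction from the snippet, not the exact test source, so names and details may differ:

```rust
trait Subscriber {
    type Input;
}

trait Publisher<'a> {
    type Output;
    fn subscribe(&mut self, t: Box<dyn Subscriber<Input = Self::Output> + 'a>);
}

struct MyStruct<'a> {
    sub: Box<dyn Subscriber<Input = u64> + 'a>,
}

impl<'a> Publisher<'a> for MyStruct<'a> {
    type Output = u64;

    // There is an implicit (elided) lifetime argument in
    // `<Self as Publisher>::Output`; it is not forced to be the impl's `'a`,
    // and both lifetimes print as `'_`, which is why the expected/found parts
    // of the note look identical.
    fn subscribe(&mut self, t: Box<dyn Subscriber<Input = <Self as Publisher>::Output> + 'a>) {
        self.sub = t;
    }
}
```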
Just a heads-up: This PR was flagged as causing regressions, up to 1.5% on style-servo-debug and wf-projection-stress-65510-{opt,check,debug}. The results overall seem somewhat mixed to me: on a bunch of other benchmarks there were (smallish) wins. So it may all just be noise.
Also, someone mentioned to me that when you do a performance run, it is good form to make some sort of note about the outcome, especially when it's been tagged as a significant regression, so that others reviewing the pull request in the future will know how to interpret what action, if any, was taken in response to the performance run. If you revise the PR to address perf warnings, say so! Or, if we're choosing to land a PR despite the perf warnings, that's fine, but it's good to know what the justification is.
Good point @pnkfelix. The perf run I had done roughly matched what I had seen in #85499, so I wasn't surprised (the thought was that the regression here would probably have canceled out the future regression there). On that PR, I tried digging into the regression I saw and really didn't get anywhere. I guess the couple of commits that I added after the perf run was done changed the perf results a good amount. I would definitely lean towards these being noise or something that can't be easily fixed without some detailed fine-tuning.
@jackh726 None of the test cases that showed regressions have historically been noisy, so while noise is still possible, it's unlikely. I'll go ahead and mark this as being addressed, since you said you looked into it and did not see any easy and obvious ways to address the regression. We're still working through what exactly the process should be in cases like this where the regression is fairly minor. @rustbot label +perf-regression-triaged
Fixes #76407
Fixes #76826
Similar to #85499, but more limited. This allows us to handle things like `for<'a> <T as Trait>::Assoc<'a>` but not `for<'a> <T as Trait<'a>>::Assoc`, unblocking GATs.

r? @nikomatsakis
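To make that distinction concrete, here is an illustrative sketch of the two shapes as where-clauses. The trait and function names are invented, and whether each bound is accepted depends on the compiler version and this PR's changes:

```rust
#![feature(generic_associated_types)] // GATs were still feature-gated when this PR landed

trait Gat {
    type Assoc<'a>;
}

trait LifetimeTrait<'a> {
    type Assoc;
}

// The shape this PR can now handle: the bound lifetime `'a` appears only as an
// argument to the associated type itself, i.e. `for<'a> <T as Trait>::Assoc<'a>`.
fn handled<T: Gat>()
where
    for<'a> <T as Gat>::Assoc<'a>: Sized,
{
}

// The shape this PR does not handle: the bound lifetime `'a` is an argument to
// the trait itself, i.e. `for<'a> <T as Trait<'a>>::Assoc`, so this bound may
// still be rejected.
fn not_handled<T>()
where
    T: for<'a> LifetimeTrait<'a>,
    for<'a> <T as LifetimeTrait<'a>>::Assoc: Sized,
{
}

fn main() {}
```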