Sound type avoidance (hopefully!) #14026
Conversation
In particular, handle situations like `val x: (Int => Int) & (Int => Int) = x => x`, which arise in at least one test case during type inference after the avoidance fixes in this PR.
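A minimal standalone sketch of that shape (illustrative only, not the PR's actual test; the name `f` and the `@main` wrapper are invented, and it assumes a compiler that includes this fix):

```scala
// A lambda checked against an intersection of (identical) function types:
// the parameter type Int must be recovered from the conjuncts of the
// expected type rather than from a single function type.
val f: (Int => Int) & (Int => Int) = x => x

@main def intersectionDemo(): Unit =
  println(f(41) + 1) // prints 42
```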
test performance please
performance test scheduled: 1 job(s) in queue, 1 running.
Performance test finished successfully: Visit https://dotty-bench.epfl.ch/14026/ to see the changes. Benchmarks are based on merging with master (1003d7c)
test performance please
performance test scheduled: 1 job(s) in queue, 0 running.
Performance test finished successfully: Visit https://dotty-bench.epfl.ch/14026/ to see the changes. Benchmarks are based on merging with master (6b1a662)
We seem to get a consistent 2% slowdown for dotty and re2. Is there a way to recover this? 2% would take a LOT of effort to get back elsewhere. What's more, the other tests seem not to run at all at the moment. I don't see this PR in the labels. @anatoliykmetyuk can you check please?
I don't think we can infer that given the variance of the benchmarks from just two runs; e.g., if I display all the points in re2, I see about as many points above as below the points specific to this PR. However, we can try to measure some things objectively, like the number of type variables created and the maximum number of type variables present in the constraint at once, for which I get:
So, interestingly, the net effect of this PR is to slightly decrease the number of type variables we end up creating, even though avoidance sometimes requires creating fresh type variables; we do see the latter reflected in the larger maximum constraint size.
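A hypothetical sketch of how such counters could be collected (this is not the PR's actual measurement code; every name here is invented for illustration):

```scala
// Counters for type-variable statistics: `created` counts every fresh type
// variable, `live` tracks how many are currently uninstantiated, and
// `maxLive` records the peak, which loosely corresponds to the maximum
// constraint size discussed above.
object TypeVarStats:
  private var created = 0
  private var live = 0
  private var maxLive = 0

  def onCreate(): Unit =
    created += 1
    live += 1
    maxLive = maxLive max live

  def onInstantiate(): Unit =
    live -= 1

  def report: String = s"created=$created, maxLive=$maxLive"
```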
test performance please
performance test scheduled: 1 job(s) in queue, 0 running.
Constraint size could be an important factor here. Many constraint ops are linear in constraint size.
In fact, it would be good if we could also get average constraint size, or total number of constraint entries generated. |
Performance test finished successfully: Visit https://dotty-bench.epfl.ch/14026/ to see the changes. Benchmarks are based on merging with master (6b1a662)
I did some benchmarking locally, compiling the exact same set of compiler source files in both cases (unlike our benchmark infrastructure, which uses the source set of the compiler version being tested). I wasn't able to see a clear difference in either direction between master and this PR: on some runs master was faster, on others this PR was.
Otherwise LGTM.
Overall a nice simplification!
By definition, a skolem is neither a subtype nor a supertype of a different skolem, so regardless of the variance, we shouldn't return a new skolem when approximating an existing skolem. Fixing `derivedSkolemType` to not do this lets us remove a special case in `avoid`.
Note that the added test case still infers `Local` for the type of `a`, because avoidance doesn't handle type variables correctly and the nesting level checks are too coarse; this doesn't lead to an error only because the check for forward references in TreePickler is currently disabled. All these issues are fixed in later commits of this PR.
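A small runnable illustration of why distinct skolems are incomparable, using path-dependent types as an analogy (the class `Box` is invented for this sketch and is not the compiler's internal `SkolemType`):

```scala
// Each `Box` value carries a fixed but unknown type member T, playing the
// role of a skolem: b1.T and b2.T are unrelated, so substituting one
// "unknown" for another is unsound.
class Box:
  type T
  def put(x: T): Unit = ()
  def get: T = throw new NoSuchElementException

@main def skolemDemo(): Unit =
  val b1: Box = new Box { type T = Int }
  val b2: Box = new Box { type T = String }
  // b1.put(b2.get) // does not compile: b2.T is not known to be b1.T
  println("distinct unknown-but-fixed types are incomparable")
```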
Previously, the nesting level of a symbol was always equal to the nesting level of its owner incremented by one. But in a situation like i8900a.scala where we have `... val a = unwrap[?Outer]({ class Local ...`, the owner of both `?Outer` and `Local` is `a`, and so they ended up with the same level even though `Local` should not leak outside of the block that defines it (i8900a.scala compiled regardless due to the check for forward references in TreePickler being disabled; it is re-enabled later in this PR).

We rectify this by associating each scope with a level which is always greater than the level of the enclosing scope (repurposing the existing `Scope#nestingLevel` method, which wasn't used for anything); newly created symbols then simply take the level of the current scope (this required tweaking `typedCase` so that pattern symbols are typed in the scope where they end up being entered). Also add a `-Yprint-level` option for debugging level-related issues.
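A hedged sketch of the i8900a.scala shape described above (the `Inv`/`unwrap` helpers are assumptions for illustration, not necessarily the actual test code):

```scala
// `unwrap` introduces a type variable ?Outer whose owner is `a`; `Local` is
// also owned by `a` but is defined one scope deeper, inside the block.
// Without per-scope levels, ?Outer and Local end up at the same level, so
// nothing stops ?Outer from being instantiated to Local and leaking it.
class Inv[T](val x: T)
def unwrap[Outer](inv: Inv[Outer]): Outer = inv.x

val a = unwrap({
  class Local
  new Inv(new Local) // with sound avoidance, `a` gets a widened type, not Local
})
```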
The name was misleading since it was also used in `approximateWildcards`; the new getter also returns true in GADT mode instead of having `either` manually check both the mode and the variable (this does not affect the behavior of `approximateWildcards`, since GADT constraints never involve wildcards).
It turns out that the variance was flipped: when adding a constraint `P <: List[WildcardType(Int, Any)]`, we should over-constrain to `P <: List[Int]`, but previously we would under-constrain to `P <: List[Any]`, which would allow us later to infer, for example, `P := List[String]`. However, this logic needs to be flipped when inferring necessary constraints (which explains why the previous behavior was seemingly correct). There is no test case, but this will end up being important in later commits of this PR, where we re-use the same map to do more approximations.
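A hedged user-level analogy of the over- vs. under-constraining point (the `overConstrained`/`underConstrained` definitions are invented; the compiler's actual constraint handling is internal):

```scala
// Approximating the wildcard by its lower bound (Int) keeps the constraint
// sound but strong; approximating by the upper bound (Any) relaxes it and
// later admits instantiations the original constraint should rule out.
def overConstrained[P <: List[Int]](p: P): P = p   // plays the role of P <: List[Int]
def underConstrained[P <: List[Any]](p: P): P = p  // plays the role of P <: List[Any]

@main def wildcardDemo(): Unit =
  underConstrained(List("oops")) // accepted: P := List[String] sneaks in
  // overConstrained(List("oops")) // rejected, as the original constraint intends
  println("under-constraining admits List[String]")
```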
After improving `Inferencing#variances` to take the type variables appearing in the expected type into account, the special-casing based on `necessaryConstraintsOnly` provided by `approximateWildcards` turned out to be unnecessary. This change required tweaking the `-Ytest-pickler` logic to avoid a regression in tests/pos/i8802a.scala, where a widened skolem in a prefix led to a pretty-printing difference.
@odersky All comments should be addressed now, see https://github.com/lampepfl/dotty/compare/d2b6bd5df40e93ed457295ba7b3aa0f6d689bd60..629006b3e5c456d5073e5eda2fa9c39ddc32a135 for the diff.
This was added in scala#14026
I looked a bit closer at this to see whether I could make progress on any of the tests that would fail if level checking is enabled. The failing tests are:
But I could not make progress. What I observed is that each of these tests generated a lot of avoiding type variables, and that once that happened it was very hard to see what was going on. This makes me uneasy. It seems to be an influence that makes type inference more unpredictable. Is there a way we can do with fewer avoiding type variables? Maybe that could fix some of the issues?
Is enabling …
It definitely stress-tests type inference, but as far as I can tell this mostly reveals flaws in the existing code which could manifest themselves in other ways, like the fact that the order in which we instantiate type variables matters due to widening: #14494 (comment)
It's a difficult balance: we need type variables to be level-correct because we cannot delay level-checking until instantiation (we could get fooled by bad bounds), and we cannot create type variables at a level that guarantees we won't need to avoid them, or update the level of existing variant variables, without restricting expressiveness in seemingly arbitrary ways.
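A standard illustration of the "bad bounds" problem mentioned above (the traits `Lo`/`Hi` and method `fooled` are invented for this sketch, not taken from the PR):

```scala
// No value of type Lo & Hi can honestly exist, but a method may still assume
// one; under that assumption Any <: b.T and b.T <: Nothing both hold, so
// arbitrary conversions typecheck. This is why level checks cannot simply be
// postponed until instantiation time.
trait Lo { type T >: Any }
trait Hi { type T <: Nothing }

def fooled(b: Lo & Hi): String =
  val x: b.T = 42 // Int <: Any <: b.T by Lo's lower bound
  x               // b.T <: Nothing <: String by Hi's upper bound
```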
Maybe they are flaws, maybe they are just the right tradeoffs. We don't know until we have a version that passes all tests and improves on the status quo.
The Config flag (renamed to `checkedLevelsOnConstraints`) now picks between original level checking (as implemented in scala#14026) and instantiation checking.
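A hedged sketch of what such a compile-time switch in dotty's `Config` object might look like (only the flag name comes from the message above; the exact shape is an assumption):

```scala
object Config:
  /** If true, check levels when constraints are added (the original scheme
   *  from scala#14026); if false, only check levels when a type variable is
   *  instantiated. */
  inline val checkedLevelsOnConstraints = true
```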
This PR fixes #8900 (except for the GADT-related bugs, for which I'll open a separate issue); see each commit message for details, but the high-level overview is that: