When merging candidates for trait and normalization goals, we incompletely prefer candidates from the environment, i.e. `ParamEnv` and `AliasBound` candidates.
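In very rough terms, this preference can be sketched as follows. This is a hypothetical, heavily simplified model for illustration only, not the actual rustc code; the names `CandidateSource` and `merge_candidates` are made up here.

```rust
// Hypothetical sketch of "incomplete preference" when merging candidates.
// The real logic lives in rustc_trait_selection's solver assembly module.
#[derive(Debug, Clone, Copy, PartialEq)]
enum CandidateSource {
    ParamEnv,
    AliasBound,
    Impl,
}

// Prefer candidates from the environment (`ParamEnv` and `AliasBound`)
// over impl candidates. This is incomplete: we commit to the environment
// candidates even though other candidates may also apply and would
// result in different constraints.
fn merge_candidates(candidates: &[CandidateSource]) -> Vec<CandidateSource> {
    let env: Vec<_> = candidates
        .iter()
        .copied()
        .filter(|c| matches!(c, CandidateSource::ParamEnv | CandidateSource::AliasBound))
        .collect();
    if !env.is_empty() {
        env
    } else {
        candidates.to_vec()
    }
}

fn main() {
    // An alias bound competes with a blanket impl; the impl candidate
    // is discarded in favor of the environment.
    let merged = merge_candidates(&[CandidateSource::AliasBound, CandidateSource::Impl]);
    println!("{:?}", merged);
}
```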
Why `AliasBound` candidates

We have to prefer `AliasBound` candidates for opaques as self types so that the item bounds of opaque types are preferred over blanket impls:
```rust
fn impl_trait() -> impl Into<u32> {
    0u16
}

fn main() {
    // There are two possible types for `x`:
    // - `u32` by using the "alias bound" of `impl Into<u32>`
    // - `impl Into<u32>`, i.e. `u16`, by using `impl<T> From<T> for T`
    //
    // We infer the type of `x` to be `u32` here as it is highly likely
    // that this is expected by the user.
    let x = impl_trait().into();
    println!("{}", std::mem::size_of_val(&x));
}
```
If we did not prefer alias bounds, this example would break with the new solver. For this, we need to prefer them even if they constrain non-region inference variables. There are a few existing UI tests which depend on this behavior.
TODO: The issue is even bigger for opaque uses in the defining scope.
The same pattern also exists for projections. We should probably also prefer those:
```rust
trait Trait {
    type Assoc: Into<u32>;
}
impl<T: Into<u32>> Trait for T {
    type Assoc = T;
}

fn prefer_alias_bound<T: Trait>(x: T::Assoc) {
    // There are two possible types for `x`:
    // - `u32` by using the "alias bound" of `<T as Trait>::Assoc`
    // - `<T as Trait>::Assoc`, i.e. `u16`, by using `impl<T> From<T> for T`
    //
    // We infer the type of `x` to be `u32` here as it is highly likely
    // that this is expected by the user.
    let x = x.into();
    println!("{}", std::mem::size_of_val(&x));
}

fn main() {
    prefer_alias_bound::<u16>(0);
}
```
Why `ParamEnv` candidates
We need to prefer `ParamEnv` candidates which only guide region inference, as otherwise impls fail their WF check (see the related ui test):
```rust
trait Bar<'a> {}
impl<T> Bar<'static> for T {}

trait Foo<'a> {}

// We have to prove `T: Foo<'a>` given `T: Bar<'a>`. We have two candidates:
// - the `T: Bar<'a>` candidate from the environment
// - the `impl<T> Bar<'static> for T` impl candidate
//
// The concept of "preferring candidates with no constraints" breaks once we introduce
// regions, as the trait solver does not know whether a given constraint is a noop.
impl<'a, T: Bar<'a>> Foo<'a> for T {}

fn main() {}
```
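To see why the solver cannot tell whether a region constraint is a noop, consider the following illustration. It is not taken from the issue; the function names are hypothetical, and it only shows that equating `'a` with `'static` is harmless for some callers and a real restriction for others.

```rust
trait Bar<'a> {}
impl<T> Bar<'static> for T {}

// The blanket impl forces `'a == 'static`. For this caller that
// constraint is a noop, since `'a` is already `'static`:
fn noop_here<T: Bar<'static>>() {}

// For this caller, equating `'a` with `'static` would be a real
// restriction, so the `T: Bar<'a>` environment candidate has to win
// even though it "constrains" the region `'a`:
fn real_restriction_here<'a, T: Bar<'a>>() {}

fn main() {
    noop_here::<u32>();
    real_restriction_here::<u32>();
    println!("ok");
}
```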
The merging logic in question: https://github.com/rust-lang/rust/blob/1a449dcfd25143f7e1f6b6f5ddf1c12af361e2ff/compiler/rustc_trait_selection/src/solve/assembly/mod.rs#L778-L793