Type inference starts to fail to infer argument type when making return type of method more precise #3478

Open · mkustermann opened this issue Nov 23, 2023 · 6 comments
Labels: inference, question (Further information is requested)

Comments

@mkustermann (Member)

See this example:

class A<T> {
  A();
}

class B<T> implements A<T> {
  B();
}

main() {
  final obj = foo<int>() ?? A();
  final obj2 = fooMorePreciseReturn<int>() ?? A();
  print(obj.runtimeType);
  print(obj2.runtimeType);
}

A<T>? foo<T>() => null;
B<T>? fooMorePreciseReturn<T>() => null;

Surprisingly, this will print

A<int>
A<dynamic>

So when a function like foo is changed to return a more precise type, type inference stops inferring that A() should be an A<int>(), even though the class relationship from B<T> to A<T> is known and the type argument could still be inferred to be int.

(Reported by @osa1.)

@chloestefantsova @lrhn @eernstg Is this expected behavior?

@eernstg (Member) commented Nov 23, 2023

Inference for an e1 ?? e2 expression will use NonNull(T), where T is the static type of e1, as the context type for e2 when there is no context type for the expression as a whole. It is a general property of our type inference that it is able to infer actual type arguments of supertypes of the types mentioned in the constraints, but not of subtypes. For example:

class A<X> {}
class B<X> implements A<X> {}

void main() {
  Object o = B<int>();
  if (o is A<int>) {
    o = B(); // Infer `B()` with context type `A<int>`.
    print(o.runtimeType); // `B<int>`.
  }
  if (o is B<int>) {
    o = A(); // Infer `A()` with context type `B<int>`.
    print(o.runtimeType); // `A<dynamic>`.
  }
}

So I'd say that it is expected. It may or may not be an appropriate choice to transfer information about the type arguments from the context type in the case where that context type will not be a supertype of the type of the target expression after inference. In the situation with context type B<int> above, the type of the expression will be A<S> for some S, and that's never a subtype of B<int>. Who says that we'd want an A<int> and not an A<String> when it isn't going to be a B<...> anyway?
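To spell out how this applies to the example at the top of the thread, here is a minimal sketch (repeating the declarations from the first comment) with the reasoning above written out in comments:

class A<T> {}
class B<T> implements A<T> {}

B<T>? fooMorePreciseReturn<T>() => null;

void main() {
  // There is no context type for the whole `??` expression, so the context
  // type used for `A()` is NonNull(B<int>?) = B<int>. No instantiation A<S>
  // is a subtype of B<int>, so the constraint contributes nothing and S
  // defaults to `dynamic`.
  final obj2 = fooMorePreciseReturn<int>() ?? A();
  print(obj2.runtimeType); // Prints: A<dynamic>
}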

Also, perhaps you have some additional comments on this, @stereotype441 or @leafpetersen?

@osa1 (Member) commented Nov 23, 2023

For more context, we discovered this while rolling google/protobuf.dart#903 to g3.

In that PR I updated return types of some generated methods from List<X> to PbList<X>, and PbList is a subtype of List.

The roll CL shows the breakage: cl/584816240.

Another related issue is that giving local types in this case sometimes generates omit_local_variable_types warnings even though the local types fix the type errors.

@mraleph (Member) commented Nov 23, 2023

This is a language issue, so I am going to transfer it to the dart-lang/language repo.

@mraleph transferred this issue from dart-lang/sdk on Nov 23, 2023

@osa1 (Member) commented Nov 23, 2023

Another related issue is that giving local types in this case sometimes generates omit_local_variable_types warnings even though the local types fix the type errors.

As an example, if I add a type to obj2 in @mkustermann's example I get an "unnecessary type annotation" warning, even though the annotation changes the runtime type of obj2.

void main() {
  final obj = foo<int>() ?? A();
  final A<int> obj2 = fooMorePreciseReturn<int>() ?? A();  // generates warning
  print(obj.runtimeType);
  print(obj2.runtimeType);
}

@lrhn (Member) commented Nov 23, 2023

The "unnecessary type annotation" - and "unnecessary cast" - warnings are fairly naive. They check whether the static type of the expression is the same as the type of the variable, or if the cast is an up-cast with the currently inferred types, but do not take into account whether removing the type annotation or cast would change that static type. (Because it doesn't run a second type inference on the same expression, which is what it would require to know the effect of removing the context type, or adding one in the case of removing a cast.)

Be ready to ignore "unnecessary type/cast..." warnings when they're wrong.
Or rewrite to avoid the least-upper-bound computation, which is usually the path of least resistance when types start acting up.
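
For concreteness, a sketch of two such rewrites against the example at the top of the thread (reusing its declarations; these are possible workarounds, not the only options):

class A<T> {}
class B<T> implements A<T> {}

B<T>? fooMorePreciseReturn<T>() => null;

void main() {
  // Option 1: pass the type argument explicitly to the fallback constructor,
  // so nothing has to be inferred from the context type.
  final a = fooMorePreciseReturn<int>() ?? A<int>();
  print(a.runtimeType); // Prints: A<int>

  // Option 2: keep the explicit variable type and silence the lint locally.
  // ignore: omit_local_variable_types
  final A<int> b = fooMorePreciseReturn<int>() ?? A();
  print(b.runtimeType); // Prints: A<int>
}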

@lrhn added the question (Further information is requested) and inference labels on Nov 23, 2023

@eernstg (Member) commented Nov 23, 2023

Could there be some kind of provenance on the DartType of an expression, indicating that it was influenced by the context type? This would not be part of the language per se, but the analyzer would be free to provide that information, and the linter could refer to it when considering T v = e;, and omit the warning about T when it matters.

It does seem useful to avoid all these false positives, especially when following the warning introduces bugs much later in the code (rather than just compile-time errors in the enclosing scope).
